Image Title

Search Results for Broadcom:

Kim Leyenaar, Broadcom | SuperComputing 22


 

(Intro music) >> Welcome back. We're LIVE here from SuperComputing 22 in Dallas Paul Gillin, for Silicon Angle in theCUBE with my guest host Dave... excuse me. And our, our guest today, this segment is Kim Leyenaar who is a storage performance architect at Broadcom. And the topic of this conversation is, is is networking, it's connectivity. I guess, how does that relate to the work of a storage performance architect? >> Well, that's a really good question. So yeah, I have been focused on storage performance for about 22 years. But even, even if we're talking about just storage the entire, all the components have a really big impact on ultimately how quickly you can access your data. So, you know, the, the switches the memory bandwidth, the, the expanders the just the different protocols that you're using. And so, and the big part of is actually ethernet because as you know, data's not siloed anymore. You have to be able to access it from anywhere in the world. >> Dave: So wait, so you're telling me that we're just not living in a CPU centric world now? >> Ha ha ha >> Because it is it is sort of interesting. When we talk about supercomputing and high performance computing we're always talking about clustering systems. So how do you connect those systems? Isn't that, isn't that kind of your, your wheelhouse? >> Kim: It really is. >> Dave: At Broadcom. >> It's, it is, it is Broadcom's wheelhouse. We are all about interconnectivity and we own the interconnectivity. You know, you know, years ago it was, 'Hey, you know buy this new server because, you know, we we've added more cores or we've got better memory.' But now you've got all this siloed data and we've got you know, we've got this, this stuff or defined kind of environment now this composable environments where, hey if you need more networking, just plug this in or just go here and just allocate yourself more. So what we're seeing is these silos really of, 'hey here's our compute, here's your networking, here's your storage.' And so, how do you put those all together? The thing is interconnectivity. So, that's really what we specialize in. I'm really, you know, I'm really happy to be here to talk about some of the things that that we do to enable high performance computing. >> Paul: Now we're seeing, you know, new breed of AI computers being built with multiple GPUs very large amounts of data being transferred between them. And the internet really has become a, a bottleneck. The interconnect has become a bottle, a bottleneck. Is that something that Broadcom is working on alleviating? >> Kim: Absolutely. So we work with a lot of different, there's there's a lot of different standards that we work with to define so that we can make sure that we work everywhere. So even if you're just a dentist's office that's deploying one server, or we're talking about these hyperscalers that are, you know that have thousands or, you know tens of thousands of servers, you know, we're working on making sure that the next generation is able to outperform the previous generation. Not only that, but we found that, you know with these siloed things, if, if you add more storage but that means we're going to eat up six cores using that it's not really as useful. So Broadcom's really been focused on trying to offload the CPU. So we're offloading it from, you know data security, data protection, you know, we're we do packet sniffing ourselves and things like that. So no longer do we rely on the CPU to do that kind of processing for us but we become very smart devices all on our own so that they work very well in these kind of environments. >> Dave: So how about, give, give us an example. I know a lot of the discussion here has been around using ethernet as the connectivity layer. >> Yes. >> You know, in in, in the past, people would think about supercomputing as exclusively being InfiniBand based. >> Ha ha ha. >> But give, give us an idea of what Broadcom is doing in the ethernet space. What, you know, what's what are the advantages of using ethernet? >> Kim: So we've made two really big announcements. The first one is our Tomahawk five ethernet switch. So it's a 400 gigi ethernet switch. And the other thing we announced too was our Thor. So we have, these are our network controllers that also support up to 400 gigi each as well. So, those two alone, it just, it's amazing to me how much data we're able to transfer with those. But not only that, but they're super super intelligent controllers too. And then we realized, you know, hey, we're we're managing all this data, let's go ahead and offload the CPU. So we actually adopted the Rocky Standards. So that's one of the things that puts us above InfiniBand is that ethernet is ubiquitous, it's everywhere. And InfiniBand is primarily just owned by one or two companies. And, and so, and it's also a lot more expensive. So ethernet is just, it's everywhere. And now with the, with the Rocky standards, we're working along with, it's, it's, it does what you're talking about much better than, you know predecessors. >> Tell us about the Rocky Standards. I'm not familiar with it. I'm sure some of our listeners are not. What is the Rocky standard? >> Kim: Ha ha ha. So it's our DNA over converged to ethernet. I'm not a Rocky expert myself but I am an expert on how to offload the CPU. And so one of the things it does is instead of using the CPU to transfer the data from, you know the user space over to the next, you know server when you're transferring it we actually will do it ourselves. So we'll handle it ourselves. We will take it, we will move it across the wire and we will put it in that remote computer. And we don't have to ask the CPU to do anything to get involved in that. So big, you know, it's a big savings. >> Yeah, I mean in, in a nutshell, because there are parts of the InfiniBand protocol that are essentially embedded in RDMA over converged ethernet. So... >> Right. >> So if you can, if you can leverage kind of the best of both worlds, but have it in an ethernet environment which is already ubiquitous, it seems like it's, kind of democratizing supercomputing and, and HPC and I know you guys are big partners with Dell as an example, you guys work with all sorts of other people. >> Kim: Yeah. >> But let's say, let's say somebody is going to be doing ethernet for connectivity, you also offer switches? >> Kim: We do, actually. >> So is that, I mean that's another piece of the puzzle. >> That's a big piece of the puzzle. So we just released our, our Atlas 2 switch. It is a PCIE Gen Five switch. And... >> Dave: What does that mean? What does Gen five, what does that mean? >> Oh, Gen Five PCIE, it's it's a magic connectivity right now. So, you know, we talk about the Sapphire Rapids release as well as the GENUWA release. I know that those, you know those have been talked about a lot here. I've been walking around and everybody's talking about it. Well, those enable the Gen Five PCIE interfaces. So we've been able to double the bandwidth from the Gen Four up to the Gen Five. So, in order to, to support that we do now have our Atlas two PCIE Gen Five switch. And it allows you to connect especially around here we're talking about, you know artificial intelligence and machine learning. A lot of these are relying on the GPU and the DPU that you see, you know a lot of people talking about enabling. So by in, you know, putting these switches in the servers you can connect multitudes of not only NVME devices but also these GPUs and these, these CPUs. So besides that we also have the storage component of it too. So to support that, we we just recently have released our 9,500 series HBAs which support 24 gig SAS. And you know, this is kind of a, this is kind of a big deal for some of our hyperscalers that say, Hey, look our next generation, we're putting a hundred hard drives in. So we're like, you know, so a lot of it is maybe for cold storage, but by giving them that 24 gig bandwidth and by having these mass 24 gig SAS expanders that allows these hyperscalers to build up their systems. >> Paul: And how are you supporting the HPC community at large? And what are you doing that's exclusively for supercomputing? >> Kim: Exclusively for? So we're doing the interconnectivity really for them. You know, you can have as, as much compute power as you want, but these are very data hungry applications and a lot of that data is not sitting right in the box. A lot of that data is sitting in some other country or in some other city, or just the box next door. So to be able to move that data around, you know there's a new concept where they say, you know do the compute where the data is and then there's another kind of, you know the other way is move the data around which is a lot easier kind of sometimes, but so we're allowing us to move that data around. So for that, you know, we do have our our tomahawk switches, we've got our Thor NICS and of course we got, you know, the really wide pipe. So our, our new 9,500 series HBA and RAID controllers not only allow us to do, so we're doing 28 gigabytes a second that we can trans through the one controller, and that's on protected data. So we can actually have the high availability protected data of RAID 5 or RAID 6, or RAID 10 in the box giving in 27 gigabytes a second. So it's, it's unheard of the latency that we're seeing even off of this too, we have a right cash latency that is sub 8 microseconds that is lower than most of the NVME drives that you see, you know that are available today. So, so you know we're able to support these applications that require really low latency as well as data protection. >> Dave: So, so often when we talk about the underlying hardware, it's a it's a game of, you know, whack-a-mole chase the bottleneck. And so you've mentioned PCIE five, a lot of folks who will be implementing five, gen five PCIE five are coming off of three, not even four. >> Kim: I know. >> So make, so, so they're not just getting a last generation to this generation bump but they're getting a two generations, bump. >> Kim: They are. >> How does that, is it the case that it would never make sense to use a next gen or a current gen card in an older generation bus because of the mismatch and performance? Are these things all designed to work together? >> Uh... That's a really tough question. I want to say, no, it doesn't make sense. It, it really makes sense just to kind of move things forward and buy a card that's made for the bus it's in. However, that's not always the case. So for instance, our 9,500 controller is a Gen four PCIE but what we did, we doubled the PCIE so it's a by 16, even though it's a gen four, it's a by 16. So we're getting really, really good bandwidth out of it. As I said before, you know, we're getting 28, 27.8 or almost 28 gigabytes a second bandwidth out of that by doubling the PCIE bus. >> Dave: But they worked together, it all works together? >> All works together. You can put, you can put our Gen four and a Gen five all day long and they work beautifully. Yeah. We, we do work to validate that. >> We're almost out our time. But I, I want to ask you a more, nuts and bolts question, about storage. And we've heard for, you know, for years of the aerial density of hard disk has been reached and there's really no, no way to excel. There's no way to make the, the dish any denser. What is the future of the hard disk look like as a storage medium? >> Kim: Multi actuator actually, we're seeing a lot of multi-actuator. I was surprised to see it come across my desk, you know because our 9,500 actually does support multi-actuator. And, and, and so it was really neat after I've been working with hard drives for 22 years and I remember when they could do 30 megabytes a second, and that was amazing. That was like, wow, 30 megabytes a second. And then, about 15 years ago, they hit around 200 to 250 megabytes a second, and they stayed there. They haven't gone anywhere. What they have done is they've increased the density so that you can have more storage. So you can easily go out and buy 15 to 30 terabyte drive, but you're not going to get any more performance. So what they've done is they've added multiple actuators. So each one of these can do its own streaming and each one of these can actually do their own seeking. So you can get two and four. And I've even seen a talk about, you know eight actuator per disc. I, I don't think that, I think that's still theory, but but they could implement those. So that's one of the things that we're seeing. >> Paul: Old technology somehow finds a way to, to remain current. >> It does. >> Even it does even in the face of new alternatives. Kim Leyenaar, Storage Architect, Storage Performance Architect at Broadcom Thanks so much for being here with us today. Thank you so much for having me. >> This is Paul Gillin with Dave Nicholson here at SuperComputing 22. We'll be right back. (Outro music)

Published Date : Nov 16 2022

SUMMARY :

And the topic of this conversation is, is So, you know, the, the switches So how do you connect those systems? buy this new server because, you know, we you know, new breed So we're offloading it from, you know I know a lot of the You know, in in, in the What, you know, what's And then we realized, you know, hey, we're What is the Rocky standard? the data from, you know of the InfiniBand protocol So if you can, if you can So is that, I mean that's So we just released So we're like, you know, So for that, you know, we do have our it's a game of, you know, So make, so, so they're not out of that by doubling the PCIE bus. You can put, you can put And we've heard for, you know, for years so that you can have more storage. to remain current. Even it does even in the with Dave Nicholson here

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Kim LeyenaarPERSON

0.99+

Dave NicholsonPERSON

0.99+

DavePERSON

0.99+

15QUANTITY

0.99+

PaulPERSON

0.99+

BroadcomORGANIZATION

0.99+

KimPERSON

0.99+

30 megabytesQUANTITY

0.99+

oneQUANTITY

0.99+

twoQUANTITY

0.99+

thousandsQUANTITY

0.99+

9,500QUANTITY

0.99+

28QUANTITY

0.99+

22 yearsQUANTITY

0.99+

six coresQUANTITY

0.99+

Paul GillinPERSON

0.99+

DellORGANIZATION

0.99+

fourQUANTITY

0.99+

DallasLOCATION

0.99+

24 gigQUANTITY

0.99+

two companiesQUANTITY

0.99+

first oneQUANTITY

0.99+

RockyORGANIZATION

0.98+

27.8QUANTITY

0.98+

todayDATE

0.98+

30 terabyteQUANTITY

0.98+

both worldsQUANTITY

0.98+

about 22 yearsQUANTITY

0.97+

two generationsQUANTITY

0.97+

each oneQUANTITY

0.97+

SuperComputing 22ORGANIZATION

0.97+

one controllerQUANTITY

0.97+

threeQUANTITY

0.96+

two really big announcementsQUANTITY

0.96+

250 megabytesQUANTITY

0.96+

one serverQUANTITY

0.94+

Gen fourCOMMERCIAL_ITEM

0.94+

up to 400 gigiQUANTITY

0.93+

Rocky standardsORGANIZATION

0.93+

tens of thousands of serversQUANTITY

0.93+

400 gigiQUANTITY

0.92+

around 200QUANTITY

0.92+

9,500 seriesQUANTITY

0.92+

excelTITLE

0.91+

9,500 seriesCOMMERCIAL_ITEM

0.9+

16QUANTITY

0.9+

InfiniBandORGANIZATION

0.89+

sub 8 microsecondsQUANTITY

0.89+

gen fourCOMMERCIAL_ITEM

0.89+

eight actuatorQUANTITY

0.89+

second bandwidthQUANTITY

0.88+

Atlas 2COMMERCIAL_ITEM

0.86+

GENUWAORGANIZATION

0.86+

ThorORGANIZATION

0.85+

fiveTITLE

0.85+

about 15 years agoDATE

0.84+

28 gigabytesQUANTITY

0.84+

Gen FiveCOMMERCIAL_ITEM

0.83+

27 gigabytes a secondQUANTITY

0.82+

Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22


 

(upbeat music) (logo swooshing) >> Good morning and welcome back to Dallas, ladies and gentlemen, we are here with theCUBE Live from Supercomputing 2022. David, my cohost, how are you doing? Exciting, day two, feeling good? >> Very exciting. Ready to start off the day. >> Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >> Thank you for having us. >> Thank you for having us. >> I'm excited that you're starting off the day because we've been hearing a lot of rumors about Ethernet as the fabric for HPC, but we really haven't done a deep dive yet during the show. You all seem all in on Ethernet. Tell us about that. Armando, why don't you start? >> Yeah, I mean, when you look at Ethernet, customers are asking for flexibility and choice. So when you look at HPC, InfiniBand's always been around, right? But when you look at where Ethernet's coming in, it's really our commercial in their enterprise customers. And not everybody wants to be in the top 500, what they want to do is improve their job time and improve their latency over the network. And when you look at Ethernet, you kind of look at the sweet spot between 8, 12, 16, 32 nodes, that's a perfect fit for Ethernet in that space and those types of jobs. >> I love that. Pete, you want to elaborate? >> Yeah, sure. I mean, I think one of the biggest things you find with Ethernet for HPC is that, if you look at where the different technologies have gone over time, you've had old technologies like, ATM, Sonic, Fifty, and pretty much everything is now kind of converged toward Ethernet. I mean, there's still some technologies such as InfiniBand, Omni-Path, that are out there. But basically, they're single source at this point. So what you see is that there is a huge ecosystem behind Ethernet. And you see that also the fact that Ethernet is used in the rest of the enterprise, is used in the cloud data centers, that is very easy to integrate HPC based systems into those systems. So as you move HPC out of academia into enterprise, into cloud service providers, it's much easier to integrate it with the same technology you're already using in those data centers, in those networks. >> So what's the state of the art for Ethernet right now? What's the leading edge? what's shipping now and what's in the near future? You're with Broadcom, you guys designed this stuff. >> Pete: Yeah. >> Savannah: Right. >> Yeah, so leading edge right now, got a couple things-- >> Savannah: We love good stage prop here on the theCUBE. >> Yeah, so this is Tomahawk 4. So this is what is in production, it's shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 terabytes per second. >> David: Okay. >> Which matches any other technology out there. Like if you look at say, InfinBand, highest they have right now that's just starting to get into production is 25.6 T. So state of the art right now is what we introduced, We announced this in August, This is Tomahawk 5, so this is 51.2 terabytes per second. So double the bandwidth, out of any other technology that's out there. And the important thing about networking technology is when you double the bandwidth, you don't just double the efficiency, actually, winds up being a factor of six efficiency. >> Savannah: Wow. >> 'Cause if you want, I can go into that, but... >> Why not? >> Well, what I want to know, please tell me that in your labs, you have a poster on the wall that says T five, with some like Terminator kind of character. (all laughs) 'Cause that would be cool. If it's not true, just don't say anything. I'll just... >> Pete: This can actually shift into a terminator. >> Well, so this is from a switching perspective. >> Yeah. >> When we talk about the end nodes, when we talk about creating a fabric, what's the latest in terms of, well, the nicks that are going in there, what speed are we talking about today? >> So as far as 30 speeds, it tends to be 50 gigabits per second. >> David: Okay. >> Moving to a hundred gig PAM-4. >> David: Okay. >> And we do see a lot of nicks in the 200 gig Ethernet port speed. So that would be four lanes, 50 gig. But we do see that advancing to 400 gig fairly soon, 800 gig in the future. But say state of the art right now, we're seeing for the end node tends to be 200 gig E based on 50 gig PAM-4. >> Wow. >> Yeah, that's crazy. >> Yeah, that is great. My mind is act actively blown. I want to circle back to something that you brought up a second ago, which I think is really astute. When you talked about HPC moving from academia into enterprise, you're both seeing this happen, where do you think we are on the adoption curve and sort of in that cycle? Armando, do you want to go? >> Yeah, well, if you look at the market research, they're actually telling you it's 50/50 now. So Ethernet is at the level of 50%, InfinBand's at 50%, right? >> Savannah: Interesting. >> Yeah, and so what's interesting to us, customers are coming to us and say, hey, we want to see flexibility and choice and, hey, let's look at Ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom, we have their chips in our lab, we their have switches in our lab. And really what we're trying to do is make it easy to simple and configure the network for essentially MPI. And so the goal here with our validated designs is really to simplify this. So if you have a customer that, hey, I've been InfiniBand but now I want to go Ethernet, there's going to be some learning curves there. And so what we want to do is really simplify that so that we can make it easy to install, get the cluster up and running and they can actually get some value out the cluster. >> Yeah, Pete, talk about that partnership. what does that look like? I mean, are you working with Dell before the T six comes out? Or you just say what would be cool is we'll put this in the T six? >> No, we've had a very long partnership both on the hardware and the software side. Dell's been an early adopter of our silicon. We've worked very closely on SI and Sonic on the operating system, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we have actually three different product lines within the switching group, within Broadcom, we've then gotten very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way when it comes to market, Dell can take it and deliver the exact features that they have in the current generation to their customers to have that continuity. And also they give us feedback on the next gen features they'd like to see again, in both the hardware and the software. >> So I'm fascinated by... I always like to know like what, yeah, exactly. Look, you start talking about the largest supercomputers, most powerful supercomputers that exist today, and you start looking at the specs and there might be two million CPUs, 2 million CPU cores. Exoflap of performance. What are the outward limits of T five in switches, building out a fabric, what does that look like? What are the increments in terms of how many... And I know it's a depends answer, but how many nodes can you support in a scale out cluster before you need another switch? Or what does that increment of scale look like today? >> Yeah, so this is 51.2 terabytes per second. Where we see the most common implementation based on this would be with 400 gig Ethernet ports. >> David: Okay. >> So that would be 128, 400 gig E ports connected to one chip. Now, if you went to 200 gig, which is kind of the state of the art for the nicks, you can have double that. So in a single hop, you can have 256 end nodes connected through one switch. >> Okay, so this T five, that thing right there, (all laughing) inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what's the form factor look like for where that T five sits? Is there just one in a chassis or you have.. What does that look like? >> It tends to be pizza boxes these days. What you've seen overall is that the industry's moved away from chassis for these high end systems more towardS pizza boxes. And you can have composable systems where, in the past you would have line cards, either the fabric cards that the line cards are plug into or interfaced to. These days what tends to happen is you'd have a pizza box and if you wanted to build up like a virtual chassis, what you would do is use one of those pizza boxes as the fabric card, one of them as the line card. >> David: Okay. >> So what we see, the most common form factor for this is they tend to be two, I'd say for North America, most common would be a 2RU, with 64 OSFP ports. And often each of those OSFP, which is an 800 gig E or 800 gig port, we've broken out into two 400 gig ports. >> So yeah, in 2RU, and this is all air cooled, in 2RU, you've got 51.2 T. We do see some cases where customers would like to have different optics and they'll actually deploy 4RU, just so that way they have the phase-space density. So they can plug in 128, say QSFP 112. But yeah, it really depends on which optics, if you want to have DAK connectivity combined with optics. But those are the two most common form factors. >> And Armando, Ethernet isn't necessarily Ethernet in the sense that many protocols can be run over it. >> Right. >> I think I have a projector at home that's actually using Ethernet physical connections. But, so what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center Ethernet, or is this RDMA over converged Ethernet? What Are we talking about? >> Yeah, so RDMA, right? So when you look at running, essentially HPC workloads, you have the NPI protocol, so message passing interface, right? And so what you need to do is you may need to make sure that that NPI message passing interface runs efficiently on Ethernet. And so this is why we want to test and validate all these different things to make sure that that protocol runs really, really fast on Ethernet. If you look at NPIs officially, built to, hey, it was designed to run on InfiniBand but now what you see with Broadcom, with the great work they're doing, now we can make that work on Ethernet and get same performance, so that's huge for customers. >> Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a looking into the crystal ball type because you essentially get to see the future knowing what people are trying to achieve moving forward. Talk to us about the future of Ethernet in HPC in terms of AI and ML, where do you think we're going to be next year or 10 years from now? >> You want to go first or you want me to go first? >> I can start, yeah. >> Savannah: Pete feels ready. >> So I mean, what I see, I mean, Ethernet, what we've seen is that as far as on, starting off of the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. >> That's impressive. >> Pete: Yeah. >> Nicely done, casual, humble brag there. That was great, I love that. I'm here for you. >> I mean, I think that's one of the benefits of Ethernet, is the ecosystem, is the trajectory the roadmap we've had, I mean, you don't see that in any of the networking technology. >> David: More who? (all laughing) >> So I see that, that trajectory is going to continue as far as the switches doubling in bandwidth, I think that they're evolving protocols, especially again, as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably focus still on RDMA, for the supercomputing, the AI/ML workloads. But we do see that as you have a mix of the applications running on these end nodes, maybe they're interfacing to the CPUs for some processing, you might use a different mix of protocols. So I'd say it's going to be doubling a bandwidth over time, evolution of the protocols. I mean, I expect that Rocky is probably going to evolve over time depending on the AI/ML and the HPC workloads. I think also there's a big change coming as far as the physical connectivity within the data center. Like one thing we've been focusing on is co-packed optics. So right now, this chip is, all the balls in the back here, there's electrical connections. >> How many are there, by the way? 9,000 plus on the back of that-- >> 9,352. >> I love how specific it is. It's brilliant. >> Yeah, so right now, all the SERDES, all the signals are coming out electrically based, but we've actually shown, we actually we have a version of Tomahawk 4 at 25.6 T that has co-packed optics. So instead of having electrical output, you actually have optics directly out of the package. And if you look at, we'll have a version of Tomahawk 5. >> Nice. >> Where it's actually even a smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. >> Wow. Cool. >> So I see there's the bandwidth, there's radix's increasing, protocols, different physical connectivity. So I think there's a lot of things throughout, and the protocol stack's also evolving. So a lot of excitement, a lot of new technology coming to bear. >> Okay, You just threw a carrot down the rabbit hole. I'm only going to chase this one, okay? >> Peter: All right. >> So I think of individual discreet physical connections to the back of those balls. >> Yeah. >> So if there's 9,000, fill in the blank, that's how many connections there are. How do you do that many optical connections? What's the mapping there? What does that look like? >> So what we've announced for Tomahawk 5 is it would have FR4 optics coming out. So you'd actually have 512 fiber pairs coming out. So basically on all four sides, you'd have these fiber ribbons that come in and connect. There's actually fibers coming out of the sides there. We wind up having, actually, I think in this case, we would actually have 512 channels and it would wind up being on 128 actual fiber pairs because-- >> It's miraculous, essentially. >> Savannah: I know. >> Yeah. So a lot of people are going to be looking at this and thinking in terms of InfiniBand versus Ethernet, I think you've highlighted some of the benefits of specifically running Ethernet moving forward as HPC which sort of just trails slightly behind super computing as we define it, becomes more pervasive AI/ML. What are some of the other things that maybe people might not immediately think about when they think about the advantages of running Ethernet in that environment? Is it about connecting the HPC part of their business into the rest of it? What are the advantages? >> Yeah, I mean, that's a big thing. I think, and one of the biggest things that Ethernet has again, is that the data centers, the networks within enterprises, within clouds right now are run on Ethernet. So now, if you want to add services for your customers, the easiest thing for you to do is the drop in clusters that are connected with the same networking technology. So I think one of the biggest things there is that if you look at what's happening with some of the other proprietary technologies, I mean, in some cases they'll have two different types of networking technologies before they interface to Ethernet. So now you've got to train your technicians, you train your assist admins on two different network technologies. You need to have all the debug technology, all the interconnect for that. So here, the easiest thing is you can use Ethernet, it's going to give you the same performance and actually, in some cases, we've seen better performance than we've seen with Omni-Path, better than in InfiniBand. >> That's awesome. Armando, we didn't get to you, so I want to make sure we get your future hot take. Where do you see the future of Ethernet here in HPC? >> Well, Pete hit on a big thing is bandwidth, right? So when you look at, train a model, okay? So when you go and train a model in AI, you need to have a lot of data in order to train that model, right? So what you do is essentially, you build a model, you choose whatever neural network you want to utilize. But if you don't have a good data set that's trained over that model, you can't essentially train the model. So if you have bandwidth, you want big pipes because you have to move that data set from the storage to the CPU. And essentially, if you're going to do it maybe on CPU only, but if you do it on accelerators, well, guess what? You need a big pipe in order to get all that data through. And here's the deal, the bigger the pipe you have, the more data, the faster you can train that model. So the faster you can train that model, guess what? The faster you get to some new insight, maybe it's a new competitive advantage, maybe it's some new way you design a product, but that's a benefit of speed, you want faster, faster, faster. >> It's all about making it faster and easier-- for the users. >> Armando: It is. >> I love that. Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas, stakes, there's a lot going on with that. >> Making me hungry. >> I know, exactly. I'm sitting out here thinking, man, I did not have big enough breakfast. How did you come up with the name Tomahawk? >> So Tomahawk, I think it just came from a list. So we have a tried end product line. >> Savannah: Ah, yes. >> Which is a missile product line. And Tomahawk is being kind of like the bigger and batter missile, so. >> Savannah: Love this. Yeah, I mean-- >> So do you like your engineers? You get to name it. >> Had to ask. >> It's collaborative. >> Okay. >> We want to make sure everyone's in sync with it. >> So just it's not the Aquaman tried. >> Right. >> It's the steak Tomahawk. I think we're good now. >> Now that we've cleared that-- >> Now we've cleared that up. >> Armando, Pete, it was really nice to have both you. Thank you for teaching us about the future of Ethernet and HCP. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to theCUBE live from Dallas. We're here talking all things HPC and supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us. (soft music)

Published Date : Nov 16 2022

SUMMARY :

David, my cohost, how are you doing? Ready to start off the day. Gentlemen, thank you about Ethernet as the fabric for HPC, So when you look at HPC, Pete, you want to elaborate? So what you see is that You're with Broadcom, you stage prop here on the theCUBE. So this is what is in production, So state of the art right 'Cause if you want, I have a poster on the wall Pete: This can actually Well, so this is from it tends to be 50 gigabits per second. 800 gig in the future. that you brought up a second ago, So Ethernet is at the level of 50%, So if you have a customer that, I mean, are you working with Dell and on the APIs, on the operating system that exist today, and you Yeah, so this is 51.2 of the art for the nicks, chassis or you have.. in the past you would have line cards, for this is they tend to be two, if you want to have DAK in the sense that many as what you think of So when you look at running, Both of you get to see a lot starting off of the switch side, I'm here for you. in any of the networking technology. But we do see that as you have a mix I love how specific it is. And if you look at, from the bottom, you actually have fibers and the protocol stack's also evolving. carrot down the rabbit hole. So I think of individual How do you do that many coming out of the sides there. What are some of the other things the easiest thing for you to do is Where do you see the future So the faster you can train for the users. I love that. How did you come up So we have a tried end product line. kind of like the bigger Yeah, I mean-- So do you like your engineers? everyone's in sync with it. It's the steak Tomahawk. And thank you all for tuning

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

2019DATE

0.99+

David NicholsonPERSON

0.99+

2020DATE

0.99+

PetePERSON

0.99+

TexasLOCATION

0.99+

AugustDATE

0.99+

PeterPERSON

0.99+

SavannahPERSON

0.99+

30 speedsQUANTITY

0.99+

200 gigQUANTITY

0.99+

Savannah PetersonPERSON

0.99+

50 gigQUANTITY

0.99+

ArmandoPERSON

0.99+

128QUANTITY

0.99+

DellORGANIZATION

0.99+

9,000QUANTITY

0.99+

400 gigQUANTITY

0.99+

BroadcomORGANIZATION

0.99+

50%QUANTITY

0.99+

twoQUANTITY

0.99+

128, 400 gigQUANTITY

0.99+

800 gigQUANTITY

0.99+

DallasLOCATION

0.99+

512 channelsQUANTITY

0.99+

9,352QUANTITY

0.99+

24 monthsQUANTITY

0.99+

one chipQUANTITY

0.99+

Tomahawk 4COMMERCIAL_ITEM

0.99+

bothQUANTITY

0.99+

North AmericaLOCATION

0.99+

next yearDATE

0.99+

oneQUANTITY

0.98+

512 fiberQUANTITY

0.98+

seven timesQUANTITY

0.98+

Tomahawk 5COMMERCIAL_ITEM

0.98+

four lanesQUANTITY

0.98+

9,000 plusQUANTITY

0.98+

Dell TechnologiesORGANIZATION

0.98+

todayDATE

0.97+

AquamanPERSON

0.97+

BothQUANTITY

0.97+

InfiniBandORGANIZATION

0.97+

QSFP 112OTHER

0.96+

hundred gigQUANTITY

0.96+

Peter Del VecchioPERSON

0.96+

25.6 terabytes per secondQUANTITY

0.96+

two fascinating guestsQUANTITY

0.96+

single sourceQUANTITY

0.96+

64 OSFPQUANTITY

0.95+

RockyORGANIZATION

0.95+

two million CPUsQUANTITY

0.95+

25.6 T.QUANTITY

0.95+

Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22


 

>>You can put this in a conference. >>Good morning and welcome back to Dallas. Ladies and gentlemen, we are here with the cube Live from, from Supercomputing 2022. David, my cohost, how you doing? Exciting. Day two. Feeling good. >>Very exciting. Ready to start off the >>Day. Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >>Having us, >>For having us. I'm excited that you're starting off the day because we've been hearing a lot of rumors about ethernet as the fabric for hpc, but we really haven't done a deep dive yet during the show. Y'all seem all in on ethernet. Tell us about that. Armando, why don't you start? >>Yeah. I mean, when you look at ethernet, customers are asking for flexibility and choice. So when you look at HPC and you know, infinite band's always been around, right? But when you look at where Ethernet's coming in, it's really our commercial and their enterprise customers. And not everybody wants to be in the top 500. What they want to do is improve their job time and improve their latency over the network. And when you look at ethernet, you kinda look at the sweet spot between 8, 12, 16, 32 nodes. That's a perfect fit for ethernet and that space and, and those types of jobs. >>I love that. Pete, you wanna elaborate? Yeah, yeah, >>Yeah, sure. I mean, I think, you know, one of the biggest things you find with internet for HPC is that, you know, if you look at where the different technologies have gone over time, you know, you've had old technologies like, you know, atm, Sonic, fitty, you know, and pretty much everything is now kind of converged toward ethernet. I mean, there's still some technologies such as, you know, InfiniBand, omnipath that are out there. Yeah. But basically there's single source at this point. So, you know, what you see is that there is a huge ecosystem behind ethernet. And you see that also, the fact that ethernet is used in the rest of the enterprise is using the cloud data centers that is very easy to integrate HPC based systems into those systems. So as you move HPC out of academia, you know, into, you know, into enterprise, into cloud service providers is much easier to integrate it with the same technology you're already using in those data centers, in those networks. >>So, so what's this, what is, what's the state of the art for ethernet right now? What, you know, what's, what's the leading edge, what's shipping now and what and what's in the near future? You, you were with Broadcom, you guys design this stuff. >>Yeah, yeah. Right. Yeah. So leading edge right now, I got a couple, you know, Wes stage >>Trough here on the cube. Yeah. >>So this is Tomahawk four. So this is what is in production is shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 tets per second. Okay. Which matches any other technology out there. Like if you look at say, infin band, highest they have right now that's just starting to get into production is 25 point sixt. So state of the art right now is what we introduced. We announced this in August. This is Tomahawk five. So this is 51.2 terabytes per second. So double the bandwidth have, you know, any other technology that's out there. And the important thing about networking technology is when you double the bandwidth, you don't just double the efficiency, it's actually winds up being a factor of six efficiency. Wow. Cause if you want, I can go into that, but why >>Not? Well, I, what I wanna know, please tell me that in your labs you have a poster on the wall that says T five with, with some like Terminator kind of character. Cause that would be cool if it's not true. Don't just don't say anything. I just want, I can actually shift visual >>It into a terminator. So. >>Well, but so what, what are the, what are the, so this is, this is from a switching perspective. Yeah. When we talk about the end nodes, when we talk about creating a fabric, what, what's, what's the latest in terms of, well, the kns that are, that are going in there, what's, what speed are we talking about today? >>So as far as 30 speeds, it tends to be 50 gigabits per second. Okay. Moving to a hundred gig pan four. Okay. And we do see a lot of Knicks in the 200 gig ethernet port speed. So that would be, you know, four lanes, 50 gig. But we do see that advancing to 400 gig fairly soon. 800 gig in the future. But say state of the art right now, we're seeing for the end nodes tends to be 200 gig E based on 50 gig pan four. Wow. >>Yeah. That's crazy. Yeah, >>That is, that is great. My mind is act actively blown. I wanna circle back to something that you brought up a second ago, which I think is really astute. When you talked about HPC moving from academia into enterprise, you're both seeing this happen. Where do you think we are on the adoption curve and sort of in that cycle? Armand, do you wanna go? >>Yeah, yeah. Well, if you look at the market research, they're actually telling it's 50 50 now. So ethernet is at the level of 50%. InfiniBand is at 50%. Right. Interesting. Yeah. And so what's interesting to us, customers are coming to us and say, Hey, we want to see, you know, flexibility and choice and hey, let's look at ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom, we have their chips in our lab, we have our switches in our lab. And really what we're trying to do is make it easy to simple and configure the network for essentially mpi. And so the goal here with our validated designs is really to simplify this. So if you have a customer that, Hey, I've been in fbe, but now I want to go ethernet, you know, there's gonna be some learning curves there. And so what we wanna do is really simplify that so that we can make it easy to install, get the cluster up and running, and they can actually get some value out of the cluster. >>Yeah. Peter, what, talk about that partnership. What, what, what does that look like? Is it, is it, I mean, are you, you working with Dell before the, you know, before the T six comes out? Or you just say, you know, what would be cool, what would be cool is we'll put this in the T six? >>No, we've had a very long partnership both on the hardware and the software side. You know, Dell has been an early adopter of our silicon. We've worked very closely on SI and Sonic on the operating system, you know, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we have actually three different product lines within the switching group within Broadcom, we've then gotten, you know, very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way when it comes to market, you know, Dell can take it and, you know, deliver the exact features that they have in the current generation to their customers to have that continuity. And also they give us feedback on the next gen features they'd like to see again in both the hardware and the software. >>So, so I, I'm, I'm just, I'm fascinated by, I I, I always like to know kind like what Yeah, exactly. Exactly right. Look, you, you start talking about the largest super supercomputers, most powerful supercomputers that exist today, and you start looking at the specs and there might be 2 million CPUs, 2 million CPU cores, yeah. Ex alop of, of, of, of performance. What are the, what are the outward limits of T five in switches, building out a fabric, what does that look like? What are the, what are the increments in terms of how many, and I know it, I know it's a depends answer, but, but, but how many nodes can you support in a, in a, in a scale out cluster before you need another switch? What does that increment of scale look like today? >>Yeah, so I think, so this is 51.2 terras per second. What we see the most common implementation based on this would be with 400 gig ethernet ports. Okay. So that would be 128, you know, 400 giggi ports connected to, to one chip. Okay. Now, if you went to 200 gig, which is kind of the state of the art for the Nicks, you can have double that. Okay. So, you know, in a single hop you can have 256 end nodes connected through one switch. >>So, okay, so this T five, that thing right there inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what is, what does that, what's the form factor look like for that, for where that T five sits? Is there just one in a chassis or you have, what does that look >>Like? It tends to be pizza boxes these days. Okay. What you've seen overall is that the industry's moved away from chassis for these high end systems more towards pizza, pizza boxes. And you can have composable systems where, you know, in the past you would have line cards, either the fabric cards that the line cards are plugged into or interface to these days, what tends to happen is you'd have a pizza box, and if you wanted to build up like a virtual chassis, what you would do is use one of those pizza boxes as the fabric card, one of them as the, the line card. >>Okay. >>So what we see, the most common form factor for this is they tend to be two, I'd say for North America, most common would be a two R U with 64 OSF P ports. And often each of those OSF p, which is an 800 gig e or 800 gig port, we've broken out into two 400 gig quarts. Okay. So yeah, in two r u you've got, and this is all air cooled, you know, in two re you've got 51.2 T. We do see some cases where customers would like to have different optics, and they'll actually deploy a four U just so that way they have the face place density, so they can plug in 128, say qsf P one 12. But yeah, it really depends on which optics, if you wanna have DAK connectivity combined with, with optics. But those are the two most common form factors. >>And, and Armando ethernet isn't, ethernet isn't necessarily ethernet in the sense that many protocols can be run over it. Right. I think I have a projector at home that's actually using ethernet physical connections. But what, so what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center ethernet, or, or is this, you know, RDMA over converged ethernet? What, what are >>We talking about? Yeah, so our rdma, right? So when you look at, you know, running, you know, essentially HPC workloads, you have the NPI protocol, so message passing interface, right? And so what you need to do is you may need to make sure that that NPI message passing interface runs efficiently on ethernet. And so this is why we want to test and validate all these different things to make sure that that protocol runs really, really fast on ethernet, if you look at NPI is officially, you know, built to, Hey, it was designed to run on InfiniBand, but now what you see with Broadcom and the great work they're doing now, we can make that work on ethernet and get, you know, it's same performance. So that's huge for customers. >>Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a, a looking into the crystal ball type because you essentially get to see the future knowing what people are trying to achieve moving forward. Talk to us about the future of ethernet in hpc in terms of AI and ml. Where, where do you think we're gonna be next year or 10 years from now? >>You wanna go first or you want me to go first? I can start. >>Yeah. Pete feels ready. >>So I mean, what I see, I mean, ethernet, I mean, is what we've seen is that as far as on the starting off of the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. That's >>Impressive. >>Yeah. So nicely >>Done, casual, humble brag there. That was great. That was great. I love that. >>I'm here for you. I mean, I think that's one of the benefits of, of Ethan is like, is the ecosystem, is the trajectory, the roadmap we've had, I mean, you don't see that in any other networking technology >>More who, >>So, you know, I see that, you know, that trajectory is gonna continue as far as the switches, you know, doubling in bandwidth. I think that, you know, they're evolving protocols. You know, especially again, as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably focus still on rdma, you know, for the supercomputing, the a AIML workloads. But we do see that, you know, as you have, you know, a mix of the applications running on these end nodes, maybe they're interfacing to the, the CPUs for some processing, you might use a different mix of protocols. So I'd say it's gonna be doubling a bandwidth over time evolution of the protocols. I mean, I expect that Rocky is probably gonna evolve over time depending on the a AIML and the HPC workloads. I think also there's a big change coming as far as the physical connectivity within the data center. Like one thing we've been focusing on is co-pack optics. So, you know, right now this chip is all, all the balls in the back here, there's electrical connections. How >>Many are there, by the way? 9,000 plus on the back of that >>352. >>I love how specific it is. It's brilliant. >>Yeah. So we get, so right now, you know, all the thirties, all the signals are coming out electrically based, but we've actually shown, we have this, actually, we have a version of Hawk four at 25 point sixt that has co-pack optics. So instead of having electrical output, you actually have optics directly out of the package. And if you look at, we'll have a version of Tomahawk five Nice. Where it's actually even a smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. Wow. Cool. So I see, you know, there's, you know, the bandwidth, there's radis increasing protocols, different physical connectivity. So I think there's, you know, a lot of things throughout, and the protocol stack's also evolving. So, you know, a lot of excitement, a lot of new technology coming to bear. >>Okay. You just threw a carrot down the rabbit hole. I'm only gonna chase this one. Okay. >>All right. >>So I think of, I think of individual discreet physical connections to the back of those balls. Yeah. So if there's 9,000, fill in the blank, that's how many connections there are. How do you do that in many optical connections? What's, what's, what's the mapping there? What does that, what does that look like? >>So what we've announced for TAMA five is it would have fr four optics coming out. So you'd actually have, you know, 512 fiber pairs coming out. So you'd have, you know, basically on all four sides, you'd have these fiber ribbons that come in and connect. There's actually fibers coming out of the, the sides there. We wind up having, actually, I think in this case, we would actually have 512 channels and it would wind up being on 128 actual fiber pairs because >>It's, it's miraculous, essentially. It's, I know. Yeah, yeah, yeah, yeah. Yeah. So, so, you know, a lot of people are gonna be looking at this and thinking in terms of InfiniBand versus versus ethernet. I think you've highlighted some of the benefits of specifically running ethernet moving forward as, as hpc, you know, which is sort of just trails slightly behind supercomputing as we define it, becomes more pervasive AI ml. What, what are some of the other things that maybe people might not immediately think about when they think about the advantages of running ethernet in that environment? Is it, is it connecting, is it about connecting the HPC part of their business into the rest of it? What, or what, what are the advantages? >>Yeah, I mean, that's a big thing. I think, and one of the biggest things that ethernet has again, is that, you know, the data centers, you know, the networks within enterprises within, you know, clouds right now are run on ethernet. So now if you want to add services for your customers, the easiest thing for you to do is, you know, the drop in clusters that are connected with the same networking technology, you know, so I think what, you know, one of the biggest things there is that if you look at what's happening with some of the other proprietary technologies, I mean, in some cases they'll have two different types of networking technologies before they interface to ethernet. So now you've got to train your technicians, you train your, your assist admins on two different network technologies. You need to have all the, the debug technology, all the interconnect for that. So here, the easiest thing is you can use ethernet, it's gonna give you the same performance. And actually in some cases we seen better performance than we've seen with omnipath than, you know, better than in InfiniBand. >>That's awesome. Armando, we didn't get to you, so I wanna make sure we get your future hot take. Where do you see the future of ethernet here in hpc? >>Well, Pete hit on a big thing is bandwidth, right? So when you look at train a model, okay, so when you go and train a model in ai, you need to have a lot of data in order to train that model, right? So what you do is essentially you build a model, you choose whatever neural network you wanna utilize, but if you don't have a good data set that's trained over that model, you can't essentially train the model. So if you have bandwidth, you want big pipes because you have to move that data set from the storage to the cpu. And essentially, if you're gonna do it maybe on CPU only, but if you do it on accelerators, well guess what? You need a big pipe in order to get all that data through. And here's the deal. The bigger the pipe you have, the more data, the faster you can train that model. So the faster you can train that model, guess what? The faster you get to some new insight, maybe it's a new competitive advantage. Maybe it's some new way you design a product, but that's a benefit of speed you want faster, faster, faster. >>It's all about making it faster and easier. It is for, for the users. I love that. Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas Stakes, there's a lot going on with with that making >>Me hungry. >>I know exactly. I'm sitting up here thinking, man, I did not have a big enough breakfast. How do you come up with the name Tomahawk? >>So Tomahawk, I think you just came, came from a list. So we had, we have a tri end product line. Ah, a missile product line. And Tomahawk is being kinda like, you know, the bigger and batter missile, so, oh, okay. >>Love this. Yeah, I, well, I >>Mean, so you let your engineers, you get to name it >>Had to ask. It's >>Collaborative. Oh good. I wanna make sure everyone's in sync with it. >>So just so we, it's not the Aquaman tried. Right, >>Right. >>The steak Tomahawk. I >>Think we're, we're good now. Now that we've cleared that up. Now we've cleared >>That up. >>Armando P, it was really nice to have both you. Thank you for teaching us about the future of ethernet N hpc. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to the Cube Live from Dallas. We're here talking all things HPC and Supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us.

Published Date : Nov 16 2022

SUMMARY :

how you doing? Ready to start off the Gentlemen, thank you for being here with us. why don't you start? So when you look at HPC and you know, infinite band's always been around, right? Pete, you wanna elaborate? I mean, I think, you know, one of the biggest things you find with internet for HPC is that, What, you know, what's, what's the leading edge, Trough here on the cube. So double the bandwidth have, you know, any other technology that's out there. Well, I, what I wanna know, please tell me that in your labs you have a poster on the wall that says T five with, So. When we talk about the end nodes, when we talk about creating a fabric, what, what's, what's the latest in terms of, So that would be, you know, four lanes, 50 gig. Yeah, Where do you think we are on the adoption curve and So if you have a customer that, Hey, I've been in fbe, but now I want to go ethernet, you know, there's gonna be some learning curves Or you just say, you know, what would be cool, what would be cool is we'll put this in the T six? on the operating system, you know, and they provide very valuable feedback for us on our roadmap. most powerful supercomputers that exist today, and you start looking at the specs and there might be So, you know, in a single hop you can have 256 end nodes connected through one switch. Is there just one in a chassis or you have, what does that look you know, in the past you would have line cards, either the fabric cards that the line cards are plugged into or interface if you wanna have DAK connectivity combined with, with optics. Is this exactly the same as what you think of as data So when you look at, you know, running, you know, a looking into the crystal ball type because you essentially get to see the future knowing what people are You wanna go first or you want me to go first? So I mean, what I see, I mean, ethernet, I mean, is what we've seen is that as far as on the starting off of the switch side, I love that. the roadmap we've had, I mean, you don't see that in any other networking technology So, you know, I see that, you know, that trajectory is gonna continue as far as the switches, I love how specific it is. So I see, you know, there's, you know, the bandwidth, I'm only gonna chase this one. How do you do So what we've announced for TAMA five is it would have fr four optics coming out. so, you know, a lot of people are gonna be looking at this and thinking in terms of InfiniBand versus know, so I think what, you know, one of the biggest things there is that if you look at Where do you see the future of ethernet here in So what you do is essentially you build a model, you choose whatever neural network you wanna utilize, It is for, for the users. How do you come up with the name Tomahawk? And Tomahawk is being kinda like, you know, the bigger and batter missile, Yeah, I, well, I Had to ask. I wanna make sure everyone's in sync with it. So just so we, it's not the Aquaman tried. I Now that we've cleared that up. And thank you all for tuning in to the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

David NicholsonPERSON

0.99+

Savannah PetersonPERSON

0.99+

AugustDATE

0.99+

2019DATE

0.99+

PetePERSON

0.99+

128QUANTITY

0.99+

PeterPERSON

0.99+

2 millionQUANTITY

0.99+

2020DATE

0.99+

400 gigQUANTITY

0.99+

200 gigQUANTITY

0.99+

BroadcomORGANIZATION

0.99+

DellORGANIZATION

0.99+

400 gigQUANTITY

0.99+

200 gigQUANTITY

0.99+

DallasLOCATION

0.99+

30 speedsQUANTITY

0.99+

50 gigQUANTITY

0.99+

one chipQUANTITY

0.99+

400 giggiQUANTITY

0.99+

512 channelsQUANTITY

0.99+

9,000QUANTITY

0.99+

seven timesQUANTITY

0.99+

800 gigQUANTITY

0.99+

ArmandoPERSON

0.99+

24 monthsQUANTITY

0.99+

oneQUANTITY

0.99+

50%QUANTITY

0.99+

9,000 plusQUANTITY

0.99+

bothQUANTITY

0.99+

Peter Del VecchioPERSON

0.99+

single sourceQUANTITY

0.99+

North AmericaLOCATION

0.98+

doubleQUANTITY

0.98+

todayDATE

0.98+

BothQUANTITY

0.98+

Hawk fourCOMMERCIAL_ITEM

0.98+

threeQUANTITY

0.98+

Day twoQUANTITY

0.97+

next yearDATE

0.97+

hpcORGANIZATION

0.97+

Tomahawk fiveCOMMERCIAL_ITEM

0.97+

Dell TechnologiesORGANIZATION

0.97+

T sixCOMMERCIAL_ITEM

0.96+

twoQUANTITY

0.96+

one switchQUANTITY

0.96+

TexasLOCATION

0.96+

six efficiencyQUANTITY

0.96+

25 pointQUANTITY

0.95+

ArmandoORGANIZATION

0.95+

50QUANTITY

0.93+

25.6 tets per secondQUANTITY

0.92+

51.2 terabytes per secondQUANTITY

0.92+

18QUANTITY

0.91+

512 fiber pairsQUANTITY

0.91+

two fascinating guestsQUANTITY

0.91+

hundred gigQUANTITY

0.91+

four lanesQUANTITY

0.9+

HPCORGANIZATION

0.9+

51.2 T.QUANTITY

0.9+

InfiniBandORGANIZATION

0.9+

256 endQUANTITY

0.89+

firstQUANTITY

0.89+

Armando AcostaPERSON

0.89+

two different network technologiesQUANTITY

0.88+

Breaking Analysis: Broadcom, Taming the VMware Beast


 

>> From theCUBE studios in Palo Alto in Boston, bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> In the words of my colleague CTO David Nicholson, Broadcom buys old cars, not to restore them to their original luster and beauty. Nope. They buy classic cars to extract the platinum that's inside the catalytic converter and monetize that. Broadcom's planned 61 billion acquisition of VMware will mark yet another new era and chapter for the virtualization pioneer, a mere seven months after finally getting spun out as an independent company by Dell. For VMware, this means a dramatically different operating model with financial performance and shareholder value creation as the dominant and perhaps the sole agenda item. For customers, it will mean a more focused portfolio, less aspirational vision pitches, and most certainly higher prices. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we'll share data, opinions and customer insights about this blockbuster deal and forecast the future of VMware, Broadcom and the broader ecosystem. Let's first look at the key deal points, it's been well covered in the press. But just for the record, $61 billion in a 50/50 cash and stock deal, resulting in a blended price of $138 per share, which is a 44% premium to the unaffected price, i.e. prior to the news breaking. Broadcom will assume 8 billion of VMware debt and promises that the acquisition will be immediately accretive and will generate 8.5 billion in EBITDA by year three. That's more than 4 billion in EBITDA relative to VMware's current performance today. In a classic Broadcom M&A approach, the company promises to dilever debt and maintain investment grade ratings. They will rebrand their software business as VMware, which will now comprise about 50% of revenues. There's a 40 day go shop and importantly, Broadcom promises to continue to return 60% of its free cash flow to shareholders in the form of dividends and buybacks. Okay, with that out of the way, we're going to get to the money slide literally in a moment that Broadcom shared on its investor call. Broadcom has more than 20 business units. It's CEO Hock Tan makes it really easy for his business unit managers to understand. Rule number one, you agreed to an operating plan with targets for revenue, growth, EBITDA, et cetera, hit your numbers consistently and we're good. You'll be very well compensated and life will be wonderful for you and your family. Miss the number, and we're going to have a frank and uncomfortable bottom line discussion. You'll four, perhaps five quarters to turn your business around, if you don't, we'll kill it or sell it if we can. Rule number two, refer to rule number one. Hello, VMware, here's the money slide. I'll interpret the bullet points on the left for clarity. Your fiscal year 2022 EBITDA was 4.7 billion. By year three, it will be 8.5 billion. And we Broadcom have four knobs to turn with you, VMware to help you get there. First knob, if it ain't recurring revenue with rubber stamp renewals, we're going to convert that revenue or kill it. Knob number two, we're going to focus R&D in the most profitable areas of the business. AKA expect the R&D budget to be cut. Number three, we're going to spend less on sales and marketing by focusing on existing customers. We're not going to lose money today and try to make it up many years down the road. And number four, we run Broadcom with 1% GNA. You will too. Any questions? Good. Now, just to give you a little sense of how Broadcom runs its business and how well run a company it is, let's do a little simple comparison with this financial snapshot. All we're doing here is taking the most recent quarterly earnings reports from Broadcom and VMware respectively. We take the quarterly revenue and multiply by four X to get the revenue run rate and then we calculate the ratios off of the most recent quarters revenue. It's worth spending some time on this to get a sense of how profitable the Broadcom business actually is and what the spreadsheet gurus at Broadcom are seeing with respect to the possibilities for VMware. So combined, we're talking about a 40 plus billion dollar company. Broadcom is growing at more than 20% per year. Whereas VMware's latest quarter showed a very disappointing 3% growth. Broadcom is mostly a hardware company, but its gross margin is in the high seventies. As a software company of course VMware has higher gross margins, but FYI, Broadcom's software business, the remains of Symantec and what they purchased as CA has 90% gross margin. But the I popper is operating margin. This is all non gap. So it excludes things like stock based compensation, but Broadcom had 61% operating margin last quarter. This is insanely off the charts compared to VMware's 25%. Oracle's non gap operating margin is 47% and Oracle is an incredibly profitable company. Now the red box is where the cuts are going to take place. Broadcom doesn't spend much on marketing. It doesn't have to. It's SG&A is 3% of revenue versus 18% for VMware and R&D spend is almost certainly going to get cut. The other eye popper is free cash flow as a percentage of revenue at 51% for Broadcom and 29% for VMware. 51%. That's incredible. And that my dear friends is why Broadcom a company with just under 30 billion in revenue has a market cap of 230 billion. Let's dig into the VMware portfolio a bit more and identify the possible areas that will be placed under the microscope by Hock Tan and his managers. The data from ETR's latest survey shows the net score or spending momentum across VMware's portfolio in this chart, net score essentially measures the net percent of customers that are spending more on a specific product or vendor. The yellow bar is the most recent survey and compares the April 22 survey data to April 21 and January of 22. Everything is down in the yellow from January, not surprising given the economic outlook and the change in spending patterns that we've reported. VMware Cloud on AWS remains the product in the ETR survey with the most momentum. It's the only offering in the portfolio with spending momentum above the 40% line, a level that we consider highly elevated. Unified Endpoint Management looks more than respectable, but that business is a rock fight with Microsoft. VMware Cloud is things like VMware Cloud foundation, VCF and VMware's cross cloud offerings. NSX came from the Nicira acquisition. Tanzu is not yet pervasive and one wonders if VMware is making any money there. Server is ESX and vSphere and is the bread and butter. That is where Broadcom is going to focus. It's going to look at VSAN and NSX, which is software probably profitable. And of course the other products and see if the investments are paying off, if they are Broadcom will keep, if they are not, you can bet your socks, they will be sold off or killed. Carbon Black is at the far right. VMware paid $2.1 billion for Carbon Black. And it's the lowest performer on this list in terms of net score or spending momentum. And that doesn't mean it's not profitable. It just doesn't have the momentum you'd like to see, so you can bet that is going to get scrutiny. Remember VMware's growth has been under pressure for the last several years. So it's been buying companies, dozens of them. It bought AirWatch, bought Heptio, Carbon Black, Nicira, SaltStack, Datrium, Versedo, Bitnami, and on and on and on. Many of these were to pick up engineering teams. Some of them were to drive new revenue. Now this is definitely going to be scrutinized by Broadcom. So that helps explain why Michael Dell would sell VMware. And where does VMware go from here? It's got great core product. It's an iconic name. It's got an awesome ecosystem, fantastic distribution channel, but its growth is slowing. It's got limited developer chops in a world that developers and cloud native is all the rage. It's got a far flung R&D agenda going at war with a lot of different places. And it's increasingly fighting this multi front war with cloud companies, companies like Cisco, IBM Red Hat, et cetera. VMware's kind of becoming a heavy lift. It's a perfect acquisition target for Broadcom and why the street loves this deal. And we titled this Breaking Analysis taming the VMware beast because VMware is a beast. It's ubiquitous. It's an epic software platform. EMC couldn't control it. Dell used it as a piggy bank, but really didn't change its operating model. Broadcom 100% will. Now one of the things that we get excited about is the future of systems architectures. We published a breaking analysis about a year ago, talking about AWS's secret weapon with Nitro and it's Annapurna custom Silicon efforts. Remember it acquired Annapurna for a measly $350 million. And we talked about how there's a new architecture and a new price performance curve emerging in the enterprise, driven by AWS and being followed by Microsoft, Google, Alibaba, a trend toward custom Silicon with the arm based Nitro and which is AWS's hypervisor and Nick strategy, enabling processor diversity with things like Graviton and Trainium and other diverse processors, really diversifying away from x86 and how this leads to much faster product cycles, faster tape out, lower costs. And our premise was that everyone in the data center is going to competes, is going to need a Nitro to be competitive long term. And customers are going to gravitate toward the most economically favorable platform. And as we describe the landscape with this chart, we've updated this for this Breaking Analysis and we'll come back to nitro in a moment. This is a two dimensional graphic with net score or spending momentum on the vertical axis and overlap formally known as market share or presence within the survey, pervasiveness that's on the horizontal axis. And we plot various companies and products and we've inserted VMware's net score breakdown. The granularity in those colored bars on the bottom right. Net score is essentially the green minus the red and a couple points on that. VMware in the latest survey has 6% new adoption. That's that lime green. It's interesting. The question Broadcom is going to ask is, how much does it cost you to acquire that 6% new. 32% of VMware customers in the survey are increasing spending, meaning they're increasing spending by 6% or more. That's the forest green. And the question Broadcom will dig into is what percent of that increased spend (chuckles) you're capturing is profitable spend? Whatever isn't profitable is going to be cut. Now that 52% gray area flat spending that is ripe for the Broadcom picking, that is the fat middle, and those customers are locked and loaded for future rent extraction via perpetual renewals and price increases. Only 8% of customers are spending less, that's the pinkish color and only 3% are defecting, that's the bright red. So very, very sticky profile. Perfect for Broadcom. Now the rest of the chart lays out some of the other competitor names and we've plotted many of the VMware products so you can see where they fit. They're all pretty respectable on the vertical axis, that's spending momentum. But what Broadcom wants is that core ESX vSphere base where we've superimposed the Broadcom logo. Broadcom doesn't care so much about spending momentum. It cares about profitability potential and then momentum. AWS and Azure, they're setting the pace in this business, in the upper right corner. Cisco very huge presence in the data center, as does Intel, they're not in the ETR survey, but we've superimposed them. Now, Intel of course, is in a dog fight within Nvidia, the Arm ecosystem, AMD, don't forget China. You see a Google cloud platform is in there. Oracle is also on the chart as well, somewhat lower on the vertical axis, but it doesn't have that spending momentum, but it has a big presence. And it owns a cloud as we've talked about many times and it's highly differentiated. It's got a strategy that allows it to differentiate from the pack. It's very financially driven. It knows how to extract lifetime value. Safra Catz operates in many ways, similar to what we're seeing from Hock Tan and company, different from a portfolio standpoint. Oracle's got the full stack, et cetera. So it's a different strategy. But very, very financially savvy. You could see IBM and IBM Red Hat in the mix and then Dell and HP. I want to come back to that momentarily to talk about where value is flowing. And then we plotted Nutanix, which with Acropolis could suck up some V tax avoidance business. Now notice Symantec and CA, relatively speaking in the ETR survey, they have horrible spending momentum. As we said, Broadcom doesn't care. Hock Tan is not going for growth at the expense of profitability. So we fully expect VMware to come down on the vertical axis over time and go up on the profit scale. Of course, ETR doesn't measure the profitability here. Now back to Nitro, VMware has this thing called Project Monterey. It's essentially their version of Nitro and will serve as their future architecture diversifying off x86 and accommodating alternative processors. And a much more efficient performance, price in energy consumption curve. Now, one of the things that we've advocated for, we said this about Dell and others, including VMware to take a page out of AWS and start developing custom Silicon to better integrate hardware and software and accelerate multi-cloud or what we call supercloud. That layer above the cloud, not just running on individual clouds. So this is all about efficiency and simplicity to own this space. And we've challenged organizations to do that because otherwise we feel like the cloud guys are just going to have consistently better costs, not necessarily price, but better cost structures, but it begs the question. What happens to Project Monterey? Hock Tan and Broadcom, they don't invest in something that is unproven and doesn't throw off free cash flow. If it's not going to pay off for years to come, they're probably not going to invest in it. And yet Project Monterey could help secure VMware's future in not only the data center, but at the edge and compete more effectively with cloud economics. So we think either Project Monterey is toast or the VMware team will knock on the door of one of Broadcom's 20 plus business units and say, guys, what if we work together with you to develop a version of Monterey that we can use and sell to everyone, it'd be the arms dealer to everyone and be competitive with the cloud and other players out there and create the de facto standard for data center performance and supercloud. I mean, it's not outrageously expensive to develop custom Silicon. Tesla is doing it for example. And Broadcom obviously is capable of doing it. It's got good relationships with semiconductor fabs. But I think this is going to be a tough sell to Broadcom, unless VMware can hide this in plain site and make it profitable fast, like AWS most likely has with Nitro and Graviton. Then Project Monterey and our pipe dream of alternatives to Nitro in the data center could happen but if it can't, it's going to be toast. Or maybe Intel or Nvidia will take it over or maybe the Monterey team will spin out a VMware and do a Pensando like deal and demonstrate the viability of this concept and then Broadcom will buy it back in 10 years. Here's a double click on that previous data that we put in tabular form. It's how the data on that previous slide was plotted. I just want to give you the background data here. So net score spending momentum is the sorted on the left. So it's sorted by net score in the left hand chart, that was the y-axis in the previous data set and then shared and or presence in the data set is the right hand chart. In other words, it's sorted on the right hand chart, right hand table. That right most column is shared and you can see it's sorted top to bottom, and that was the x-axis on the previous chart. The point is not many on the left hand side are above the 40% line. VMware Cloud on AWS is, it's expensive, so it's probably profitable and it's probably a keeper. We'll see about the rest of VMware's portfolio. Like what happens to Tanzu for example. On the right, we drew a red line, just arbitrarily at those companies and products with more than a hundred mentions in the survey, everything but Tanzu from VMware makes that cut. Again, this is no indication of profitability here, and that's what's going to matter to Broadcom. Now let's take a moment to address the question of Broadcom as a software company. What the heck do they know about software, right. Well, they're not dumb over there and they know how to run a business, but there is a strategic rationale to this move beyond just doing portfolios and extracting rents and cutting R&D, et cetera, et cetera. Why, for example, isn't Broadcom going after coming back to Dell or HPE, it could pick up for a lot less than VMware, and they got way more revenue than VMware. Well, it's obvious, software's more profitable of course, and Broadcom wants to move up the stack, but there's a trend going on, which Broadcom is very much in touch with. First, it sells to Dell and HPE and Cisco and all the OEM. so it's not going to disrupt that. But this chart shows that the value is flowing away from traditional servers and storage and networking to two places, merchant Silicon, which itself is morphing. Broadcom... We focus on the left hand side of this chart. Broadcom correctly believes that the world is shifting from a CPU centric center of gravity to a connectivity centric world. We've talked about this on theCUBE a lot. You should listen to Broadcom COO Charlie Kawwas speak about this. It's all that supporting infrastructure around the CPU where value is flowing, including of course, alternative GPUs and XPUs, and NPUs et cetera, that are sucking the value out of the traditional x86 architecture, offloading some of the security and networking and storage functions that traditionally have been done in x86 which are part of the waste right now in the data center. This is that shifting dynamic of Moore's law. Moore's law, not keeping pace. It's slowing down. It's slower relative to some of the combinatorial factors. When you add up in all the CPU and GPU and NPU and accelerators, et cetera. So we've talked about this a lot in Breaking Analysis episodes. So the value is shifting left within that middle circle. And it's shifting left within that left circle toward components, other than CPU, many of which Broadcom supplies. And then you go back to the middle, value is shifting from that middle section, that traditional data center up into hyperscale clouds, and then to the right toward infrastructure software to manage all that equipment in the data center and across clouds. And look Broadcom is an arms dealer. They simply sell to everyone, locking up key vectors of the value chain, cutting costs and raising prices. It's a pretty straightforward strategy, but not for the fate of heart. And Broadcom has become pretty good at it. Let's close with the customer feedback. I spoke with ETRs Eric Bradley this morning. He and I both reached out to VMware customers that we know and got their input. And here's a little snapshot of what they said. I'll just read this. Broadcom will be looking to invest in the core and divest of any underperforming assets, right on. It's just what we were saying. This doesn't bode well for future innovation, this is a CTO at a large travel company. Next comment, we're a Carbon Black customer. VMware didn't seem to interfere with Carbon Black, but now that we're concerned about short term disruption to their tech roadmap and long term, are they going to split and be sold off like Symantec was, this is a CISO at a large hospitality organization. Third comment, I got directly from a VMware practitioner, an IT director at a manufacturing firm. This individual said, moving off VMware would be very difficult for us. We have over 500 applications running on VMware, and it's really easy to manage. We're not going to move those into the cloud and we're worried Broadcom will raise prices and just extract rents. Last comment, we'll share as, Broadcom sees the cloud data center and IoT is their next revenue source. The VMware acquisition provides them immediate virtualization capabilities to support a lightweight IoT offering. Big concern for customers is what technology they will invest in and innovate, and which will be stripped off and sold. Interesting. I asked David Floyer to give me a back of napkin estimate for the following question. I said, David, if you're running mission critical applications on VMware, how much would it increase your operating cost moving those applications into the cloud? Or how much would it save? And he said, Dave, VMware's really easy to run. It can run any application pretty much anywhere, and you don't need an army of people to manage it. All your processes are tied to VMware, you're locked and loaded. Move that into the cloud and your operating cost would double by his estimates. Well, there you have it. Broadcom will pinpoint the optimal profit maximization strategy and raise prices to the point where customers say, you know what, we're still better off staying with VMware. And sadly, for many practitioners there aren't a lot of choices. You could move to the cloud and increase your cost for a lot of your applications. You could do it yourself with say Zen or OpenStack. Good luck with that. You could tap Nutanix. That will definitely work for some applications, but are you going to move your entire estate, your application portfolio to Nutanix? It's not likely. So you're going to pay more for VMware and that's the price you're going to pay for two decades of better IT. So our advice is get out ahead of this, do an application portfolio assessment. If you can move apps to the cloud for less, and you haven't yet, do it, start immediately. Definitely give Nutanix a call, but going to have to be selective as to what you actually can move, forget porting to OpenStack, or do it yourself Hypervisor, don't even go there. And start building new cloud native apps where it makes sense and let the VMware stuff go into manage decline. Let certain apps just die through attrition, shift your development resources to innovation in the cloud and build a brick wall around the stable apps with VMware. As Paul Maritz, the former CEO of VMware said, "We are building the software mainframe". Now marketing guys got a hold of that and said, Paul, stop saying that, but it's true. And with Broadcom's help that day we'll soon be here. That's it for today. Thanks to Stephanie Chan who helps research our topics for Breaking Analysis. Alex Myerson does the production and he also manages the Breaking Analysis podcast. Kristen Martin and Cheryl Knight help get the word out on social and thanks to Rob Hof, who was our editor in chief at siliconangle.com. Remember, these episodes are all available as podcast, wherever you listen, just search Breaking Analysis podcast. Check out ETRs website at etr.ai for all the survey action. We publish a full report every week on wikibon.com and siliconangle.com. You can email me directly at david.vellante@siliconangle.com. You can DM me at DVellante or comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (upbeat music)

Published Date : May 28 2022

SUMMARY :

This is Breaking Analysis and promises that the acquisition

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

Stephanie ChanPERSON

0.99+

CiscoORGANIZATION

0.99+

Dave VellantePERSON

0.99+

SymantecORGANIZATION

0.99+

Rob HofPERSON

0.99+

Alex MyersonPERSON

0.99+

April 22DATE

0.99+

HPORGANIZATION

0.99+

David FloyerPERSON

0.99+

AWSORGANIZATION

0.99+

DellORGANIZATION

0.99+

OracleORGANIZATION

0.99+

HPEORGANIZATION

0.99+

Paul MaritzPERSON

0.99+

BroadcomORGANIZATION

0.99+

VMwareORGANIZATION

0.99+

NvidiaORGANIZATION

0.99+

Eric BradleyPERSON

0.99+

April 21DATE

0.99+

NSXORGANIZATION

0.99+

IBMORGANIZATION

0.99+

Cheryl KnightPERSON

0.99+

DavePERSON

0.99+

JanuaryDATE

0.99+

$61 billionQUANTITY

0.99+

8.5 billionQUANTITY

0.99+

$2.1 billionQUANTITY

0.99+

MicrosoftORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

EMCORGANIZATION

0.99+

AcropolisORGANIZATION

0.99+

Kristen MartinPERSON

0.99+

90%QUANTITY

0.99+

6%QUANTITY

0.99+

4.7 billionQUANTITY

0.99+

GoogleORGANIZATION

0.99+

Hock TanORGANIZATION

0.99+

60%QUANTITY

0.99+

44%QUANTITY

0.99+

40 dayQUANTITY

0.99+

61%QUANTITY

0.99+

8 billionQUANTITY

0.99+

Michael DellPERSON

0.99+

52%QUANTITY

0.99+

47%QUANTITY

0.99+

Andy Brown, Broadcom


 

hello and welcome to the cube i'm dave nicholson chief technology officer at thecube and we're here for a very special cube conversation with andy brown from broadcom andy welcome to the cube tell us a little about yourself a little bit my about myself my name is andy brown i'm currently the senior director of software architecture and performance analysis here within the data center solutions group at broadcom i've been doing that for about seven years prior to that i held various positions within the system architecture systems engineering and ic development organization but ultimately as well i spent some time in our support organization and managing our support team but ultimately have landed in the architecture organization as well as performance analysis great so a lot of what you do is around improving storage performance tell us more about that so let me give you a brief history of uh storage from from my perspective um you know i as i mentioned i go back about 30 years in my career and that would have started back in the ncr microelectronics days and originally with parallel scuzzy so that would be if anyone would remember the 5380 controller which was one of the original parallel scuzzy controllers that existed and built by ncr microelectronics at the time i've i've seen the advent of parallel scuzzy a stint of fiber channel ultimately leading into the serialization of those of the scuzzy standard into sas as well as sata and then ultimately leading to nvme protocols and the advent of flash moving from hard drives into a flash based media and as well on on that's on the storage side on the host side moving from parallel interfaces isa if everybody could remember that moving to pci pci express that's where we land today so andy we're square in the middle of the era of both nvme and sas what kinds of challenges does that overlap represent well i think you know obviously we've seen sas around for a while it was the conversion from parallel into a serial attached scuzzy and that really sas brings with it the ability to uh connect on really a high number of devices um and uh was was kind of the original scaling of devices and and really uh also enabled uh was was one of the things that enabled flash based media given the the speed and performance that came to the table of course nvme came in as well with the promise of of even higher speeds and as we saw flash media really really take a strong role in storage uh nvme came around and and really was focused on trying to address that whereas sas originated with hard drive technology nvme was really born out of how do we how do we most efficiently deal with flash based media you know sas with its but sas still carries a benefit on scalability nvme maybe has i don't want to say challenges there but it's definitely was not designed as much to be broadly scalable across many many say hundreds or thousands of devices but definitely addressed some of the performance issues that were coming up as flash media was becoming so uh uh was was increasing the overall storage performance that we could experience if you will let's talk about host interfaces like pcie what's the significance there um really uh the all the storage in the world all of the performance in the world and on the storage side is not of much use to you unless you can really feed it into the into the beast if you will into the cpu and into this the rest of the server subsystem and that's really where pci comes into play pci uh originally was in parallel form and then moved to serial with pci express as we know it today and and really has created a pathway to to to enable not not only storage performance but any other adapter or any other networking or other other types of technologies to just open up that pathway and feed the processor if and as we've moved through from pci to pci express pci 2.0 3.0 4.0 and just opening up those pipes has really enabled just a tremendous amount of flow of data into into the compute engine allowing it to be analyzed sorted used to analyze data big data uh ai type applications just those pipes are critical in those types of applications we know we've seen dramatic increases in performance going from one generation of pcie to the next but how does that translate into the worlds of sas sata and nvme um so from a performance perspective when we look at these different types of media whether it be sata sas or nvme um of course there are performance difference inherent in that media sata being probably the lowest performing with nvme uh topping out at higher performing although sas can perform quite well as a flash based you know as a protocol connected to flash based media and of course nvme from us an individual device scaling from a by one to a by four interface really that is where nvme kind of has enabled a bigger pipe directly to the storage media uh being able to scale up to buy four whereas sas is kind of limited to buy one maybe buy two in some cases although most servers only connect the sas device by one so from a difference perspective then you're really wanting to create a solution or or enable the infrastructure to be able to consume that performance that nvme is going to give you and i think that you know that is something where our solutions have really in in the recent generations shine at their ability to really now uh keep up with uh storage performance in nvme uh as well as provide that connectivity back down into the sas and sata world as well let's talk about your perspective on raid today so uh there's been a lot of uh views and opinions on raid over the years it's been a and those have been changing over time raid has been around for a very very long time uh probably about as long as again going back over my 30-year career uh it's been around for almost the entire time obviously raid originally was viewed as as something that was uh very very necessary uh devices fail they don't last forever but the data that's on them is very very important and people care about that so raid was brought about you know knowing that individual devices that are storing that data are going to fail and really took hold as a primary mechanism of protection but as time went on uh and and as performance moved up uh both in the server and both in in the media itself if we start talking about flash uh raid really took on people people started to look at traditional server storage raid uh but with maybe a more of a negative connotation i think that because uh to be quite honest it fell behind a little bit if you look at things like parity raid raid five and rate six very very effective and efficient means of protecting your data very storage efficient but ultimately had some penalties a primarily around wright performance random rights in raid 5 volumes was not keeping up with what really needed to be there and um i think that really shifted uh opinions of raid that hey it's just it's just not it's not going to keep up and we need to move on to other avenues and and we've seen that we've seen disaggregated storage and other solutions pop up to protect your data obviously in cloud environments and things like that it's shown up and uh and they have been successful so one of the drawbacks with raid is always the performance tax associated with generating parity for parity rate what has broadcom done to address those potential bottlenecks we've really solved the raid performance issue the right performance issue we're we're in our latest generation of controllers we're exceeding a million rate five right iops which is enough to satisfy many many many applications as a matter of fact even in virtual environments aggregated solutions we have multiple applications and then as well in the rebuild arena we really have through our architecture through our hardware automation have been able to move the bar on that to where the rebuild not only the rebuild times have been brought down dramatically in sas based or in i'm sorry in flash based solutions but the performance that you can observe while those rebuilds are going on is almost immeasurable so in most applications you would almost observe no performance deficiencies during a rebuild operation which is really night and day compared to where things were just a few short years ago so the fact that you've been able to dramatically decrease the time necessary for a raid rebuild is obviously extremely important but give us your overall performance philosophy from broadcom's point of view you know over the years we have recognized that performance is is obviously critically important for our products and the ability to analyze performance from many many angles is critically important there are literally infinite ways you can look at performance in a storage subsystem what we have done in our labs and in our solutions through not only hardware scaling in our in our in our labs but also through automation scripts and things like that allowed us to collect a substantial amount of data to look at the performance of our solutions from every angle you know iops bandwidth application level performance small topologies large topologies just just many many aspects it's still honestly only scratches the surface of all the possible uh performance points that you could gather but it it has we have moved the bar dramatically in that regard and and it's something that our customers really demanded of us um you know storage technology has gotten more complex and you have to look at it from a lot different angles especially on the performance front to make sure that there are no holes there that somebody's going to run into so based on specific customer needs and requests you look at performance from a variety of different angles um what are some of the trends that you're seeing specifically in storage performance today and moving into the future yeah emerging trends within the storage industry i think that to look at the emerging trends you really need to go back and look at where we started we started uh in compute where people were you would have basically your uh your server that would be under the desk in a small business operation and individual uh businesses would have their own uh set of set of servers and and the storage would really be localized to those obviously the industry has recognized that um that to some extent disaggregation of that we we see that obviously in what's happening in cloud uh in hyper-converged storage and things like that those afford a tremendous amount of flexibility uh and and are obviously uh great players in the storage world today but what with that flexibility is come some sacrifice in performance and actually quite substantial sacrifice and what we're observing is almost uh it comes back full circle the uh the need for inbox high performing server storage that is well protected uh and and with people with confidence that people have confidence that their data is protected and that they can uh extract the performance that they need for the demanding database applications that still exist today and they still operate in in the offices around the country and around the world that really need to protect their data on a local basis in the server and i think that from a trend perspective that's what we're seeing also from the standpoint of nvme store nvme itself is really started out with hey we'll just software rate that we'll just we'll just wrap software around that we can protect the data we had so many customers come back to us saying you know what we really need hardware raid on nvme and when they came to us we were ready we had a solution ready to go and we're able to provide that and now we're seeing going on demand we are we are complementary to other storage solutions out there server storage is not going to necessarily rule the world but it is surely has a place in the broader storage spectrum and we think we have the right solution for that speaking of servers and server-based storage why would for example a dell customer care about the broadcom components in that dell server so uh uh let's say you're configuring a dell server and you're going why does why does hardware raid matter what what what's important about that well i think when you look at today's hardware raid uh first of all you're going to see dramatically better performance you're going to see dramatically better performance in it's going to enable you to put raid 5 volumes a very effective and efficient mechanism for protecting your data a storage efficient mechanism you're going to use raid 5 volumes where you weren't able to do that before because when you're in the millions of iops range you really uh can satisfy a lot of application needs out there and and then you're going to also going to have rebuilt times that are lightning fast your performance is not going to degrade when you're when you're running those application especially database applications but not not only database but streaming applications uh bandwidth uh to to protected raid volumes is is almost almost imperceptibly different from just raw bandwidth to the media so the rate rate configurations in today's dell servers really afford you the opportunity to make use of that storage where you you may not have uh you may have already written it off as well ray just doesn't is not going to get me there quite frankly uh into this in in the storage servers that dell is providing uh with with raid technology uh there are huge windows open and what you can do today with applications well all of this is obviously good news for dell and dell customers thanks again andy for joining us for this cube conversation i'm dave nicholson for the cube [Music]

Published Date : May 5 2022

SUMMARY :

move the bar on that to where the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Andy BrownPERSON

0.99+

dellORGANIZATION

0.99+

dave nicholsonPERSON

0.99+

andy brownPERSON

0.99+

twoQUANTITY

0.99+

hundredsQUANTITY

0.99+

todayDATE

0.99+

dave nicholsonPERSON

0.97+

millionsQUANTITY

0.97+

andy brownPERSON

0.96+

bothQUANTITY

0.96+

broadcomORGANIZATION

0.95+

about 30 yearsQUANTITY

0.93+

oneQUANTITY

0.92+

one generationQUANTITY

0.91+

30-yearQUANTITY

0.87+

BroadcomORGANIZATION

0.86+

about seven yearsQUANTITY

0.86+

fourQUANTITY

0.85+

thousands of devicesQUANTITY

0.84+

nvmeORGANIZATION

0.84+

few short years agoDATE

0.83+

5380COMMERCIAL_ITEM

0.81+

rate sixOTHER

0.81+

thingsQUANTITY

0.76+

raid fiveOTHER

0.74+

5 volumesQUANTITY

0.73+

one ofQUANTITY

0.72+

millionQUANTITY

0.72+

chiefPERSON

0.7+

windowsTITLE

0.64+

pci expressCOMMERCIAL_ITEM

0.64+

nvmeTITLE

0.64+

thecubeORGANIZATION

0.61+

rate fiveOTHER

0.6+

pciORGANIZATION

0.6+

angleQUANTITY

0.59+

a lotQUANTITY

0.56+

lotQUANTITY

0.55+

customersQUANTITY

0.54+

raidQUANTITY

0.54+

no holesQUANTITY

0.54+

anglesQUANTITY

0.53+

ncrORGANIZATION

0.52+

pciCOMMERCIAL_ITEM

0.52+

expressTITLE

0.41+

Jas Tremblay, Broadcom


 

[Music] for decades the technology industry had marched the cadence of moore's law it was a familiar pattern system oems would design in the next generation of intel microprocessors every couple of years or so maybe bump up the memory ranges periodically and the supporting hardware would kind of go along for the ride upgrading its performance and bandwidth system designers they might beef up the cash maybe throwing some more spinning disk spindles at the equation to create a balanced environment and this was pretty predictable and consistent in the pattern and was reasonably straightforward compared to today's challenges this is all changed the confluence of cloud distributed global networks the diversity of applications ai machine learning and the massive growth of data outside of the data center requires new architectures to keep up as we've reported the traditional moore's law curve is flattening and along with that we've seen new packages with alternative processors like gpus npus accelerators and the like and the rising importance of supporting hardware to offload tasks like storage and security and it's created a massive challenge to connect all these components together the storage the memories and all of the enabling hardware and do so securely at very low latency at scale and of course cost effectively this is the topic of today's segment the shift from a world that is cpu centric to one where the connectivity of the various hardware components is where much of the innovation is occurring and to talk about that there is no company who knows more about this topic than broadcom and with us today is jazz tremblay who was general manager data center solutions group at broadcom jazz welcome to thecube hey dave thanks for having me really appreciate it yeah you bet now broadcom is a company that a lot of people might not know about i mean but the vast majority of the internet traffic flows through broadcom products like pretty much all of it it's a company with trailing 12-month revenues of nearly 29 billion and a 240 billion dollar market cap jazz what else should people know about broadcom well they've uh 99 of the internet traffic goes through broadcom silicon or devices and i think what people are not often aware of is how breath it is it starts with the devices phones and tablets that use our wi-fi technology or rf filters and then those connect to access points either at home at work or public access points using your wi-fi technology and if you're working from home you're using a residential or broadband gateway and that uses broadcom technology also from there you go to access networks core networks and eventually you'll work your way into the data center all connected by bartcom so really we're at the heart of enabling this connectivity ecosystem and we're at the core of it we're a technology company we invest about five billion dollars a year in r d and as you were saying or last year we achieved 27.5 billion of revenue and our mission is really to connect the ecosystem to enable what you said this transformation around the data centric world so talk about your scope of responsibility what's your what's your role generally and specifically with storage so i've been with the company for 16 years and i head up the data center solutions group which includes three product franchises a pci fabric storage connectivity and broadcom ethernet nics so my chart and my team's charter is really server connectivity inside the data center and and what specifically is broadcom doing in storage jazz so it's been quite a journey uh over the past eight years we've made a series of acquisition and built up a pretty impressive storage portfolio this first started with lsi and that's where i came from and the team here came from lsi that had two product franchises around storage the first one was server connectivity hba raid expanders for ssds and hdds the second product group was actually chips that go inside the hard drives so socs and preamps so that was an acquisition that we made and actually that's how i came into the broadcom group through lsi the next acquisition we made was plx the industry's leader in pcie fabrics they've been doing pcie switches for about 15 years we acquired the company and really saw an acceleration in the requirements for nvme attach and ai ml fabrics very specialized low latency fabrics after that we acquired a large system and software company brocade and dave if you recall brocade they're the market leader in fiber channel switching this is where if you're a financial or government institution you want to build a mission critical ultra secure really best in class storage network following brocade acquisition we acquired mulx that is now the number one provider of fibre channel adapters inside servers and the last acquisition for this puzzle was was actually broadcom where avago acquired broadcom and took on the broadcom name and there we acquired um ethernet switching capabilities and ethernet adapters that go into storage servers or external storage systems so with all this it's been quite the journey to build up this portfolio uh we're number one in each of these storage product categories and we now have four divisions that are focused on storage connectivity you know that's quite remarkable when you think about i mean i know all these companies that you were talking about and they were they were very quality companies but they were kind of bespoke and the fact that you had the vision to kind of connect the dots and now take responsibility for that integration we're going to talk about what that means in terms of competitive advantage but but i wonder if we could zoom out and maybe you could talk about the key storage challenges and elaborate a little bit on why connectivity is now so so important like what are the trends that are driving that shift that we talked about earlier from a cpu-centric world the one that's connectivity-centric i think at broadcom we recognize the importance of storage and storage connectivity and if you look at data centers whether it be private public cloud or hybrid data centers they're getting inundated with data if you look at the digital universe it's growing at about 23 keger day so over a course of four to five years you're doubling the amount of new information and that poses really two key challenges for the infrastructure the first one is you have to take all this data and for a good chunk of it you have to store it be able to access it and protect it the second challenge is you actually have to go and analyze and process this data and doing this at scale that's the uh the key challenge and what we're seeing these data centers uh getting a tsunami of data and historically they've been cpu-centric architectures and what that means is the cpu is at the heart of the data center and a lot of the workloads are processed by software running on the cpu we believe that we're currently transforming the architecture from cpu centric to connectivity centric and what we mean by connectivity centric is you architect your data center thinking about the connectivity first and the goal of the connectivity is to use all the components inside the data center the memory the spinning media the flash storage the networking the specialized accelerators the fpga all these elements and use them for what they're best at to process all this data and the goal uh dave is really to drive down power and deliver the performance so that we can achieve all the innovation we want inside the data centers so it's really a shift from cpu centric to uh bringing in more specialized components and architecting the the connectivity inside the data center to help we think that's a really important part okay so you have this need for connectivity at scale you mentioned and you're dealing with massive massive amounts of data i mean we're going to look back the last decade and say oh that you've seen nothing compared to to the when we get to 2030. but you at the same time you have to control costs so what are the technical challenges to achieving that vision so it's really challenging it's it's not that complex to build up faster bigger solution if you have no cost or or power budget and really the key challenges that our team is facing working with customers is first i'd say it's architectural challenges so we would all like to have one fabric that i think to connect all the devices and bring us all the characteristics that we need but the reality is we can't we can't do that so you need distinct fabrics inside the data center and you need them to work together you'll need an ethernet backbone in some cases you'll need a fiber channel network in some cases you'll need a small fabric for thousands or hundreds of thousands of hdds you will need pcie fabrics for aiml servers and and one of the key architectural challenges is which fabric do you use when and how do you develop these fabrics to meet their purpose-built needs the that's one thing the second architectural challenge dave is what i challenge my team with is example how do i double bandwidth while reducing net power double bandwidth reducing that power how do i take a storage controller and increase the iops by 10x and will i locate only 50 more power budget so that equation requires tremendous uh innovation and that's really what we focus on and power is becoming more and more important in that equation so you've got decisions from an architecture perspective as to which fabric to use you've got this architectural challenge around we need to innovate and do things smarter better to drive down power while delivering more performance then if you take those things together the problem statement becomes more complex so you've had these silicon devices with complex firmware on them that need to interoperate with multiple devices they're getting more and more complex so there's execution challenges and what we need to do and what we're investing to do is shift left quality so to do these complex devices they come out tight time to market with high quality and one of the key things that we've invested in is emulation of the environment before you tape out your silicon so effectively taking the application software running it on an emulation environment making sure that works running your tests before you tape out and that ensures quality silicon so it's it's challenging but the team loves challenges and that's kind of what we're facing on one hand architectural challenges on the other hand a new level of execution challenges so you're you're compressing the time to final tape out you know versus maybe traditional techniques and then you mentioned architecture my am i right jazz that you're essentially from an architectural standpoint trying to minimize the because your latency is so important you're trying to minimize the amount of data that you have to move around and actually bringing you know compute to the data is that the right way to think about it well i think there's multiple parts of the problem one of them is you need to do more data transactions example data protection with rate algorithms we need to do millions of transactions per second and the only way to achieve this with the minimal power impact is to hardware accelerate these that's one piece of investment the other investment is um you're absolutely right dave so it's shuffling the data around the data center so in the data center in some cases you need to have multiple pieces of the puzzle multiple ingredients processing the same data at the same time and you need advanced methodologies to share the data and avoid moving it all over the data center so that's another big piece of investment that we're focused on okay yeah so let's let's stay on that because i see this as disruptive you talk about spending five billion dollars you know a year in r d um and talk a little bit more about the disruptive technologies or the supportive technologies that you're introducing specifically to support this vision so let's break it down in a couple couple big industry problems that our team is focused on so the first one is i'll take an enterprise workload database if you want the fastest running database you want to utilize local storage nvme based drives and you need to protect that data and raid is the mechanism of choice to protect your data in local environments and there what we need to do is really just do the transactions a lot faster historically the storage has been a bit of a bottleneck in these types of applications for example our newest generation product we're doubling the bandwidth increasing iops by 4x but more importantly we're accelerating rate rebuilds by 50x and that's important dave if you are using a database in some cases you limit the size of that database based on how fast you can do those rebuilds so this 50x acceleration and rebuilds is something we're getting a lot of good feedback on for customers the the last metric we're really focused on is right latency so how fast can the cpu send the right to the storage connectivity subsystem and commit it to drives and we're improving that by 60x generation of regeneration so we're talking fully loaded latency 10 microseconds so from an enterprise workload it's about data protection much much faster using nvme drives that's one big problem the other one is if you're um if you look at dave youtube facebook tick tock the amount of user generated content specifically video content that they're producing on an hour-by-hour basis is mind-boggling and the hyperscale customers are really counting on us to help them scale the connectivity of hundreds of thousands of hard drive to store and access all that data in a very reliable way so there we're leading the industry in the transition to 24 gig sas and multi-actuator drives third big problem is around aiml servers so these are some of the highest performance servers that they basically need super low latency connectivity between gp gpus networking nvme drives cpus and orchestrate that all together and the fabric of choice for that is pcie fabric so here we're talking about 150 nanosecond latency in a pcie fabric fully non-blocking very reliable and here we're helping the industry transition from pca gen 4 to pcie gen 5. and the last piece is okay i've got a aiml server i have a storage system with hard drives or a storage server in the enterprise space all these devices systems need to be connected to the ethernet backbone and my team is heavily investing in ethernet mix transitioning to 100 gig 200 gig 400 gig and putting capabilities optimized for uh for storage workloads so those are kind of the four big things that we're focused on at the industry level from a connectivity perspective dave yeah and that makes a lot of sense and really resonates uh particularly as we have that shift from a cpu centric to a connectivity center because the other thing you said i mean you talk about 50x raid rebuild times you know one thing a couple of things you know in storage is if you ask the question what happens when something goes wrong because it's all about recovery you can't lose data and the other thing you mentioned is write latency which has always been you know the problem okay reads i can read out of cash but ultimately you've got to get it to where it's persisted so some real technical challenges there that you guys are dealing with absolutely dave yeah and uh these these are these are the type of problems that gets the engineers excited give them really tough tough technical problems to go solve i wonder if we could take a couple of examples or an example of scaling with a large customer for instance obviously hyperscalers or take a company like dell i mean they're a big company big customer take us through that so i we use the word scale a lot at broadcom we work with some of the industry leaders and data centers and oems and scale means different things to them so example if i'm working with a hyperscaler that is getting inundated with data and they need half a million storage controllers to store all that that data well their scale problem is can you deliver and dave you know how much of a hot topic that is these days so they need a partner that can scale from a delivery perspective but if i take a company like example dell that's very focused on storage from storage servers their acquisition of emc they have a very broad portfolio of data center storage offerings and scale to them from a broadcom from a connected by broadcom perspective means that you need to have the investment scale to meet their end-to-end requirements all the way from a low end storage connectivity solution for booting a server all the way up to a very high-end all-flash array or high-density hdd system so they want a company a partner that can invest and has a scale to invest to meet their end-to-end requirements second thing is their different products are unique and have different requirements and you need to adapt your collaboration model for example some products within dell portfolio might say i just want a storage adapter plug it in the operating system will automatically recognize it i need this turnkey i want to do minimal investment this is not an area of high differentiation for me at the other end of the spectrum they may have applications where they want deep integration with their management and our silicon tools so that they can deliver the highest quality highest performance to their customers so they need a partner that can scale from an r d investment perspective from a silicon software and hardware perspective but they also need a company that can scale from support and business model perspective and give them the flexibility that their end customers need so dell is a great company to work with we have a long lasting relationship with them and the relationship is very deep in some areas example server storage and it's also quite broad they they are adopters of the vast majority of our storage uh connectivity products well and i imagine it was i want to talk about the uniqueness of broadcom and again it's i'm in awe of the fact that somebody had the vision you guys your team uh obviously your ceo was one of the visionaries in the industry had the sense to to look out and say okay we can put these these pieces together so i would imagine a company like dell you've they're able to consolidate their their vendor their supplier base and push you for integration in an innovation in innovation how unique is that is the broadcom model what's compelling you know to your customers about that model so i think what's unique from a storage perspective is the breadth of the portfolio and also the scale at which we can invest so you know if you look at some of the things we talked about from a scale perspective how data centers throughout the world are getting inundated with data dave they need help and we need to equip them with cutting edge technology to increase performance drive down power improve reliability so so they need partners that in each of the the product categories that they you partner with them on you know we can invest with scale so that's i think one of the first things the second thing is if you look at this connectivity-centric data center you need multiple types of fabric and whether it be cloud customers or large oems they are organizing themselves to be able to look at things holistically they're no longer product company they're very data center architecture companies and um so it's good for them to have a partner that can look across product groups across division says okay this is the innovation we need to bring to market these are the problems we need to go solve and they really appreciate that and i think the last thing is a flexible business model within example my division we're we offer different business models different engagement and collaboration models with technology but there's another division that if you want to innovate at the silicon level and build custom silicon for you like many of the hyper scalers or other companies are doing that division is just focus on that so i i feel like broadcom is unique from a storage perspective its ability to innovate breadth a portfolio and the flexibility in the collaboration model to help our customers solve their customers problems so you're saying you can deal with merchant products slash open products or you can do highly you know high customization where does software differentiation fit into this model so it's it's actually one of the most important elements uh i think a lot of our customers take it for granted that will take care of the silicon we'll anticipate the requirements and deliver the performance that they need but from a software firmware driver utilities that is where a lot of differentiation lies some cases will offer an sdk model where you know customers can build their entire applications on top of that in some cases they want a complete turnkey solution where you take technology integrate it into server and the operating system recognizes it and you have out of box drivers from broadcom so we need to offer them that flexibility because you know their needs are quite quite broad there so last question what's the future of the business look like to jazz tremblay give us your point of view on that well uh it's it's fun i gotta tell you dave we're having uh we're having a great time we've got i've got a great team they're uh the world's experts on storage connectivity and working with them is a pleasure and we've got a rich great set of customers that are giving us cool problems to go solve and we're excited about it so i think this is really you know with the acceleration of all this digital transformation that we're seeing we're excited we're having fun and i think there's a lot of problems to be solved and we also have a responsibility i think the ecosystem in the industry is counting on our team to deliver the innovation from a storage connectivity perspective and i'll tell you dave we're having fun it's great but we take that responsibility uh pretty seriously jazz great stuff i really appreciate you laying all that out very important role you guys are playing you have a really unique perspective thank you thank you dave and thank you for watching this is dave vellante for the cube and we'll see you next time you

Published Date : May 5 2022

SUMMARY :

and the goal of the connectivity is to

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
thousandsQUANTITY

0.99+

27.5 billionQUANTITY

0.99+

12-monthQUANTITY

0.99+

16 yearsQUANTITY

0.99+

60xQUANTITY

0.99+

100 gigQUANTITY

0.99+

davePERSON

0.99+

10 microsecondsQUANTITY

0.99+

five billion dollarsQUANTITY

0.99+

200 gigQUANTITY

0.99+

400 gigQUANTITY

0.99+

last yearDATE

0.99+

24 gigQUANTITY

0.99+

second challengeQUANTITY

0.99+

50xQUANTITY

0.99+

fourQUANTITY

0.99+

Jas TremblayPERSON

0.99+

lsiORGANIZATION

0.99+

five yearsQUANTITY

0.99+

10xQUANTITY

0.99+

first oneQUANTITY

0.99+

2030DATE

0.99+

two productQUANTITY

0.98+

second thingQUANTITY

0.98+

dave vellantePERSON

0.98+

4xQUANTITY

0.98+

hundreds of thousands of hddsQUANTITY

0.98+

firstQUANTITY

0.98+

nearly 29 billionQUANTITY

0.97+

dellORGANIZATION

0.97+

secondQUANTITY

0.97+

todayDATE

0.97+

oneQUANTITY

0.97+

99QUANTITY

0.96+

one pieceQUANTITY

0.96+

half a million storage controllersQUANTITY

0.96+

three productQUANTITY

0.96+

about 15 yearsQUANTITY

0.96+

every couple of yearsQUANTITY

0.95+

eachQUANTITY

0.95+

two key challengesQUANTITY

0.95+

daveORGANIZATION

0.95+

240 billion dollarQUANTITY

0.93+

about five billion dollars a yearQUANTITY

0.92+

youtubeORGANIZATION

0.92+

bartcomORGANIZATION

0.9+

second productQUANTITY

0.9+

one fabricQUANTITY

0.9+

avagoORGANIZATION

0.89+

50 moreQUANTITY

0.89+

four big thingsQUANTITY

0.88+

intelORGANIZATION

0.88+

one big problemQUANTITY

0.87+

last decadeDATE

0.87+

about 150 nanosecondQUANTITY

0.86+

BroadcomPERSON

0.85+

hundreds of thousands of hardQUANTITY

0.85+

facebookORGANIZATION

0.84+

a yearQUANTITY

0.83+

millions of transactions per secondQUANTITY

0.82+

one thingQUANTITY

0.8+

broadcomORGANIZATION

0.79+

one of themQUANTITY

0.79+

four divisionsQUANTITY

0.78+

one of the most important elementsQUANTITY

0.78+

third bigQUANTITY

0.78+

lot of peopleQUANTITY

0.74+

key thingsQUANTITY

0.72+

pcieORGANIZATION

0.7+

decadesQUANTITY

0.67+

coupleQUANTITY

0.61+

about 23QUANTITY

0.59+

piecesQUANTITY

0.57+

pca gen 4OTHER

0.51+

couple bigQUANTITY

0.5+

Jas Tremblay, Broadcom


 

(upbeat music) >> For decades the technology industry had marched the cadence of Moore's law. It was a familiar pattern. System OEMs would design in the next generation of Intel microprocessors, every couple of years or so maybe bump up the memory ranges periodically and the supporting hardware would kind of go along for the ride, upgrading its performance and bandwidth. System designers then they might beef up the cache, maybe throwing some more spinning disc spindles at the equation to create a balanced environment. And this was pretty predictable and consistent in the pattern and was reasonably straightforward compared to is challenges. This has all changed. The confluence of cloud, distributed global networks, the diversity of applications, AI, machine learning and the massive growth of data outside of the data center requires new architectures to keep up. As we've reported the traditional Moore's Law curve is flattening. And along with that we've seen new packages with alternative processors like GPUs, NPUs, accelerators and the like and the rising importance of supporting hardware to offload tasks like storage and security. And it's created a massive challenge to connect all these components together, the storage, the memories and all of the enabling hardware and do so securely at very low latency at scale and of course, cost effectively. This is the topic of today's segment. The shift from a world that is CPU centric to one where the connectivity of the various hardware components is where much of the innovation is occurring. And to talk about that, there is no company who knows more about out this topic than Broadcom. And with us today is Jas Tremblay, who is general manager, data center solutions group at Broadcom. Jas, welcome to theCUBE. >> Hey Dave, thanks for having me, really appreciate it. >> Yeah, you bet. Now Broadcom is a company that a lot of people might not know about. I mean, but the vast majority of the internet traffic flows through Broadcom products. (chuckles) Like pretty much all of it. It's a company with trailing 12 month revenues of nearly 29 billion and a 240 billion market cap. Jas, what else should people know about Broadcom? >> Well, Dave, 99% of the internet traffic goes through Broadcom silicon or devices. And I think what people are not often aware of is how breadth it is. It starts with the devices, phones and tablets that use our Wi-Fi technology or RF filters. And then those connect to access points either at home, at work or public access points using our Wi-Fi technology. And if you're working from home, you're using a residential or broadband gateway and that uses Broadcom technology also. From there you go to access networks, core networks and eventually you'll work your way into the data center, all connected by Broadcom. So really we're at the heart of enabling this connectivity ecosystem and we're at the core of it, we're a technology company. We invest about 5 billion a year in R&D. And as you were saying our last year we achieved 27.5 billion of revenue. And our mission is really to connect the ecosystem to enable what you said, this transformation around the data-centric world. >> So talk about your scope of responsibility. What's your role generally and specifically with storage? >> So I've been with the company for 16 years and I head up the data center solutions group which includes three product franchises PCA fabric, storage connectivity and Broadcom ethernet nics. So my charter, my team's charter is really server connectivity inside the data center. >> And what specifically is Broadcom doing in storage, Jas? >> So it's been quite a journey. Over the past eight years we've made a series of acquisition and built up a pretty impressive storage portfolio. This first started with LSI and that's where I came from. And the team here came from LSI that had two product franchises around storage. The first one was server connectivity, HBA raid, expanders for SSDs and HDDs. The second product group was actually chips that go inside the hard drives. So SOCs and pre amps. So that was an acquisition that we made and actually that's how I came into the Broadcom group through LSI. The next acquisition we made was PLX, the industry's leader in PCIe fabrics. They'd been doing PCIe switches for about 15 years. We acquired the company and really saw an acceleration in the requirements for NVMe attached and AI ML fabrics, very specialized, low latency fabrics. After that, we acquired a large system and software company, Brocade, and Dave if you recall, Brocade they're the market leader in fiber channel switching, this is where if you're financial or government institution you want to build a mission critical, ultra secure really best in class storage network. Following Brocade acquisition we acquired Emulex that is now the number one provider of fiber channel adapters inside servers. And the last acquisition for this puzzle was actually Broadcom where Avago acquired Broadcom and took on the Broadcom name. And there we acquired ethernet switching capabilities and ethernet adapters that go into storage servers or external storage systems. So with all this it's been quite the journey to build up this portfolio. We're number one in each of these storage product categories. And we now have four divisions that are focused on storage connectivity. >> That's quite remarkable when you think about it. I mean, I know all these companies that you were talking about and they were very quality companies but they were kind of bespoke in the fact that you had the vision to kind of connect the dots and now take responsibility for that integration. We're going to talk about what that means in terms of competitive advantage, but I wonder if we could zoom out and maybe you could talk about the key storage challenges and elaborate a little bit on why connectivity is now so important. Like what are the trends that are driving that shift that we talked about earlier from a CPU centric world to one that's connectivity centric? >> I think at Broadcom, we recognize the importance of storage and storage connectivity. And if you look at data centers whether it be private, public cloud or hybrid data centers, they're getting inundated with data. If you look at the digital universe it's growing at about 23% a day. So over a course of four to five years you're doubling the amount of new information and that poses really two key challenges for the infrastructure. The first one is you have to take all this data and for a good chunk of it, you have to store it, be able to access it and protect it. The second challenge is you actually have to go and analyze and process this data and doing this at scale that's the key challenge and what we're seeing these data centers getting a tsunami of data. And historically they've been CPU centric architectures. And what that means is the CPU's at the heart of the data center. And a lot of the workloads are processed by software running on the CPU. We believe that we're currently transforming the architecture from CPU centric to connectivity centric. And what we mean by connectivity centric is you architect your data center thinking about the connectivity first. And the goal of the connectivity is to use all the components inside the data center, the memory, the spinning media, the flash storage, the networking, the specialized accelerators, the FPGA all these elements and use them for what they're best at to process all this data. And the goal Dave is really to drive down power and deliver the performance so that we can achieve all the innovation we want inside the data centers. So it's really a shift from CPU centric to bringing in more specialized components and architecting the connectivity inside the data center to help. We think that's a really important part. >> So you have this need for connectivity at scale, you mentioned, and you're dealing with massive, massive amounts of data. I mean, we're going to look back to the last decade and say, oh, you've seen nothing compared to when we get to 2030, but at the same time you have to control costs. So what are the technical challenges to achieving that vision? >> So it's really challenging. It's not that complex to build up faster, bigger solution, if you have no cost or power budget. And really the key challenges that our team is facing working with customers is first, I'd say it's architectural challenges. So we would all like to have one fabric that aim to connect all the devices and bring us all the characteristics that we need. But the reality is, we can't do that. So you need distinct fabrics inside the data center and you need them to work together. You'll need an ethernet backbone. In some cases, you'll need a fiber channel network. In some cases, you'll need a small fabric for thousands or hundreds of thousands of HDDs. You will need PCIe fabrics for AI ML servers. And one of the key architectural challenges is which fabric do you use when and how do you develop these fabrics to meet their purpose built needs. That's one thing. The second architectural challenge, Dave is what I challenge my team with is example, how do I double bandwidth while reducing net power, double bandwidth, reducing net power? How do I take a storage controller and increase the IOPS by 10X and will allocate only 50% more power budget? So that equation requires tremendous innovation. And that's really what we focus on and power is becoming more and more important in that equation. So you've got decisions from an architecture perspective as to which fabric to use. You've got this architectural challenge around we need to innovate and do things smarter, better, to drive down power while delivering more performance. Then if you take those things together the problem statement becomes more complex. So you've had these silicon devices with complex firmware on them that need to inter-operate with multiple devices. They're getting more and more complex. So there's execution challenges and what we need to do. And what we're we're investing to do is shift left quality. So to do these complex devices that they come out time to market with high quality. And one of the key things Dave that we've invested in is emulation of the environment before you tape out your silicon. So effectively taking the application software, running it on an emulation environment, making sure that works, running your tests before you tape out and that ensures quality silicon. So it's challenging, but the team loves challenges. And that's kind of what we're facing, on one hand architectural challenges, on the other hand a new level of execution challenges. >> So you're compressing the time to final tape out versus maybe traditional techniques. And then, you mentioned architecture, am I right Jas that you're essentially from an architectural standpoint trying to minimize the... 'cause your latency's so important you're trying to minimize the amount of data that you have to move around and actually bringing compute to the data. Is that the right way to think about it? >> Well, I think that there's multiple parts of the problem. One of them is you need to do more data transactions, example data protection with rate algorithms. We need to do millions of transactions per second. And the only way to achieve this with the minimal power impact is to hardware accelerate these. That's one piece of investment. The other investment is, you're absolutely right, Dave. So it's shuffling the data around the data center. So in the data center in some cases you need to have multiple pieces of the puzzle, multiple ingredients processing the same data at the same time and you need advanced methodologies to share the data and avoid moving it all over the data center. So that's another big piece of investment that we're focused on. >> So let's stay on that because I see this as disruptive. You talk about spending $5 billion a year in R&D and talk a little bit more about the disruptive technologies or the supportive technologies that you're introducing specifically to support this vision. >> So let's break it down in a couple big industry problems that our team is focused on. So the first one is I'll take an enterprise workload database. If you want the fastest running database you want to utilize local storage and NVMe based drives and you need to protect that data. And raid is the mechanism of choice to protect your data in local environments. And there what we need to do is really just do the transactions a lot faster. Historically the storage has been a bit of a bottleneck in these types of applications. So example our newest generation product. We're doubling the bandwidth, increasing IOPS by four X, but more importantly we're accelerating raid rebuilds by 50X. And that's an important Dave, if you are using a database in some cases, you limit the size of that database based on how fast you can do those rebuilds. So this 50X acceleration in rebuilds is something we're getting a lot of good feedback on for customers. The last metric we're really focused on is write latency. So how fast can the CPU send the write to the storage connectivity subsystem and committed to drives? And we're improving that by 60X generation over generation. So we're talking fully loaded latency, 10 microseconds. So from an enterprise workload it's about data protection, much, much faster using NVMe drives. That's one big problem. The other one is if you look at Dave YouTube, Facebook, TikTok the amount of user generated content specifically video content that they're producing on an hour by hour basis is mind boggling. And the hyperscale customers are really counting on us to help them scale the connectivity of hundreds of thousands of hard drive to store and access all that data in a very reliable way. So there we're leading the industry in the transition to 24 gig SaaS and multi actuator drives. Third big problem is around AI ML servers. So these are some of the highest performance servers, that they basically need super low latency connectivity between GPGPUs, networking, NVMe drives, CPUs and orchestrate that all together. And the fabric of choice for that is PCIe fabric. So here, we're talking about 115 nanosecond latency in a PCIe fabric, fully nonblocking, very reliable. And here we're helping the industry transition from PCA gen four to PCIe gen five. And the last piece is okay, I've got a AI ML server, I have a storage system with hard drives or a storage server in the enterprise space. All these devices, systems need to be connected to the ethernet backbone. And my team is heavily investing in ethernet mix transitioning to 100 gig, 200 gig, 400 gig and putting capabilities optimized for storage workloads. So those are kind of the four big things that we're focused on at the industry level, from a connectivity perspective, Dave. >> And that makes a lot of sense and really resonates particularly as we have that shift from a CPU centric to a connectivity centric. And the other thing you said, I mean, you're talking about 50X rate rebuild times, a couple of things you know in storage is if you ask the question, what happens when something goes wrong? 'Cause it's all about recovery, you can't lose data. And the other thing you mentioned is write latency, which has always been the problem. Okay, reads, I can read out cache but ultimately you've got to get it to where it's persisted. So some real technical challenges there that you guys are dealing with. >> Absolutely, Dave. And these are the type of problems that gets the engineers excited. Give them really tough technical problems to go solve. >> I wonder if we could take a couple of examples or an example of scaling with a large customer, for instance obviously hyperscalers or take a company like Dell. I mean they're big company, big customer. Take us through that. >> So we use the word scale a lot at Broadcom. We work with some of the industry leaders and data centers and OEMs and scale means different things to them. So example, if I'm working with a hyperscaler that is getting inundated with data and they need half a million storage controllers to store all that data, well their scale problem is, can you deliver? And Dave, you know how much of a hot topic that is these days. So they need a partner that can scale from a delivery perspective. But if I take a company like example Dell that's very focused on storage, from storage servers, their acquisition of EMC. They have a very broad portfolio of data center storage offerings and scale to them from a connected by Broadcom perspective means that you need to have the investment scale to meet their end to end requirements. All the way from a low end storage connectivity solution for booting a server all the way up to a very high end all flash array or high density HDD system. So they want a company a partner that can invest and has a scale to invest to meet their end to end requirements. Second thing is their different products are unique and have different requirements and you need to adapt your collaboration model. So example, some products within Dell portfolio might say, I just want a storage adaptor, plug it in, the operating system will automatically recognize it. I need this turnkey. I want to do minimal investment, is not an area of high differentiation for me. At the other end of the spectrum they may have applications where they want deep integration with their management and our silicon tools so that they can deliver the highest quality, highest performance to their customers. So they need a partner that can scale from an R&D investment perspective from silicon software and hardware perspective but they also need a company that can scale from support and business model perspective and give them the flexibility that their end customers need. So Dell is a great company to work with. We have a long lasting relationship with them and the relationship is very deep in some areas, example server storage, and is also quite broad. They are adopters of the vast majority of our storage connectivity products. >> Well, and I imagine it was. Well I want to talk about the uniqueness of Broadcom again, I'm in awe of the fact that somebody had the vision, you guys, your team obviously your CEO was one of the visionaries of the industry, had the sense to look out and say, okay, we can put these pieces together. So I would imagine a company like Dell, they're able to consolidate their vendor their supplier base and push you for integration and innovation. How unique is the Broadcom model? What's compelling to your customer about that model? >> So I think what's unique from a storage perspective is the breadth of the portfolio and also the scale at which we can invest. So if you look at some of the things we talked about from a scale perspective how data centers throughout the world are getting inundated with data, Dave, they need help. And we need to equip them with cutting edge technology to increase performance, drive down power, improve reliability. So they need partners that in each of the product categories that you partner with them on, we can invest with scale. So that's, I think one of the first things. The second thing is, if you look at this connectivity centric data center you need multiple types of fabric. And whether it be cloud customers or large OEMs they are organizing themselves to be able to look at things holistically. They're no longer product company, they're very data center architecture companies. And so it's good for them to have a partner that can look across product groups across divisions says, okay this is the innovation we need to bring to market. These are the problems we need to go solve and they really appreciate that. And I think the last thing is a flexible business model. Within example, my division, we offer different business models, different engagement and collaboration models with technology. But there's another division that if you want to innovate at the silicon level and build custom silicon for you like many of the hyperscalers or other companies are doing that division is just focus on that. So I feel like Broadcom is unique from a storage perspective it's ability to innovate, breadth of portfolio and the flexibility in the collaboration model to help our customers solve their customers problems. >> So you're saying you can deal with merchant products slash open products or you can do high customization. Where does software differentiation fit into this model? >> So it's actually one of the most important elements. I think a lot of our customers take it for granted that will take care of the silicon will anticipate the requirements and deliver the performance that they need, but from a software, firmware, driver, utilities that is where a lot of differentiation lies. Some cases we'll offer an SDK model where customers can build their entire applications on top of that. In some cases they want to complete turnkey solution where you take technology, integrate it into server and the operating system recognizes it and you have outer box drivers from Broadcom. So we need to offer them that flexibility because their needs are quite broad there. >> So last question, what's the future of the business look like to Jas Tremblay? Give us your point of view on that. >> Well, it's fun. I got to tell you, Dave, we're having a great time. I've got a great team, they're the world's experts on storage connectivity and working with them is a pleasure. And we've got a rich, great set of customers that are giving us cool problems to go solve and we're excited about it. So I think this is really, with the acceleration of all this digital transformation that we're seeing, we're excited, we're having fun. And I think there's a lot of problems to be solved. And we also have a responsibility. I think the ecosystem and the industry is counting on our team to deliver the innovation from a storage connectivity perspective. And I'll tell you, Dave, we're having fun. It's great but we take that responsibility pretty seriously. >> Jas, great stuff. I really appreciate you laying all that out. Very important role you guys are playing. You have a really unique perspective. Thank you. >> Thank you, Dave. >> And thank you for watching. This is Dave Vellante for theCUBE and we'll see you next time.

Published Date : Apr 28 2022

SUMMARY :

and all of the enabling hardware me, really appreciate it. of the internet traffic flows Well, Dave, 99% of the internet traffic and specifically with storage? inside the data center. And the last acquisition for this puzzle kind of connect the dots And a lot of the workloads are processed but at the same time you And one of the key things Dave the time to final tape out So in the data center or the supportive technologies So how fast can the CPU send the write And the other thing you said, that gets the engineers excited. or an example of scaling with and the relationship is that somebody had the vision, and also the scale at which we can invest. So you're saying you can and the operating system recognizes it look like to Jas Tremblay? of problems to be solved. I really appreciate you and we'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

DellORGANIZATION

0.99+

Dave VellantePERSON

0.99+

BroadcomORGANIZATION

0.99+

27.5 billionQUANTITY

0.99+

fourQUANTITY

0.99+

BrocadeORGANIZATION

0.99+

16 yearsQUANTITY

0.99+

100 gigQUANTITY

0.99+

12 monthQUANTITY

0.99+

thousandsQUANTITY

0.99+

200 gigQUANTITY

0.99+

400 gigQUANTITY

0.99+

24 gigQUANTITY

0.99+

five yearsQUANTITY

0.99+

EmulexORGANIZATION

0.99+

second challengeQUANTITY

0.99+

Jas TremblayPERSON

0.99+

60XQUANTITY

0.99+

99%QUANTITY

0.99+

240 billionQUANTITY

0.99+

FacebookORGANIZATION

0.99+

50XQUANTITY

0.99+

eachQUANTITY

0.99+

AvagoORGANIZATION

0.99+

10 microsecondsQUANTITY

0.99+

one pieceQUANTITY

0.99+

YouTubeORGANIZATION

0.99+

LSIORGANIZATION

0.99+

JasPERSON

0.99+

oneQUANTITY

0.99+

first oneQUANTITY

0.99+

firstQUANTITY

0.99+

second thingQUANTITY

0.99+

PLXORGANIZATION

0.99+

two key challengesQUANTITY

0.98+

last yearDATE

0.98+

10XQUANTITY

0.98+

2030DATE

0.98+

about 15 yearsQUANTITY

0.98+

EMCORGANIZATION

0.98+

hundreds of thousandsQUANTITY

0.98+

nearly 29 billionQUANTITY

0.97+

about 23% a dayQUANTITY

0.97+

IntelORGANIZATION

0.97+

todayDATE

0.97+

TikTokORGANIZATION

0.97+

$5 billion a yearQUANTITY

0.96+

Moore'sTITLE

0.95+

first thingsQUANTITY

0.94+

Andy Brown, Broadcom


 

(upbeat music) >> Hello and welcome to theCUBE. I'm Dave Nicholson, Chief Technology Officer at theCUBE and we are here for a very special Cube Conversation with Andy Brown from Broadcom. Andy, welcome to theCUBE, tell us a little about yourself. >> Well, a little bit about myself, my name is Andy Brown, I'm currently the Senior Director of Software Architecture and Performance Analysis here within the Data Center Solutions Group at Broadcom. I've been doing that for about seven years prior to that, I held various positions within the system architecture, systems engineering, and IC development organization, but ultimately as well as spent some time in our support organization and managing our support team. But ultimately have landed in the architecture organization as well as performance analysis. >> Great, so a lot of what you do is around improving storage performance, tell us more about that. >> So let me give you a brief history of storage from my perspective. As I mentioned, I go back about 30 years in my career and that would've started back in the NCR Microelectronics days. And originally with Parallel SCSI, so that would be, if anyone would remember the the 5380 Controller, which was one of the original Parallel SCSI controllers that existed in built by NCR Microelectronics at the time, I've seen the advent of Parallel SCSI, a stint of fiber channel, ultimately leading into the serialization of the SCSI standard into SaaS, as well as SATA, and then ultimately leading to NVMe protocols and the advent of flash moving from hard drives into a flash based media and as well on that's on the storage side on the host side, moving from parallel interfaces, ISA if everybody could remember that, moving to PCI, PCI Express and that's where we land today. >> So Andy, we are square in the middle of the era of both NVMe and SaaS. What kinds of challenges does that overlap represent? >> Well, I think obviously we've seen SaaS around for a while, it was the conversion from parallel into a serial attached SCSI and that really SaaS brings with it, the ability to connect really a high number of devices and was kind of the original scaling of devices. And really also enabled was one of the things that enabled flash based media, given the the speed and performance that came to the table. Of course NVMe came in as well with the promise of even higher speeds. And as we saw flash media really, really take a strong role in storage. NVMe came around and really was focused on trying to address that, whereas SaaS originated with hard drive technology. NVMe was really born out of how do we most efficiently deal with flash based media, SaaS with its. But SaaS still carries a benefit on scalability and NVMe maybe has, I don't want to say challenges there, but it's definitely was not designed as much to be broadly scale across many, many, say high hundreds or thousands of devices. But definitely addressed some of the performance issues that were coming up as flash media was becoming. So it was increasing the overall storage performance that we could experience if you will. >> Let's talk about host interfaces, PCIe. What's the significance there? >> Really all the storage in the world, all the performance in the world on the storage side, is not of much use to you unless you can really feed it into the beast, if you will, into the CPU and into the the rest of the service subsystem. And that's really where PCI comes into play. PCI originally was in parallel form and then moved to serial with the PCI Express as we know it today, and really has created a pathway to enable not only storage performance but any other adapter or any other networking or other types of technologies to just open up that pathway and feed the processor. And as we've moved through from PCI to PCI Express PCI 2.0 3.0 4.0, and just opening up those pipes has really enabled just a tremendous amount of flow of data into the compute engine, allowing it to be analyzed, sorted used to analyze data, big data, AI type applications. Just those pipes are critical in those types of applications. >> We know we've seen dramatic increases in performance, going from one generation of PCIe to the next. But how does that translate into the worlds of SaaS, SATA and NVMe? >> So from a performance perspective when we look at these different types of media whether it be SATA, SaaS or NVMe, of course, there are performance difference inherent in that media, SATA being probably the lowest performing with NVMe topping out at higher performing although SaaS can perform quite well as a flash based as protocol connected to flash based media. And of course, NVMe from an individual device scaling, from a by one to a by four interface, really that is where NVMe kind of has enabled a bigger pipe directly to the storage media, being able to scale up to by four whereas SaaS can limit it to by one, maybe by two in some cases, although most servers only connect the SaaS device of by one. So from a different perspective then you're really wanting to create a solution or enable the infrastructure to be able to consume that performance at NVMe is going to give you. And I think that is something where our solutions have really in the recent generation shined, at their ability to really now keep up with storage performance and NVMe, as well as provide that connectivity back down into the SaaS and SATA world as well. >> Let's talk about your perspective on RAID today. >> So there've been a lot of views and opinions on RAID over the years, it's been and those have been changing over time. RAID has been around for a very, very long time, probably about as long as again, going back over my 30 year career, it's been around for almost the entire time. Obviously RAID originally was viewed as some thing that was very, very necessary devices fail. They don't last forever, but the data that's on them is very, very important and people care about that. So RAID was brought about knowing that individual devices that are storing that data are going to fail, and really took cold as a primary mechanism of protection. But as time went on and as performance moved up both in the server and both in the media itself if we start talking about flash. RAID really took on, people started to look at traditional server storage RAID, well, maybe a more of a negative connotation. I think that because to be quite honest, it fell behind a little bit. If you look at things like parity RAID 5 and RAID 6, very, very effective efficient means of protecting your data, very storage efficient, but ultimately had some penalty a primarily around right performance, random rights in RAID 5 volumes was not keeping up with what really needed to be there. And I think that really shifted opinions of RAID that, "Hey it's just not, it's not going to keep up and we need to move on to other avenues." And we've seen that, we've seen disaggregated storage and other solutions pop up and protect your data obviously in cloud environments and things like that have shown up and they have been successful, but. >> So one of the drawbacks with RAID is always the performance tax associated with generating parody for parody RAID. What has Broadcom done to address those potential bottlenecks? >> We've really solved the RAID performance issue the right performance issue. We're in our latest generation of controllers we're exceeding a million RAID 5 right IOPS which is enough to satisfy many, many, many applications as a matter of fact, even in virtual environments aggregated solutions, we have multiple applications. And then as well in the rebuild arena, we really have through our architecture, through our hardware automation have been able to move the bar on that to where the rebuild not only the rebuild times have been brought down dramatically in SaaS based or in I'm sorry in flash based solutions. But the performance that you can observe while those rebuilds are going on is almost immeasurable. So in most applications you would almost observe no performance deficiencies during a rebuild operation which is really night and day compared to where things were just few short years ago. >> So the fact that you've been able to, dramatically decrease the time necessary for a RAID rebuild is obviously extremely important. But give us your overall performance philosophy from Broadcom's point of view. >> Over the years we have recognized that performance is obviously a critically important for our products, and the ability to analyze performance from many many angles is critically important. There are literally infinite ways you can look at performance in a storage subsystem. What we have done in our labs and in our solutions through not only hardware scaling in our labs, but also through automation scripts and things like that, have allowed us to collect a substantial amount of data to look at the performance of our solutions from every angle. IOPS, bandwidth application level performance, small topologies, large topologies, just many, many aspects. It still honestly only scratches the surface of all the possible performance points that you could gather, but we have moved them bar dramatically in that regard. And it's something that our customers really demanded of us. Storage technology has gotten more complex, and you have to look at it from a lot different angles, especially on the performance front to make sure that there are no holes there that somebody's going to run into. >> So based on specific customer needs and requests, you look at performance from a variety of different angles. What are some of the trends that you're seeing specifically in storage per performance today and moving into the future? >> Yeah, emerging trends within the storage industry. I think that to look at the emerging trends, you really need to go back and look at where we started. We started in compute where people were you would have basically your server that would be under the desk in a small business operation and individual businesses would have their own set of servers, and the storage would really be localized to those. Obviously the industry has recognized that to some extent, disaggregation of that, we see that obviously in what's happening in cloud, in hyper-converged storage and things like that. Those afford a tremendous amount of flexibility and are obviously great players in the storage world today. But with that flexibility has come some sacrifice and performance and actually quite substantial sacrifice. And what we're observing is almost, it comes back full circle. The need for inbox high performing server storage that is well protected. And with people with confidence that people have confidence that their data is protected and that they can extract the performance that they need for the demanding database applications, that still exists today, and that still operate in the offices around the country and around the world, that really need to protect their data on a local basis in the server. And I think that from a trend perspective that's what we're seeing. We also, from the standpoint of NVMe itself is really started out with, "Hey, we'll just software rate that. We'll just wrap software around that, we can protect the data." We had so many customers come back to us saying, you know what? We really need hardware RAID on NVMe. And when they came to us, we were ready. We had a solution ready to go, and we're able to provide that, and now we're seeing ongoing on demand. We are complimentary to other storage solutions out there. Server storage is not going to necessarily rule a world but it is surely has a place in the broader storage spectrum. And we think we have the right solution for that. >> Speaking of servers and server-based storage. Why would, for example, a Dell customer care about the Broadcom components in that Dell server. >> So let's say you're configuring a Dell server and you're going, why does hardware where RAID matter? What's important about that? Well, I think when you look at today's hardware RAID, first of all, you're going to see a dramatically better performance. You're going to see dramatically better performance it's going to enable you to put RAID 5 volumes a very effective and efficient mechanism for protecting your data, a storage efficient mechanism. You're going to use RAID 5 volumes where you weren't able to do that before, because when you're in the millions of IOPS range you really can satisfy a lot of application needs out there. And then you're going to also going to have rebuilt times that are lightning fast. Your performance is not going to degrade, when you're running those application, especially database applications, but not only database, but streaming applications, bandwidth to protected RAID volumes is almost imperceptively different from just raw bandwidth to the media. So the RAID configurations in today's Dell servers really afford you the opportunity to make use of that storage where you may not have already written it off as well RAID just doesn't, is not going to get me there. Quite frankly, into this in the storage servers that Dell is providing with RAID technology, there are huge windows open in what you can do today with applications. >> Well, all of this is obviously good news for Dell and Dell customers, thanks again, Andy for joining us, for this Cube Conversation, I'm Dave Nicholson for theCUBE. (upbeat music)

Published Date : Apr 28 2022

SUMMARY :

and we are here for a very I'm currently the Senior Great, so a lot of what you do and the advent of flash in the middle of the era and performance that came to the table. What's the significance there? and into the the rest of of PCIe to the next. have really in the Let's talk about your both in the server and So one of the drawbacks with RAID on that to where the rebuild So the fact that you've been able to, and the ability to analyze performance and moving into the future? and the storage would really about the Broadcom components in the storage servers and Dell customers, thanks

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave NicholsonPERSON

0.99+

AndyPERSON

0.99+

Andy BrownPERSON

0.99+

DellORGANIZATION

0.99+

twoQUANTITY

0.99+

Data Center Solutions GroupORGANIZATION

0.99+

BroadcomORGANIZATION

0.99+

millionsQUANTITY

0.99+

todayDATE

0.98+

oneQUANTITY

0.98+

hundredsQUANTITY

0.97+

bothQUANTITY

0.97+

theCUBEORGANIZATION

0.97+

PCI 2.0 3.0 4.0OTHER

0.97+

NCR MicroelectronicsORGANIZATION

0.94+

about 30 yearsQUANTITY

0.94+

PCI ExpressOTHER

0.94+

one generationQUANTITY

0.93+

a millionQUANTITY

0.92+

thousands of devicesQUANTITY

0.9+

fourQUANTITY

0.88+

few short years agoDATE

0.87+

Parallel SCSIOTHER

0.85+

RAID 5OTHER

0.84+

RAID 5TITLE

0.77+

30 yearQUANTITY

0.73+

NCRORGANIZATION

0.72+

RAID 6TITLE

0.71+

5380 ControllerCOMMERCIAL_ITEM

0.71+

Parallel SCSIOTHER

0.71+

one ofQUANTITY

0.7+

RAIDTITLE

0.68+

NVMeTITLE

0.64+

SaaSTITLE

0.63+

about seven yearsQUANTITY

0.6+

thingsQUANTITY

0.57+

PCIOTHER

0.54+

IOPSQUANTITY

0.47+

Kimberly Leyenaar, Broadcom


 

(upbeat music) >> Hello everyone, and welcome to this CUBE conversation where we're going to go deep into system performance. We're here with an expert. Kim Leyenaar is the Principal Performance Architect at Broadcom. Kim. Great to see you. Thanks so much for coming on. >> Thanks so much too. >> So you have a deep background in performance, performance assessment, benchmarking, modeling. Tell us a little bit about your background, your role. >> Thanks. So I've been a storage performance engineer and architect for about 22 years. And I'm specifically been for abroad with Broadcom for I think next month is going to be my 14 year mark. So what I do there is initially I built and I manage their international performance team, but about six years ago I moved back into architecture, and what my roles right now are is I generate performance projections for all of our next generation products. And then I also work on marketing material and I interface with a lot of the customers and debugging customer issues, and looking at how our customers are actually using our storage. >> Great. Now we have a graphic that we want to share. It talks to how storage has evolved over the past decade. So my question is what changes have you seen in storage and how has that impacted the way you approach benchmarking. In this graphic we got sort of big four items that impact performance, memory processor, IO pathways, and the storage media itself, but walk us through this data if you would. >> Sure. So what I put together is a little bit of what we've seen over the past 15 to 20 years. So I've been doing this for about 22 years and kind of going back and focusing a little bit on the storage, we looked back at hard disk, they ruled for. And nearly they had almost 50 years of ruling. And our first hard drive that came out back in the 1950s was only capable of five megabytes in capacity. and one and a half iOS per second. It had almost a full second in terms of seat time. So we've come a long way since then. But when I first came on, we were looking at Ultra 320 SCSI. And one of the biggest memories that I have of that was my office is located close to our tech support. And I could hear the first question was always, what's your termination like? And so we had some challenges with SCSI, and then we moved on into SAS and data protocols. And we continued to move on. But right now, back in the early 2000s when I came on board, the best drives really could do maybe 400 iOS per second. Maybe two 250 megabytes per second, with millisecond response times. And so when I was benchmarking way back when it was always like, well, IOPS are IOPS. We were always faster than what the drives to do. And that was just how it was. The drives were always the bottleneck in the system. And so things started changing though by the early 2000s, mid 2000s. We started seeing different technologies come out. We started seeing that virtualization and multi-tenant infrastructures becoming really popular. And then we had cloud computing that was well on the horizon. And so at this point, we're like, well, wait a minute, we really can't make processors that much faster. And so everybody got excited to include (indistinct) and the home came out but, they had two cores per processor and four cores per processor. And so we saw a little time period where actually the processing capability kind of pulled ahead of everybody else. And memory was falling behind. We had good old DVR, 2, 6, 67. It was new with the time, but we only had maybe one or two memory channels per processor. And then in 2007 we saw disk capacity hit one terabyte. And we started seeing a little bit of an imbalance because we were seeing these drives are getting massive, but their performance per drive was not really kind of keeping up. So now we see a revolution around 2010. And my co-worker and I at the time, we have these little USB discs, if you recall, we would put them in. They were so fast. We were joking at the time. "Hey, you know what, wonder if we could make a raid array out of these little USB disks?" They were just so fast. The idea was actually kind of crazy until we started seeing it actually happen. So in 2010 SSD started revolutionizing storage. And the first SSDs that we really worked with these plaint LS-300 and they were amazing because they were so over-provisioned that they had almost the same reader, right performance. But to go from a drive that could do maybe 400 IOS per second to a drive like 40,000 plus iOS per second, really changed our thought process about how our storage controller could actually try and keep up with the rest of the system. So we started falling behind. That was a big challenge for us. And then in 2014, NVMe came around as well. So now we've got these drives, they're 30 terabytes. They can do one and a half million iOS per second, and over 6,000 megabytes per second. But they were expensive. So people start relegating SSDs more towards tiered storage or cash. And as the prices of these drives kind of came down, they became a lot more mainstream. And then the memory channels started picking up. And they started doubling every few years. And we're looking now at DVR 5 4800. And now we're looking at cores that used to go from two to four cores per processor up to 48 with some of the latest different processes that are out there. So our ability to consume the computing and the storage resources, it's astounding, you know, it's like that whole saying, 'build it and they will come.' Because I'm always amazed, I'm like, how are we going to possibly utilize all this memory bandwidth? How are we going to utilize all these cores? But we do. And the trick to this is having just a balanced infrastructure. It's really critical. Because if you have a performance mismatch between your server and your storage, you really lose a lot of productivity and it does impact your revenue. >> So that's such a key point. Pardon, begin that slide up again with the four points. And that last point that you made Kim about balance. And so here you have these, electronic speeds with memory and IO, and then you've got the spinning disc, this mechanical disc. You mentioned that SSD kind of changed the game, but it used to be, when I looked at benchmarks, it was always the D stage bandwidth of the cash out to the spinning disc was always the bottleneck. And, you go back to the days of you it's symmetrics, right? The huge backend disk bandwidth was how they dealt with that. But, and then you had things the oxymoron of the day was high spin speed disks of a high performance disk. Compared to memories. And, so the next chart that we have is show some really amazing performance increases over the years. And so you see these bars on the left-hand side, it looks at historical performance for 4k random IOPS. And on the right-hand side, it's the storage controller performance for sequential bandwidth from 2008 to 2022. That's 22 is that yellow line. It's astounding the increases. I wonder if you could tell us what we're looking at here, when did SSD come in and how did that affect your thinking? (laughs) >> So I remember back in 2007, we were kind of on the precipice of SSDs. We saw it, the writing was on the wall. We had our first three gig SAS and SATA capable HPAs that had come out. And it was a shock because we were like, wow, we're going to really quickly become the bottleneck once this becomes more mainstream. And you're so right though about people work in, building these massive hard drive based back ends in order to handle kind of that tiered architecture that we were seeing that back in the early 2010s kind of when the pricing was just so sky high. And I remember looking at our SAS controllers, our very first one, and that was when I first came in at 2007. We had just launched our first SAS controller. We're so proud of ourselves. And I started going how many IOPS can this thing, even handled? We couldn't even attach enough drives to figure it out. So what we would do is we'd do these little tricks where we would do a five 12 byte read, and we would do it on a 4k boundary, so that it was actually reading sequentially from the disc, but we were handling these discrete IOPS. So we were like, oh, we can do around 35,000. Well, that's just not going to hit it anymore. Bandwidth wise we were doing great. Really our limitation and our bottleneck on bandwidth was always either the host or the backend. So, our controllers are there basically, there were three bottlenecks for our storage controllers. The first one is the bottleneck from the host to the controller. So that is typically a PCIe connection. And then there's another bottleneck on the controller to the disc. And that's really the number of ports that we have. And then the third one is the discs themselves. So in typical storage, that's what we look at. And we say, well, how do we improve this? So some of these are just kind of evolutionary, such as PCIE generations. And we're going to talk a little bit about that, but some of them are really revolutionary, and those are some of the things that we've been doing over the last five or six years to try and make sure that we are no longer the bottleneck. And we can enable these really, really fast drives. >> So can I ask a question? I'm sorry to interrupted but on these blue bars here. So these all spinning disks, I presume, out years they're not. Like when did flash come in to these blue bars? is that..you said 27 you started looking at it, but on these benchmarks, is it all spinning disc? Is it all flash? How should we interpret that? >> No, no. Initially they were actually all hard drives. And the way that we would identify, the max iOS would be by doing very small sequential reads to these hard drives. We just didn't have SSDs at that point. And then somewhere around 2010 is where we.. it was very early in that chart, we were able to start incorporating SSD technology into our benchmarking. And so what you're looking at here is really the max that our controller is capable of. So we would throw as many drives as we could and do what we needed to do in order to just make sure our controller was the bottleneck and what can we expose. >> So the drive then when SSD came in was no longer the bottleneck. So you guys had to sort of invent and rethink sort of how, what your innovation and your technology, because, I mean, these are astounding increases in performance. I mean, I think in the left-hand side, we've built this out pad, you got 170 X increase for the 4k random IOPS, and you've got a 20 X increase for the sequential bandwidth. How were you able to achieve that level of performance over time? >> Well, in terms of the sequential bandwidth, really those come naturally by increases in the PCIe or the SAS generation. So we just make sure we stay out of the way, and we enable that bandwidth. But the IOPS that's where it got really, really tricky. So we had to start thinking about different things. So, first of all, we started optimizing all of our pathways, all of our IO management, we increased the processing capabilities on our IO controllers. We added more on-chip memory. We started putting in IO accelerators, these hardware accelerators. We put in SAS poor kind of enhancements. We even went and improved our driver to make sure that our driver was as thin as possible. So we can make sure that we can enable all the IOPS on systems. But a big thing happening a few couple of generations ago was we started introducing something called tri capable controllers, which means that you could attach NVMe. You could attach SAS or you could attach SATA. So you could have this really amazing deployment of storage infrastructure based around your customized needs and your cost requirements by using one controller. >> Yeah. So anybody who's ever been to a trade show where they were displaying a glass case with a Winchester disc drive, for example, you see it's spinning and its actuators is moving, wow, that's so fast. Well, no. That's like a tourist slower. It's like a snail compared to the system's speed. So it's, in a way life was easy back in those days, because when you did a right to a disk, you had plenty of time to do stuff, right. And now it's changed. And so I want to talk about Gen3 versus Gen4, and how all this relates to what's new in Gen4 and the impacts of PCIe here, you have a chart here that you've shared with us that talks to that. And I wonder if you could elaborate on that, Kim. >> Sure. But first, you said something that kind of hit my funny bone there. And I remember I made a visit once about 15 or 20 years ago to IBM. And this gentleman actually had one of those old ones in his office and he referred to them as disk files. And he never until the day he retired, he'd never stopped calling them disc files. And it's kind of funny to be a part of that history. >> Yeah. DASD. They used to call it. (both laughing) >> SD, DASD. I used to get all kinds of, you know, you don't know what it was like back then, but yeah. But now nowadays we've got it quite easily enabled because back then, we had, SD DASD and all that. And then, ATA and then SCSI, well now we've got PCIe. And what's fabulous about PCIe is that it just has the generations are already planned out. It's incredible. You know, we're looking at right now, Gen3 moving to Gen4, and that's a lot about what we're going to be talking about. And that's what we're trying to test out. What is Gen4 PCIe when to bias? And it really is. It's fantastic. And PCIe came around about 18 years ago and Broadcom is, and we do participate and contribute to the PCIe SIG, which is, who develops the standards for PCIe, but the host in both our host interface in our NVMe desk and utilize the standards. So this is really, really a big deal, really critical for us. But if you take a look here, you can see that in terms of the capabilities of it, it's really is buying us a lot. So most of our drives right now NVMe drives tend to be by four. And a lot of people will connect them. And what that means is four lanes of NVMe and a lot of people that will connect them either at by one or by two kind of depending on what their storage infrastructure will allow. But the majority of them you could buy, or there are so, as you can see right now, we've gone from eight gig transfers per second to 16 gig of transfers per second. What that means is for a by four, we're going from one drive being able to do 4,000 to do an almost 8,000 megabytes per second. And in terms of those 4k IOPS that really evade us, they were really really tough sometimes to squeeze out of these drives, but now we're got 1 million, all we have to 2 million, it's just, it's insane. You know, just the increase in performance. And there's a lot of other standards that are going to be sitting on top of PCIe. So it's not going away anytime soon. We've got to open standards like CXL and things like that, but we also have graphics cards. You've got all of your hosts connections, they're also sitting on PCIe. So it's fantastic. It's backwards, it's orbits compatible, and it really is going to be our future. >> So this is all well and good. And I think I really believe that a lot of times in our industry, the challenges in the plumbing are underappreciated. But let's make it real for the audience because we have all these new workloads coming out, AI, heavily data oriented. So I want to get your thoughts on what types of workloads are going to benefit from Gen4 performance increases. In other words, what does it mean for application performance? You shared a chart that lists some of the key workloads, and I wonder if we could go through those. >> Yeah, yeah. I could have a large list of different workloads that are able to consume large amounts of data, whether or not it's in small or large kind of bytes of data. But as you know right now, and I said earlier, our ability to consume these compute and storage resources is amazing. So you build it and we'll use it. And the world's data we're expected to grow 61% to 175 zettabytes by the year 2025, according to IDC. So that's just a lot of data to manage. It's a lot of data to have, and it's something that's sitting around, but to be useful, you have to actually be able to access it. And that's kind of where we come in. So who is accessing it? What kind of applications? I spend a lot of time trying to understand that. And recently I attended a virtual conference SDC and what I like to do when I attend these conferences is to try to figure out what the buzz words are. What's everybody talking about? Because every year it's a little bit different, but this year was edge, edge everything. And so I kind of put edge on there first in, even you can ask anybody what's edge computing and it's going to mean a lot of different things, but basically it's all the computing outside of the cloud. That's happening typically at the edge of the network. So it tends to encompass a lot of real time processing on those instant data. So in the data is usually coming from either users or different sensors. It's that last mile. It's where we kind of put a lot of our content caching. And, I uncovered some interesting stuff when I was attending this virtual conference and they say only about 25% of all the usable data actually even reach the data center. The rest is ephemeral and it's localized, locally and in real time. So what it does is in the goal of edge computing is to try and reduce the bandwidth costs for these kinds of IOT devices that go over a long distance. But the reality is the growth of real-time applications that require these kinds of local processing are going to drive this technology forward over the coming years. So Dave, your toaster and your dishwasher they're, IOT edge devices probably in the next year, if they're not already. So edge is a really big one and consumes a lot of the data. >> The buzzword does your now is met the metaverse, it's almost like the movie, the matrix is going to come in real time. But the fact is it's all this data, a lot of videos, some of the ones that I would call out here, you mentioned facial recognition, real-time analytics. A lot of the edge is going to be real-time inferencing, applying AI. And these are just a massive, massive data sets that you again, you and of course your customers are enabling. >> When we first came out with our very first Gen3 product, our marketing team actually asked me, "Hey, how can we show users how they can consume this?" So I actually set up a head to environment. I decided I'm going to learn how to do this. I set up this massive environment with Hadoop, and at the time they called big data, the 3V's, I don't know if you remember these big 3Vs, the volume, velocity and variety. Well Dave, did you know, there are now 10 Vs? So besides those three, we got velocity, we got valued, we got variability, validity, vulnerability, volatility, visualization. So I'm thinking we need just to add another beat of that. >> Yeah. (both laughing) Well, that's interesting. You mentioned that, and that sort of came out of the big data world, a dupe world, which was very centralized. You're seeing the cloud is expanding, the world's getting, you know, data is by its very nature decentralized. And so you've got to have the ability to do an analysis in place. A lot of the edge analytics are going to be done in real time. Yes, sure. Some of it's going to go back in the cloud for detailed modeling, but we are the next decade Kim, ain't going to be like the last I often say. (laughing) I'll give you the last word. I mean, how do you see this sort of evolving, who's going to be adopting this stuff. Give us a sort of a timeframe for this kind of rollout in your world. >> In terms of the timeframe. I mean really nobody knows, but we feel like Gen5, that it's coming out next year. It may not be a full rollout, but we're going to start seeing Gen5 devices and Gen5 infrastructure is being built out over the next year. And then follow very, very, very quickly by Gen6. And so what we're seeing though is, we're starting to see these graphics processors, These GPU's, and I'm coming out as well, that are going to be connecting, using PCIe interfaces as well. So being able to access lots and lots and lots of data locally is going to be a really, really big deal and order because worldwide, all of our companies they're using business analytics. Data is money. And the person that actually can improve their operational efficiency, bolster those sales and increase your customer satisfaction. Those are the companies that are going on to win. And those are the companies that are going to be able to effectively store, retrieve and analyze all the data that they're collecting over the years. And that requires an abundance of data. >> Data is money and it's interesting. It kind of all goes back to when Steve jobs decided to put flash inside of an iPhone and the industry exploded, consumer economics kicked in 5G now edge AI, a lot of the things you talked about, GPU's the neural processing unit. It's all going to be coming together in this decade. Very exciting. Kim, thanks so much for sharing this data and your perspectives. I'd love to have you back when you got some new perspectives, new benchmark data. Let's do that. Okay. >> I look forward to it. Thanks so much. >> You're very welcome. And thank you for watching this CUBE conversation. This is Dave Vellante and we'll see you next time. (upbeat music)

Published Date : Nov 11 2021

SUMMARY :

Kim Leyenaar is the Principal So you have a deep a lot of the customers and how has that impacted the And I could hear the And, so the next chart that we have And it was a shock because we were like, in to these blue bars? And the way that we would identify, So the drive then when SSD came in Well, in terms of the And I wonder if you could And it's kind of funny to They used to call it. and a lot of people that will But let's make it real for the audience and consumes a lot of the data. the matrix is going to come in real time. and at the time they the ability to do an analysis And the person that actually can improve a lot of the things you talked about, I look forward to it. And thank you for watching

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

2007DATE

0.99+

BroadcomORGANIZATION

0.99+

2008DATE

0.99+

Kim LeyenaarPERSON

0.99+

2014DATE

0.99+

61%QUANTITY

0.99+

Kimberly LeyenaarPERSON

0.99+

4,000QUANTITY

0.99+

14 yearQUANTITY

0.99+

2010DATE

0.99+

20 XQUANTITY

0.99+

DavePERSON

0.99+

1 millionQUANTITY

0.99+

KimPERSON

0.99+

IBMORGANIZATION

0.99+

two coresQUANTITY

0.99+

third oneQUANTITY

0.99+

2022DATE

0.99+

2 millionQUANTITY

0.99+

16 gigQUANTITY

0.99+

first questionQUANTITY

0.99+

twoQUANTITY

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

five megabytesQUANTITY

0.99+

10 VsQUANTITY

0.99+

oneQUANTITY

0.99+

170 XQUANTITY

0.99+

eight gigQUANTITY

0.99+

30 terabytesQUANTITY

0.99+

threeQUANTITY

0.99+

mid 2000sDATE

0.99+

400QUANTITY

0.99+

early 2000sDATE

0.99+

one and a half millionQUANTITY

0.99+

one terabyteQUANTITY

0.99+

firstQUANTITY

0.99+

four coresQUANTITY

0.99+

175 zettabytesQUANTITY

0.99+

next yearDATE

0.99+

three bottlenecksQUANTITY

0.99+

early 2010sDATE

0.99+

next decadeDATE

0.99+

early 2000sDATE

0.99+

4kQUANTITY

0.99+

one driveQUANTITY

0.99+

first oneQUANTITY

0.99+

IDCORGANIZATION

0.99+

LS-300COMMERCIAL_ITEM

0.98+

next monthDATE

0.98+

1950sDATE

0.98+

about 22 yearsQUANTITY

0.98+

one controllerQUANTITY

0.98+

2025DATE

0.98+

iOSTITLE

0.98+

StevePERSON

0.98+

WinchesterORGANIZATION

0.98+

fiveQUANTITY

0.98+

DVR 5 4800COMMERCIAL_ITEM

0.98+

bothQUANTITY

0.98+

four lanesQUANTITY

0.98+

around 35,000QUANTITY

0.97+

first three gigQUANTITY

0.97+

about six years agoDATE

0.96+

about 25%QUANTITY

0.96+

first hard driveQUANTITY

0.96+

over 6,000 megabytes per secondQUANTITY

0.96+

20 years agoDATE

0.96+

almost 50 yearsQUANTITY

0.96+

22QUANTITY

0.95+

one and a halfQUANTITY

0.95+

Laureen Knudsen, Broadcom Inc. | BizOps Chaos to Clarity 2021


 

(bright upbeat music) >> Welcome back. Lisa Martin here talking with Laureen Knudsen, a CUBE alumni. She's the Chief Transformation Officer at Broadcom and a founding member of the BizOps Coalition. Laureen, I'm excited to talk to you about an interesting topic today. Welcome back to the program. >> Thank you so much. Glad to be here. >> So we're going to be, yeah, we're going to be talking about the pros and cons of adding a Chief Digital Officer. You say, there may be some friction there, but it's going to be temporary as the benefits will be long lasting. So let's dive right in. Talk to me about what the role of a Chief Digital Officer is. Is this something that a CIO can take on? >> In some organizations, I think the CIO is taking on this role. And it's primarily focusing on what we're calling the digitization of the organization. So it's across more than just IT though. So it's looking at what kind of digital marketing should you be doing? What are your competitors doing? How can you make the most bang for your buck essentially across your entire organization? So it also includes parts that generally haven't been included in digital transformations, like your legal team or your finance team and the interactions with them. Can your contracts be digitized? Can they be made more efficient and more automated, right? So it's looking at the entire organization both internally and externally and looking at the strategy for how do you accomplish that and how do you truly make your organization as effective as it can be. >> Is this person almost like a bridge between the different lines of business and IT to get that external, internal focus? >> Yes, yeah, many people in IT don't have that business knowledge. That's a really good point. And so this person will need to have not only business knowledge but technical knowledge so they can essentially translate, right, the verbiage that is used in the different organizations and the jargon that's used to make it, to make the understanding between the two of what's needed more smooth, you know, the communication more smooth within the organization. Also focusing on customer value and making sure that, that both sides are saying the same, you know, when they use the same words, they're saying the same things. So doing that translation in that organization, across the entire company. >> Looking at it from the holistic perspective, you know, I know that the BizOps Coalition survey also showed that something that we hear that digital transformation isn't just about the technology. It's got to be all of the factors coming together aligned on business outcomes, aligned on what's the impact and the value to the customer. How is the Chief Digital Officer role going to facilitate that, not just understanding, but putting in practice that digital transformation is not just about technology? >> Well, and again, 95% of companies are confirming that, that right now they're focusing much more on business outcomes than just on technology. And so, there really is that need to, you know, what does that mean, right? When you're focusing on business outcomes, it often includes a lot of technology, but it's, you know, there's a different path to take to make sure that you're focusing on your customer outcomes. There's a lot of organizations that are looking at their apps and realizing their customers find the most value when they never have to use them. So how do you accomplish that, right? That's not adding new features in, that's not doing something new for the customer other than making it, making sure everything runs so smoothly that they never have to access your app. You know, we're running into that with a lot of business organizations like insurance companies or banking, phone, you know, telco companies, things like that where people really don't want to use the products you're creating for them if they don't have to. >> Right, adoption is always something that we talk about that can be a KPI but also a challenge. One of the things that I noticed that information that, that Broadcom provided was that Gartner says, in the next 12 months, 67% of organizations are going to be looking at hiring a Chief Digital Officer. Let's have you talk us through what are some of the forcing functions behind that? Obviously the last year has been quite, filled with quite a bit of uncertainty but we look back a couple of decades, there wasn't talk of a Chief Digital Officer. So, why this, why is there such a big uptick in the need for this role? >> Well, it's interesting 'cause Gartner originally talked about the Chief Digital Officer in about 2010 to 2012 timeframe where they were talking about the need for it. And it was a lot of, I think fast moving companies and the companies that really have made a lot of advancements in their effectiveness and their customer centricity have really grabbed onto this concept whether they've called it a Chief Digital Officer or not, but in the last year, it's forced everyone to have a digital footprint in the market. If you'll notice even your local restaurants that are family owned now have some sort of way to order their food digitally, right? So we're digitizing the entire thing and COVID is really, required every company to look at much more how they can do things electronically, any type of, you know, digitization whether it's like I've said before the marketing, or even how do you handle all of your contracts when there's no in-person signature and no, you know, fax machines to send things back and forth, right? It's all about making sure that all of that's secure and protected. So it's going across the entire organization. And that's really creating that need for somebody to be able to look at how your company can do all of those different things. Because quite frankly, the CIO already has a day job, right? Your Chief Marketing Officer already has a day job. So trying to look at how to be really innovative in these areas creates a gap, right? And people aren't finding that extra time to be able to do that and to look at how to be really streamlining their organizations and taking that innovation in with both internal and external viewpoints. >> Well, it'd be, imagine you mentioned, you know, the CIO, the CMO, the CFO having day jobs, but also one of the things it sounds to me like is important for this CDO role is to have objectivity. To be able to rise above all the different functions, the different technology stocks and probably silos that are there and really look holistically across the organization. So talk to me about some of the skills that are really required from the Chief Digital Officer. Is this someone that needs to have both an IT background and a business background, does it matter? >> I think as long as they have the knowledge of either side, that where they came from, isn't going to matter but you're going to, the problem is going to be finding the people with those dual skill sets, right? Because you're going to need somebody that can understand your business and your technology side to marry the two together. But they're also going to need to understand all the intricacies of the legal aspects that need to go into creating your products or the financial aspects of tracking what happens with your products. So they're really going to need to be not only very well educated and have a lot of experience, but the other thing they're going to need is that emotional empathy and that ability to work with everybody in the organization. Essentially if they do their job right, they'll be coming in and working with every other Vice President or chief in your organization. So there'll be helping to influence all of those people. And that can create a lot of conflict at first because you're having somebody else come in to give the CIO insights into how they can innovate technologically or to give the Chief Marketing Officer information on new ways that they can do their jobs, that they can digitize the marketing to be more effective and the right frame of mind to be able to do that. You know, hiring is going to be another place where these people will have a large imprint because they're going to need the knowledge to be able to interview all across the board for people that can help them get these new innovations into place. For example, if marketing needs to expand into more of a digital footprint to actually get the, the imprints that they need, right? How do you interview for that, when as a marketing leader, you've never run a digital part, a digital organization before. So it's really having the ability to partner with every other department in the organization and work with them, which, you know, to your point that can cause some conflicts to start off with but in the long run, it'll, it should be well worth it. >> Well, it sounds like that friction is probably unavoidable in the beginning as this person really works to understand all of the inner machinations of the organization and really identify what's best for the overall business. You mentioned empathy. And I think that's something that we've heard a lot about in the last year as leaders really needing to adopt that. And it sounds like this role for it to be such a catalyst of IT and business alignment, as it sounds like it really can be, that empathetic gene really needs to be turned on pretty high, I think. >> A 100%, right? They need to be able to be really understanding of the organization and the other people that they're working with, that those people do have a great bit of knowledge about the company that they're joining, right, generally and that they'll understand their jobs on a day-to-day basis. But the innovation parts, right, is where the Chief Digital Officer will come in. And if the Chief Digital Officer does this well, they can actually have a really big impact on the corporate culture as well which is a huge area that people are focusing on these days especially as every employee is remote. So it's a big job and a big ask and it's going to be really important for companies to hire the person with the best fit for their organization in this new role. >> You mentioned culture and that's something that is imperative but digital transformations won't be successful without the right cultural transformation. But that's easier said than done especially for organizations that have been around a while. And they're so used to the way they've done business for decades that it's hard to change that mindset, but it sounds like the Chief Digital Officer role should be one that is an influencer of that cultural change. How do you see them being able to do that within a, you know, stodgy, legacy institution? What are some of the things that they would be able to unlock? >> They should be able to re-energize portions of the company, right? If you're bringing in innovative ideas into a company that has had some difficulty hiring, right? There's a lot of companies that before the pandemic hit, were only starting to look at agile practices and things because quite frankly they couldn't hire anyone out of college to work there and they were afraid most of their workforce would retire out. So they're trying to get those people that want to be innovative, the high, the people that graduated top of their class. You're going to need the organization to change. And this is a perfect example of somebody that can come in and be a catalyst for all of that. So if they're coming up with new innovative ideas, if your marketing department wasn't transforming into a highly digital marketing department, they can come help invigorate that, right? And come up with a plan to get people in but also to train the people that are there that do want to learn these new skills in order bring the whole organization along with them. And I think they can have a huge impact if they, and get those innovative culture cycles changing. >> I'm curious if you think that, you know, given the last year and the amount of uncertainty that the pandemic has brought to the market, to the economy, now some of the challenges that leaders say, we're still going to have similar challenges in 2021. We still have a good percentage of our workforce remote. Is the role that the Chief Digital Officer can play, is that potentially going to help companies, really, is it going to help make a difference between those companies that really, not just survive this time but thrive like the winners versus the losers of tomorrow? >> I think it can, right? And a lot of this is going to be how the people that hire in the Chief Digital Officer and how much that team is willing to work with them. One of the things that we notice is the companies that do advance their culture a lot and advance in their customer centricity, the leadership level of the organization acts as a team as much as they expect to the frontline crews to act as teams. So you've got to be working together. And that goes all the way through, right? Your HR departments can't be incenting one group to work against another. You can't incent two people to have a goal, you know, to reach a goal in a different way and incent them differently so that they end up working against each other, right? This has to start being a real collaborative effort and it'll end up impacting the entire organization. But it's those companies that start looking at their leadership organization as a team, where they're all playing to make the same goals, to make their customers the most successful they can be. That's when you really start getting those changes and you really see a Chief Digital Officer having an impact versus those organizations where, you know, they'll be on the job for two to three years and it'll just go away because they've, you know, fought against themselves and not form that team culture. >> The impact is, can be tremendous from what I'm hearing. When we think about digital transformation, you know, people, processes, technology, that culture that's so important, we're also talking about that in the context of how do organizations use all their data and make the most sense of it. As more data sources become available, data's coming in faster, how does the Chief Digital Officer align with all of the data folks within an organization so that they can all have access to the right information to make data-driven decisions that are really for internal and externally looking benefits? >> Right, they can help make sense of the data that the company is collecting. One of the main things we're hearing right now is a lot of organizations are collecting a ton of data and they're either, you know, having some organization that creates metrics out of it. And that group just doesn't know really what the business does. They're relatively new to the business as a lot of data organizations are. So they go grab standard metrics and just provide, you know, shove as many metrics out. That's their output point, right? Where they get brownie points for every metric they create. And so we're hearing from a lot of leaders that, that they're getting literally hundreds of metrics a month and they have no idea what they're supposed to be doing with them or what this data is supposed to be showing them. And that's really of no benefit to anybody, right? It's a waste of time all through the organization. So the Chief Digital Officer, again, will be looking at what are the right business metrics to be tracking for that business and be working with those data officers to get the right innovation in so that you can see how well you're transforming, how well your company is actually doing, how much your customers actually do like what you're creating and the impact of the changes that you're making. So another thing we're being asked a lot of is, you know, I'm funding things and I'm being told they'll provide my customers value but when they get released I have no idea if they are, right? And the Chief Data Officer will help, be putting all the metrics that tie that in and showing telemetry gets built in. So that they've got the metrics that you need to truly run your business well. And so again, that'll be another part of the organization that the Chief Digital Officer would be working with. Along with the CIO, they'll be working with the data organizations as well. >> Well, there's so much opportunity that the chief Digital Officer role can deliver and unlock value in an organization as you've talked about. It'll be interesting, Laureen to see what happens in the next 12 months. Do we see what Gartner's predicting, 67% of companies are going to be adopting this role. I'm curious to see what the BizOps Coalition finds in the next year or so but thank you for sharing this insight. And this definitely sounds like a role where every day will be interesting, unique and not boring. (gentle upbeat music)

Published Date : Apr 21 2021

SUMMARY :

of the BizOps Coalition. Glad to be here. but it's going to be temporary and the interactions with them. that both sides are saying the same, I know that the BizOps Coalition survey that they never have to access your app. that Gartner says, in the next 12 months, that extra time to be able to do that and probably silos that are there and that ability to work of the organization and it's going to be really important that it's hard to change that mindset, that before the pandemic hit, that the pandemic has brought And that goes all the way through, right? that in the context that the Chief Digital that the chief Digital

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
LaureenPERSON

0.99+

Lisa MartinPERSON

0.99+

Laureen KnudsenPERSON

0.99+

2021DATE

0.99+

twoQUANTITY

0.99+

GartnerORGANIZATION

0.99+

BroadcomORGANIZATION

0.99+

95%QUANTITY

0.99+

last yearDATE

0.99+

BizOps CoalitionORGANIZATION

0.99+

67%QUANTITY

0.99+

both sidesQUANTITY

0.99+

2012DATE

0.99+

bothQUANTITY

0.99+

three yearsQUANTITY

0.99+

OneQUANTITY

0.99+

CUBEORGANIZATION

0.98+

next yearDATE

0.98+

todayDATE

0.98+

two peopleQUANTITY

0.97+

100%QUANTITY

0.97+

tomorrowDATE

0.96+

pandemicEVENT

0.96+

hundreds of metricsQUANTITY

0.96+

oneQUANTITY

0.96+

Broadcom Inc.ORGANIZATION

0.95+

BizOpsORGANIZATION

0.93+

next 12 monthsDATE

0.91+

one groupQUANTITY

0.88+

firstQUANTITY

0.82+

2010DATE

0.77+

decadesQUANTITY

0.73+

a monthQUANTITY

0.69+

Digital OfficerPERSON

0.67+

aboutDATE

0.61+

telcoORGANIZATION

0.53+

dataQUANTITY

0.5+

coupleQUANTITY

0.45+

COVIDTITLE

0.4+

tonQUANTITY

0.37+

ClarityORGANIZATION

0.33+

Sreenivasan Rajagopal, Broadcom | AIOps Virtual Forum


 

>>From around the globe. It's the cube with digital coverage of an AI ops virtual forum brought to you by Broadcom. >>Welcome to this preview of Broadcom's AI ops virtual forum on your host, Lisa Martin. And joining me to give you a sneak peek of this event. That's on December 3rd is streaming of Boston Rajagopal or Raj, the head of AI ops at Broadcom. Raj. This has meant it's coming up in a couple of weeks. Excited, >>Good to be here. I am excited, Lisa, uh, you know, um, customers are poised for growth, uh, in 2021. And, uh, they, uh, we believe they all, they will also come out of the pandemic to grow their business and serve, uh, their customers. Uh, you know, well, they have two key challenges. How do you grow at the same time, operate with efficiency, right? These two challenges are decision-makers are struggling with every day at scale. That is why they do digital transformation at scale. And our key influencers like it, operators and SRE personas are helping our decision-makers in our customers to drive the efficiency. They are trying to, uh, focus on converting outputs to outcomes. That's what AI ops is all about. And you're going to hear from us. >>Yeah. And we've got a panel of experts here. Rich lane, senior research analyst for Forrester is going to be joining us as well as nastier, the global product management at Verizon. And of course, Raj, you're going to be hearing some of the latest trends for AI ops. Why now is the time Raj, what are some of the key takeaways that you think those key influencers and those decision makers are going to walk away from this event? >>So the, you know, our decision makers and key influencers have a single question in mind when they deal with enterprise large enterprise scenarios, the questions that they get asked by their skill level execs are, are you ready? Are you ready? When remote work is the norm, are you ready when you have to optimize your investments? And are you ready when you have to accelerate your transformation at scale to operate as a digital enterprise, all of this requires them to think and act differently from people process technology. And how do you bring all of this together? Under the ages of what we call AI ops is what they're going to learn about. >>Another thing too, is you're going to hear the latest industry trends on AI ops from Raj and the panel of experts that mentioned a minute ago, how organizations like yours are finding value from AI ops and something that Raj talked about a minute ago is understanding why now is the time to be ready for AI ops. So Raj and I look forward to you joining us along with our other panelists, December 3rd, register for the Broadcom AI ops virtual forum today.

Published Date : Nov 25 2020

SUMMARY :

ops virtual forum brought to you by Broadcom. And joining me to give you a sneak peek of this event. the pandemic to grow their business and serve, uh, their customers. is the time Raj, what are some of the key takeaways that you think those key influencers and And are you ready when you have to accelerate your transformation at scale to operate So Raj and I look forward to you joining us along

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
RajPERSON

0.99+

DavidPERSON

0.99+

Dave VellantePERSON

0.99+

CaitlynPERSON

0.99+

Pierluca ChiodelliPERSON

0.99+

JonathanPERSON

0.99+

JohnPERSON

0.99+

JimPERSON

0.99+

AdamPERSON

0.99+

Lisa MartinPERSON

0.99+

Lynn LucasPERSON

0.99+

Caitlyn HalfertyPERSON

0.99+

$3QUANTITY

0.99+

Jonathan EbingerPERSON

0.99+

Munyeb MinhazuddinPERSON

0.99+

Michael DellPERSON

0.99+

Christy ParrishPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Ed AmorosoPERSON

0.99+

Adam SchmittPERSON

0.99+

SoftBankORGANIZATION

0.99+

Sanjay GhemawatPERSON

0.99+

DellORGANIZATION

0.99+

VerizonORGANIZATION

0.99+

AshleyPERSON

0.99+

AmazonORGANIZATION

0.99+

Greg SandsPERSON

0.99+

Craig SandersonPERSON

0.99+

LisaPERSON

0.99+

Cockroach LabsORGANIZATION

0.99+

Jim WalkerPERSON

0.99+

GoogleORGANIZATION

0.99+

Blue Run VenturesORGANIZATION

0.99+

Ashley GaarePERSON

0.99+

DavePERSON

0.99+

2014DATE

0.99+

IBMORGANIZATION

0.99+

Rob EmsleyPERSON

0.99+

CaliforniaLOCATION

0.99+

LynnPERSON

0.99+

AWSORGANIZATION

0.99+

Allen CranePERSON

0.99+

Sreenivasan Rajagopal, Broadcom | AIOps Virtual Forum 2020


 

>>from around the globe. It's the Cube with digital coverage of AI ops Virtual Forum Brought to you by Broadcom Welcome back to the AI Ops Virtual Forum. Lisa Martin here with Srinivasan Rajagopal, the head of products and strategy at Broadcom Raj Welcome. >>Good to be here, Lisa. >>I'm excited for our conversation, so I wanted to dive right into a term that we hear all the time. Operational excellence, right? We hear it everywhere in marketing, etcetera. But why is it so important to organizations as they head into 2021 tell us how ai ops as a platform can help? >>Yeah. Thank you. First off, I wanna I wanna welcome our viewers back and I'm very excited Toe share more info on this topic. You know, here's what we believe. As we work with large organizations, we see all our organizations are poised toe get out off the pandemic and look for growth for their own business and helping customers get through this tough time. So fiscal year 2021 we believe, is going to be a combination off, you know, resiliency and agility at the at the same time. So operational excellence is critical because the business has become mawr digital, right? There are gonna be three things that are gonna be more sticky. You know, remote work is gonna be more sticky. Um, cost savings and efficiency is going to be an imperative for organizations. And the continued acceleration of digital transformation off enterprises at scale is going to be in reality. So when you put all these three things together as a team, that is, you know, that's working behind the scenes toe help the businesses succeed. Operational excellence is going to be make or break for organizations. >>Russia with that said, if we kind of strip it down to the key capabilities, what are those key capabilities that companies need to be looking for in an AI ops solution? >>Yeah, you know. So first foremost AI ops means many things to many, many folks. So let's take a moment to simply define it. The way we defined AI ops is it's a system off intelligence human augmented system that brings together full visibility across app, infra and network elements that brings together despite of data sources on provides actionable intelligence and uniquely offers intelligent automation. Now the technology many folks draw is the self driving car, right? I mean, we are in the world of Tesla's, but, you know, but self driving data center is is too far away, right? Autonomous systems are still far away. However, you know, application off the I M l techniques toe help deal with volume velocity, veracity of information. Eyes is critical. So that's how we look at AI ops and some of the key capabilities that we that we that we work with our customers to help them around 48 years. Right? First one is eyes and years. What we call full stack, observe ability. If you do not know what is happening in your systems, you know that that serve up your business services, it's gonna be pretty hard to do anything in terms of responsiveness, right? So from stack of their ability, the second piece is what we call actionable insights. So when you have disparaged data sources, tool sprawls, data coming at you from, you know from database systems, I T systems, customer management systems, ticketing systems, how do you find the needle from the haystack? And how do you respond rapidly from a myriad off problems? A sea off read The third area is what we call intelligent automation. Well, Identifying the problem toe Act on is important and then acting on. Automating that and creating a recommendation system where you know you could be proactive about that is even more important. And finally, all of this focuses on efficiency. What about effectiveness? Effectiveness comes when you create a feedback loop when what happens in production is related to your support systems and your developers so that they can respond rapidly. So we call that continuous feedback. So these are the four key capabilities that you know you should look for in an AI ops system. And that's what we offer us. >>Alright, Russia. There's four key capabilities that businesses need to be looking for. I'm wondering how those help to align business and i t. It's again like operational excellence. It's something that we talk about a lot is the alignment of business and I t a lot more challenging. Is your something done right? But I want you to explain how can a iob help with that alignment and align? I t outputs to business outcomes. >>So you know, one of the things I'm going to say something that this, that is that is simple. But it's harder. Alignment is not on systems. Alignment is with people, right? So when people align when organizations aligned, when cultures align, dramatic things can happen. So in the context off AI ops, we see when when saris aligned with the develops engineers and information architects. And, uh, you know, I t operators, you know, they enable organizations to reduce the gap between intent and outcome or output an outcome that said, you know, these personas need mechanisms toe help them better align, right, help them Better visual. I see the you know what we call single source of truth, right? So there are four key things that I wanna call out when we work with large enterprises. We find that customer journey alignment with the you know what we call I T systems is critical. So how do you understand your business imperatives and your customer journey goals? Whether it is card toe purchase or whether it is, you know, Bill shock scenarios and swan alignment on customer journey to your I T systems is one area that you can reduce the gap. The second area is how do you create a scenario where your teams can find problems before your customers do right out. It's scenarios and so on. So that's the second area off alignment. The third area off alignment is how can you measure business impact driven services right? There are several services that an organization off course as the 19 system. Some services are more critical to the business. Well, then, others and thes change in a dynamic environment. So how do you How do you understand that? How do you measure that? And how? How do you find the gaps there? So that that's the 3rd 80 off alignment that we that we help. And last but not least, there are. There are things like NPS scores and others that that help us understand alignment. But those are more long term. But in the in the context off, you know, operating digitally. You want to use customer experience and, you know single business outcome as as a key alignment factor, and then work with your systems of engagement and systems of interaction, along with your key personas to create that alignment. It's a people process technology challenge, actually. >>So where is one of the things that you said there is that it's imperative for the business toe. Find a problem before a customer does. And you talked about outages there. That's always a goal for businesses, right to prevent those outages. How can Ai ops help with that? >>Yeah, so, you know, out they just talk, you know, go to resiliency off a system, right? And they also goto have, you know, agility off the same system. You know, if you are a customer and if you're ripping up your mobile happened, it takes more than you know, three milliseconds. You know, you're probably losing that customer, right? So I would just mean different things, you know? And there's an interesting website called don't detector dot com that actually tracks all the outages of publicly available services, whether it's your bank or your, you know, telecom service or mobile service and so on and so forth. In fact, the key question around outages for from from you know, executives are the question of Are you ready? Right? Are you ready to respond to the needs off your customers and your business? Are you ready toe rapidly to solve an issue that is impacting customer experience and therefore satisfaction. Are you creating a digital trust system where customers can be, You know, you know, customers can feel that their information is secure when they transact with you. All of these getting toe the notion of resiliency and outages. Now, you know, one of the things that I often you know work with customers around, you know, that we find is the radius off. Impact is important when you deal with outages. What I mean by that is problems occur, right? How do you respond? How quickly do you take? Two seconds? Two minutes, 20 minutes. Two hours, 20 hours. Right To resolve that problem. That radius of impact is important. That's where you know you have to bring again. People process technology together to solve that. And the key thing is, you need a system of intelligence that can aid you your teams, you know, look at the same set of parameters so that you can respond faster. That's the key here. >>But as we look at digital transformation at scale, Raj, how does a apps help influence that? >>You know, I'm gonna take a slightly long winded way to answer this question. See, when it comes to digital transformation at scale, the focus on business purpose and business outcome becomes extremely critical. And then the alignment off that to your digital supply chain right are the are the are the key factors that differentiate vintners in the in their digital transformation game. Really? What we have seen with with winners is they operate very differently. Like, for example, you know, 19 assures its digital business outcomes by shoes per second, right apple buy iPhones per per minute. Tesla by model threes per month. Are you getting getting it right? I mean, you wanna have, ah, clear business outcome, which is a measure off your business. In effect, I mean, easy right, which which my daughter use. And I use very well, right? You know, they measured by revenue per hour, right? I mean, so these are key measures, and when you have a key business outcome measure like that, you can align everything else because you know what these measures you know, for a bank, it may be deposits per month. Right now, when you move money from checking account to savings account or when you do direct deposits, those are you know, banks need liquidity and so on and so forth. But, you know, the key thing is that single business outcome has a starburst effect inside the I T. Organization that touches a single money movement from checking account to savings account can touch about 75 disparage systems internally. Right? So those think about right. I mean, all we're doing is moving money from checking accounts savings account. Now that goats in tow, a IittIe production system, there are several applications. There is a database there is there are infrastructures, their load balancers, that our webs, you know, the Web server components, which then touches your your middleware component, which is a queuing system right, which then touches your transactional system on. Do you know which may be on your mainframes what we call mobile toe mainframe scenario, right? And we're not done yet. Then you have a security and regulatory compliance system that you have to touch a fraud prevention system that you have to touch right, a State Department regulation that you may have to meet and on and on and on, right? This is the challenge that I t operation teams phase. And when you have millions of customers transacting right? Certainly this challenge cannot be, you know, managed by, you know, human beings alone. So therefore, you need a system off intelligence that augments human intelligence and acts as you, you know, your your eyes and ears in of a toe point pinpoint. Their problems are right. So digital transformation at scale really requires a well thought out ai ops system a platform and open extensible platform that you know, that is heterogeneous in nature because their stools problems in organizations. There is, uh, you know, a lot of data bases in systems. There are million's off, you know, customers and hundreds off partners and vendors, you know, making up that digital supply chain. So, you know, AI ops is at the center off, enabling an organization achieved digital up, you know, transformation at scale. Last but not least, you need continuous feedback loop. Continuous feedback loop is the ability for a production system toe. Inform your develops teams your finance teams, your customer experience teams your cost Modeling teams about what is going on say that they can so that they can reduce the intent outcome gap. All of this need to come together. What we call biz obs for ideal abs. >>That was a great example of how you talked about the Starburst effect. Actually never thought about it in that way. When you give the banking example but what you should is the magnitude of systems, the fact that people alone really need help with that and why intelligent automation and air ops could be transformative and enable that scale. Raj, it's always a pleasure to talk with you. Thanks for joining me today. Yeah, >>great to be here >>and we'll be right back with our next segment.

Published Date : Nov 23 2020

SUMMARY :

AI ops Virtual Forum Brought to you by Broadcom Welcome the time. that is, you know, that's working behind the scenes toe help the businesses So when you have disparaged data sources, But I want you to explain how can a iob help with that alignment So you know, one of the things I'm going to say something that this, that is that So where is one of the things that you said there is that it's imperative for the business toe. the key question around outages for from from you know, that our webs, you know, the Web server components, which then touches your your middleware component, When you give the banking example but what you should is the magnitude of

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Srinivasan RajagopalPERSON

0.99+

Lisa MartinPERSON

0.99+

Two hoursQUANTITY

0.99+

BroadcomORGANIZATION

0.99+

2021DATE

0.99+

Two secondsQUANTITY

0.99+

20 hoursQUANTITY

0.99+

Sreenivasan RajagopalPERSON

0.99+

Two minutesQUANTITY

0.99+

second areaQUANTITY

0.99+

third areaQUANTITY

0.99+

second pieceQUANTITY

0.99+

TeslaORGANIZATION

0.99+

20 minutesQUANTITY

0.99+

appleORGANIZATION

0.99+

LisaPERSON

0.99+

todayDATE

0.99+

RajPERSON

0.99+

oneQUANTITY

0.98+

fiscal year 2021DATE

0.98+

RussiaLOCATION

0.98+

firstQUANTITY

0.97+

three thingsQUANTITY

0.97+

one areaQUANTITY

0.97+

FirstQUANTITY

0.97+

singleQUANTITY

0.96+

First oneQUANTITY

0.95+

iPhonesCOMMERCIAL_ITEM

0.95+

around 48 yearsQUANTITY

0.95+

about 75 disparage systemsQUANTITY

0.94+

three millisecondsQUANTITY

0.94+

3rd 80QUANTITY

0.93+

pandemicEVENT

0.92+

single sourceQUANTITY

0.92+

2020DATE

0.92+

I T.ORGANIZATION

0.89+

single moneyQUANTITY

0.88+

single businessQUANTITY

0.85+

four key capabilitiesQUANTITY

0.82+

Broadcom RajORGANIZATION

0.81+

millions of customersQUANTITY

0.8+

million'sQUANTITY

0.79+

four key thingsQUANTITY

0.79+

threesQUANTITY

0.77+

StarburstOTHER

0.76+

secondQUANTITY

0.75+

19 systemQUANTITY

0.74+

hundreds offQUANTITY

0.72+

19QUANTITY

0.68+

thingsQUANTITY

0.59+

ForumEVENT

0.57+

State DepartmentORGANIZATION

0.55+

AIOpsEVENT

0.53+

doORGANIZATION

0.51+

opsORGANIZATION

0.42+

AI Ops Virtual ForumEVENT

0.4+

ForumORGANIZATION

0.33+

Sreenivasan Rajagopal, Broadcom | AIOps Virtual Forum 2020 Promo


 

>>from around the globe. It's the Cube with digital coverage of AI ops. Virtual Forum Brought to You by Broadcom >>Welcome to this preview of Broadcom's AI Ops Virtual Forum on your host Lisa Martin, and joining me to give you a sneak peek of this event that's on December 3rd is Srinivasan, Rajagopal or Raj, the head of a I Ops at Broadcom Raj, this event is coming up in a couple of weeks. Excited. >>Good to be here. I am excited, Lisa. You know, um, customers are poised for growth in 2021 and, uh, they are we believe they all. They will also come out off the pandemic toe, grow their business and serve their customers, you know? Well, they have to key challenges. How do you grow at the same time, operate with efficiency, right. These two challenges our decision makers are struggling with every day at scale. That is why they do digital transformation at scale. And our key influencers like I t operators and SRE personas are helping our decision makers in our customers to drive the efficiency they are trying toe focus on converting outputs to outcomes. That's what the eye ops is all about and you're gonna hear it from us. >>Yeah, We've got a panel of experts here. Rich Lane, senior research analyst for Forrester, is going to be joining us as well as Guzman nastier the global product management at Verizon. And, of course, Raj, you're gonna be hearing some of the latest trends for AI ops. Why, now is the time, Raj, What are some of the key takeaways that you think those key influencers and those decision makers are gonna walk away from this event empowered with >>So the You know, our decision makers and a key influencers have a single question in mind when they deal with enterprise large enterprise scenarios, the questions that they get asked by their C level execs are Are you ready? Are you ready when remote work is the norm? Are you ready when you have to optimize your investments and are you ready when you have to accelerate your transformation at scale toe operate as a digital enterprise? All of this requires them to think and act differently from people process technology. And how do you bring all of this together under the ages off what we call a I ops is what they're gonna learn about. >>Another thing, too, is you're going to hear the latest industry trends on AI ops from Raj and the panel of experts that we mentioned a minute ago. How organizations like yours are finding value from a I ops and something that Raj talked about a minute ago is understanding why Now is the time to be ready for I also Raj and I look forward to you joining us along with our other Panelists. December 3rd register for the Broadcom AI Ops Virtual form today.

Published Date : Nov 23 2020

SUMMARY :

It's the Cube with digital coverage of Martin, and joining me to give you a sneak peek of this event that's on December 3rd is the same time, operate with efficiency, now is the time, Raj, What are some of the key takeaways that you think those key influencers the questions that they get asked by their C level execs are Are you ready? is the time to be ready for I also Raj and I look forward to you joining us along

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MichielPERSON

0.99+

AnnaPERSON

0.99+

DavidPERSON

0.99+

BryanPERSON

0.99+

JohnPERSON

0.99+

IBMORGANIZATION

0.99+

MichaelPERSON

0.99+

ChrisPERSON

0.99+

NECORGANIZATION

0.99+

EricssonORGANIZATION

0.99+

KevinPERSON

0.99+

Dave FramptonPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Kerim AkgonulPERSON

0.99+

Dave NicholsonPERSON

0.99+

JaredPERSON

0.99+

Steve WoodPERSON

0.99+

PeterPERSON

0.99+

Lisa MartinPERSON

0.99+

NECJORGANIZATION

0.99+

Lisa MartinPERSON

0.99+

Mike OlsonPERSON

0.99+

AmazonORGANIZATION

0.99+

DavePERSON

0.99+

Michiel BakkerPERSON

0.99+

FCAORGANIZATION

0.99+

NASAORGANIZATION

0.99+

NokiaORGANIZATION

0.99+

Lee CaswellPERSON

0.99+

ECECTORGANIZATION

0.99+

Peter BurrisPERSON

0.99+

OTELORGANIZATION

0.99+

David FloyerPERSON

0.99+

Bryan PijanowskiPERSON

0.99+

Rich LanePERSON

0.99+

KerimPERSON

0.99+

Kevin BoguszPERSON

0.99+

Jeff FrickPERSON

0.99+

Jared WoodreyPERSON

0.99+

LincolnshireLOCATION

0.99+

KeithPERSON

0.99+

Dave NicholsonPERSON

0.99+

ChuckPERSON

0.99+

JeffPERSON

0.99+

National Health ServicesORGANIZATION

0.99+

Keith TownsendPERSON

0.99+

WANdiscoORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

MarchDATE

0.99+

NutanixORGANIZATION

0.99+

San FranciscoLOCATION

0.99+

IrelandLOCATION

0.99+

Dave VellantePERSON

0.99+

Michael DellPERSON

0.99+

RajagopalPERSON

0.99+

Dave AllantePERSON

0.99+

EuropeLOCATION

0.99+

March of 2012DATE

0.99+

Anna GleissPERSON

0.99+

SamsungORGANIZATION

0.99+

Ritika GunnarPERSON

0.99+

Mandy DhaliwalPERSON

0.99+

Serge Lucio, Broadcom | DevOps Virtual Forum 2020


 

>> From around the globe it's the CUBE with digital coverage of Devops Virtual Forum, brought to you by Broadcom. >> Continuing our conversations here at Broadcom's DevOps Virtual Forum. Lisa Martin here, please do welcome back to the program. Serge Lucio, the general manager of the Enterprise Software Division at Broadcom. Hey Serge welcome. >> Thank you. Good to be here. >> So I know you were just participating with the BizOps manifesto that just happened recently. I just had the chance to talk with Jeffrey Hammond and he unlocked this really interesting concept but I wanted to get your thoughts on, spiritual co-location as really a necessity for BizOps to succeed in this unusual time in which we're living. What are your thoughts on spiritual co-location in terms of cultural change versus adoption of technologies? >> Yeah, it's quite interesting, right. When we think about the major impediments for DevOps implementation, that means all about culture, right? And swore over the last 20 years we've been talking about silos. We'd be talking about the paradox for these teams too when it goes to align. And in many ways it's not so much about these teams aligning but about being in the same car, in the same books, right? It's really about fusing those teams around kind of the common purpose, a common objective. So to me this is really about kind of changing this culture where people start to look at kind of OKRs instead of the key objective that drives the entire team. Now, what it means in practice is really that's we need to change a lot of behaviors, right? It's not about the ER key, it's not about roles. It's about, you know, who can do what and when, and, you know, driving a bias towards action. It's also means that we need, I mean, especially in this COVID times it becomes very difficult, right? To drive kind of a kind of collaboration and affinity between these teams. And so I think there's a significant role that especially tools can play in terms of providing this conference feedback from teams to be in that preface spiritual qualification. >> Well, and it talked about culture being it's something that, you know, we're so used to talking about DevOps with respect to velocity, all about speed here. But of course this time everything changed so quickly but going from the physical spaces to everybody being remote really does take. It's very different than you can't replicate it digitally but there are collaboration tools that can kind of really be essential to help that cultural shift, right? >> Yeah, so to me we tend to talk about collaboration in a very mundane way, right? Of course we can use zoom. We can all get into the same room. But the point when I think when Jeff says spiritual co-location, it's really about, we all share the same objective. Do we have a means for instance, our pipeline, right? When you talk about DevOps probably we all started thinking about this continuous delivery pipeline that basically drives the automation, the orchestration across the team. But just thinking about a pipeline, right? At the end of the day, it's all about what is the meantime to feed back to these teams. If I'm a developer and I commit code, how long does it take for, you know, that code to be processed through pipeline or quick and I get feedback? If I am a finance person, who's funding a product or a project, what is my meantime to beat back? And so a lot of, kind of a, when we think about the pipeline, I think what's been really inspiring to me in the last year or so is that there is much more of an adoption of that door effect metrics. There is way more of a focus around value stream management. And to me, this is really when we talk about collaboration it's really a balance. How do you provide that feedback to the different stakeholders across the life cycle in a very timely matter? And that's what we would need to get to in terms of kind of this notion of collaboration. It's not so much about people being in the same physical space. It's about, you know, when checking code, you know, to do I guess the system to automatically identify what I'm going to break. If I'm about to release some allocation, how can the system help me reduce my change builder rate? Because it's able to predict that some issue was introduced in the application or the product. So I think there's a great role of technology and AI candidates to actually provide kind of that new level of collaboration. >> So we'll get to AI in a second but I'm curious, what are some of the metrics you think that really matter right now is organizations are still in some form probably of transformation to this new almost 100% remote workforce. >> So I'll just say first I'm not a big fan of metrics. And the reason being that, you know, you can look at a change failure rate, right, or a leak time or cycle time. And those are interesting metrics, right? The trend on metric is absolutely critical. But what's more important these I'll do get to the root cause. What is taught to you lead to that metric to degrade or improve over time. And so I'm much more interested and we, you know, fruit for Broadcom. Are we more interested in understanding what are the patterns that contribute to this? So I'll use a very mundane example. You know, we know that cycle time is heavily influenced by organizational boundaries. So, you know, we talk a lot about silos, but we we've worked with many of our customers doing value stream mapping. And oftentimes what you see is that really the boundaries of your organization creates a lot of idle time, right? So to me, it's less about the metrics. I think the door metrics are pretty, you know, valid set metrics but what's way more important is to understand, what are the anti parents? What are the things that we can detect through the data that actually are affecting those metrics? And I mean, over the last 10, 20 years, we've learned a lot about kind of what are the anti parents within our large enterprise customers? And there are plenty of them. >> What are some of the things that you're seeing now with respect to patterns that have developed over the last seven to eight months? >> So I think the two areas which clearly are evolving very quickly are on kind of the front end of the life cycle where DevOps is more and more embracing value stream management, value stream mapping. And I think what's interesting is that in many ways, the product is becoming the new silo. The notion of a product is very difficult by itself to actually define. People are starting to recognize that a value stream is not its own little kind of island. That in reality, when I did find a product this product, oftentimes as dependencies on our products and that in fact, you're looking at kind of a network of value streams, if you will. So on that and there is clearly kind of a new sets if you will of anti-patterns where, you know, products are being defined as a set of OTRs. They have interdependencies and you have to have a new set of silos. On the other hand the other kind of key movement to ease around the SRE space, where I think there is a cultural clash. While the DevOps side is very much embracing this notion of OTRs and value stream mapping and value management. On the other end, you have IT operations teams. We still think business services, right? For them they think about configure items, think about infrastructure. And so, you know, it's not uncommon to see, you know, teams where, you know, the operations team is still thinking about hundreds of thousands, tens of thousands of business services. And so there is this boundary where I think, well, SRE has been put in place, and there's lots of thinking about what kind of metrics can be defined. I think, you know, going back to culture, I think there's a lot of cultural evolution that's still required for, you know, true operations teams. >> And that's a hard thing. Cultural transformation in any industry pandemic or not is a challenging thing. You now talked about AI and automation of minutes ago. How do you think those technologies can be leveraged by DevOps leaders to influence their successes and their ability to collaborate and maybe see eye to eye with the SREs? >> Yeah, so there're kind of too, so even for myself, right? As a leader of , you know, 1500 people organization, there's a number of things I don't see, right, on a daily basis. And I think the technologies that we have at our disposal today from the AI are able to mine a lot of data and expose a lot of issues that as leaders we may not be aware of. And some of these are pretty kind of easy to understand, right? We all think we're agile. And yet when you start to understand, for instance, what is the is a work in progress, right, during the sprint? When you start to analyze the data you can detect for instance, that maybe the teams are over committed, that there is too much work in profits. You can start to identify kind of interprocess either from a technology or from a people point of view, which were hidden. You can start to understand that maybe the change failure rate is dragging. So I believe that there is a fundamental role to be played by the tools to expose again these anti parents. To make these things visible to the teams to be able to even compare teams, right? One of the things that's amazing is now we have access to tons of data not just from a given customer, but across a large number of customers. And so we start to compare all of these teams kind of operate and what's working, what's not working. >> Thoughts on AI and automation as a facilitator of spiritual co-location? >> Yeah, absolutely. It's, you know, there's a the problem we all face is the unknown, right? The velocity, the volume, variety of the data, every day we don't really necessarily completely appreciate what is the impact of our actions, right? And so AI can really act as a safety net that enables us to understand what is the impact of our actions. And so, yeah, in many ways, the ability to be informed in a timely matter to be able to interact with people on the basis of data and collaborate on the data in the actual matter, I think is a very powerful enabler on, in that respect. I mean, I've seen countless of times that for instance at the SRE boundary to basically show that we'll turn the quality attributes of an incoming release, right? And exposing that to an operations person, an SRE person and enabling that collaboration dialogue through there is a very, very powerful tool. >> Do you have any recommendations for how teams can use, you know, the SRE folks, the DevOps says can use AI and automation in the right ways to be successful rather than some ways that aren't going to be non-productive. >> Yeah, so to me there's a part that the question really is when we talk about data. There are different ways you can use data, right? So you can do a lot of analytics, predictive analytics. So I think there is a tendency to look at, let's say a specific KPI, like an availability KPI or change failure rate. And to basically do a regression analysis and projecting all these things is going to happen in the future. To me that's a bad approach. The reason why I fundamentally think it's a better approach is because we, our systems the way we develop software is a non-leader kind of system, right? Software development is not linear in nature. And so I think there's a, this is probably the worst approach is to actually focus on metrics. On the other hand if you start to actually understand at a more granular level, what are the things which are contributing to this, right? So if you start to understand, for instance that whenever maybe, you know, you affect a specific part of the application that translates into production issues. So we have, I've actually a customer who identified that over 50% of their unplanned outages were related to specific components in your architecture. And whenever these components were changed this resulted in this implant outages. So if you start to be able to basically establish causality, right? Cause an effect between kind of data across the last cycle. I think this is the right way to use AI. And so pharma to be, I think it's way more about kind of a classification problem. What are the causes of problems that do exist and affect things as opposed to an hourly predictive which I don't think is as powerful? >> So I mentioned in the beginning of our conversation that just came off the BizOps manifesto. You're one of the authors of that. I want to get your thoughts on DevOps and BizOps overlapping, complimenting each other. What, from the BizOps perspective, what does it mean to the future of DevOps? >> Yeah, so it's interesting, right? If you think about DevOps, there's no founding document, right? We can refer to the Phoenix project. I mean, there are a set of documents which have been written, but in many ways there is no clear definition of what DevOps is. If you go to the DevOps Institute today you'll see that, you know, they are specific trainings for instance on value management on SRE. And so in many ways, the problem we have as an industry is that there are set practices between agile, DevOps, SRE, value stream management, Ital, right? And we all basically talk about the same things, right? We all talk about essentially accelerating in the meantime to feedback, but yet we don't have a common framework to talk about that. The other key thing is that we add to wait for genius, Jean Kim's last book to really start to get into the business aspect, right? And for value mapping to start to emerge for us to start as an industry, right? IT to start to think about what is our connection with the business aspect, what's our purpose, right? And ultimately it's all about kind of driving these business outcomes. And so to me, BizOps is really about kind of putting a lens on kind of this critical element that it's not business and IT that we in fact need to fuse business and IT. That I need needs to transform itself to recognize that it's this value generator, right? It's not a cost center. And so the relationship to me, it's more than BizOps provides kind of this over all kind of framework, if you will. That set the context for what is the reason for IT to exist. What are the core values and principles that IT needs to embrace to, again, change from cost center to value center. And then we need to start to use this as a way to start to unify some of, again, the core practices, whether it's agile, DevOps, value stream mapping, SRE. So, I think over time, my hope is that we start to organize a lot of our practices, language and cultural elements. >> Last question Serge in the last few seconds we have here, talking about this, the relation between BizOps and DevOps. What do you think as DevOps evolves? And as you talked to circle some of your insights, what should our audience keep their eyes on in the next six to 12 months? >> So to me the key challenge for the industry is really around. So we were seeing a very rapid shift towards kind of project to product, right? Which we don't want to do is to recreate kind of these new silos, these hard silos. So that's one of the big changes that I think we need to be really careful about. Because it is ultimately, it is about culture. It's not about kind of how we segment the work, right? And any true culture that we can overcome kind of silos. So back to, I guess, with Jeffrey's concept of kind of the spiritual co-location, I think it's really about that too. It's really about kind of focusing on the business outcomes on kind of aligning, on driving engagement across the teams, but not for create kind of a new set of silos which instead of being vertical are going to be these horizontal products. >> Great advice Serge that looking at culture as kind of a way of really addressing and helping to reduce, replace challenges. We thank you so much for sharing your insights and your time at today's DevOps Virtual Forum. >> Thank you. Thanks for your time. Serge Lucio, Lisa Martin, we'll be right back. (upbeat music)

Published Date : Nov 20 2020

SUMMARY :

brought to you by Broadcom. of the Enterprise Software Good to be here. I just had the chance to around kind of the common of really be essential to help I guess the system to automatically what are some of the metrics you think What is taught to you lead On the other end, you and maybe see eye to eye with the SREs? the AI are able to mine the ability to be informed and automation in the right of data across the last cycle. that just came off the BizOps manifesto. in the meantime to feedback, on in the next six to 12 months? of the spiritual co-location, as kind of a way of really Thanks for your time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JeffPERSON

0.99+

Serge LucioPERSON

0.99+

SergePERSON

0.99+

Lisa MartinPERSON

0.99+

Jeffrey HammondPERSON

0.99+

Jean KimPERSON

0.99+

BroadcomORGANIZATION

0.99+

JeffreyPERSON

0.99+

DevOps InstituteORGANIZATION

0.99+

two areasQUANTITY

0.99+

over 50%QUANTITY

0.99+

last yearDATE

0.98+

DevOpsTITLE

0.98+

1500 peopleQUANTITY

0.98+

tens of thousandsQUANTITY

0.98+

OneQUANTITY

0.96+

todayDATE

0.96+

agileTITLE

0.96+

firstQUANTITY

0.95+

oneQUANTITY

0.94+

almost 100%QUANTITY

0.93+

12 monthsQUANTITY

0.92+

BizOpsTITLE

0.89+

secondQUANTITY

0.88+

hundreds of thousandsQUANTITY

0.85+

eight monthsQUANTITY

0.83+

PhoenixLOCATION

0.81+

Enterprise Software DivisionORGANIZATION

0.81+

last 20 yearsDATE

0.77+

2020DATE

0.75+

minutesDATE

0.74+

SRETITLE

0.67+

10, 20 yearsQUANTITY

0.66+

aboutQUANTITY

0.65+

tons of dataQUANTITY

0.62+

ItalTITLE

0.61+

sixQUANTITY

0.57+

DevOps Virtual ForumEVENT

0.55+

ForumEVENT

0.52+

BizOpsORGANIZATION

0.52+

Virtual ForumEVENT

0.51+

DevOpsORGANIZATION

0.5+

COVIDOTHER

0.5+

lastQUANTITY

0.47+

lastDATE

0.45+

sevenQUANTITY

0.43+

DevopsORGANIZATION

0.38+

DevOps Virtual Forum 2020 | Broadcom


 

>>From around the globe. It's the queue with digital coverage of dev ops virtual forum brought to you by Broadcom. >>Hi, Lisa Martin here covering the Broadcom dev ops virtual forum. I'm very pleased to be joined today by a cube alumni, Jeffrey Hammond, the vice president and principal analyst serving CIO is at Forester. Jeffrey. Nice to talk with you today. >>Good morning. It's good to be here. Yeah. >>So a virtual forum, great opportunity to engage with our audiences so much has changed in the last it's an understatement, right? Or it's an overstated thing, but it's an obvious, so much has changed when we think of dev ops. One of the things that we think of is speed, you know, enabling organizations to be able to better serve customers or adapt to changing markets like we're in now, speaking of the need to adapt, talk to us about what you're seeing with respect to dev ops and agile in the age of COVID, what are things looking like? >>Yeah, I think that, um, for most organizations, we're in a, uh, a period of adjustment, uh, when we initially started, it was essentially a sprint, you know, you run as hard as you can for as fast as you can for as long as you can and you just kind of power through it. And, and that's actually what, um, the folks that get hub saw in may when they ran an analysis of how developers, uh, commit times and a level of work that they were committing and how they were working, uh, in the first couple of months of COVID was, was progressing. They found that developers, at least in the Pacific time zone were actually increasing their work volume, maybe because they didn't have two hour commutes or maybe because they were stuck away in their homes, but for whatever reason, they were doing more work. >>And it's almost like, you know, if you've ever run a marathon the first mile or two in the marathon, you feel great and you just want to run and you want to power through it and you want to go hard. And if you do that by the time you get to mile 18 or 19, you're going to be gassed. It's sucking for wind. Uh, and, and that's, I think where we're starting to hit. So as we start to, um, gear our development chops out for the reality that most of us won't be returning into an office until 2021 at the earliest and many organizations will, will be fundamentally changing, uh, their remote workforce, uh, policies. We have to make sure that the agile processes that we use and the dev ops processes and tools that we use to support these teams are essentially aligned to help developers run that marathon instead of just kind of power through. >>So, um, let me give you a couple of specifics for many organizations, they have been in an environment where they will, um, tolerate Rover remote work and what I would call remote work around the edges like developers can be remote, but product managers and, um, you know, essentially scrum masters and all the administrators that are running the, uh, uh, the SCM repositories and, and the dev ops pipelines are all in the office. And it's essentially centralized work. That's not, we are anymore. We're moving from remote workers at the edge to remote workers at the center of what we do. And so one of the implications of that is that, um, we have to think about all the activities that you need to do from a dev ops perspective or from an agile perspective, they have to be remote people. One of the things I found with some of the organizations I talked to early on was there were things that administrators had to do that required them to go into the office to reboot the SCM server as an example, or to make sure that the final approvals for production, uh, were made. >>And so the code could be moved into the production environment. And so it actually was a little bit difficult because they had to get specific approval from the HR organizations to actually be allowed to go into the office in some States. And so one of the, the results of that is that while we've traditionally said, you know, tools are important, but they're not as important as culture as structure as organization as process. I think we have to rethink that a little bit because to the extent that tools enable us to be more digitally organized and to hiring, you know, achieve higher levels of digitization in our processes and be able to support the idea of remote workers in the center. They're now on an equal footing with so many of the other levers, uh, that, that, um, uh, that organizations have at their disposal. Um, I'll give you another example for years. >>We've said that the key to success with agile at the team level is cross-functional co located teams that are working together physically co located. It's the easiest way to show agile success. We can't do that anymore. We can't be physically located at least for the foreseeable future. So, you know, how do you take the low hanging fruits of an agile transformation and apply it in, in, in, in the time of COVID? Well, I think what you have to do is that you have to look at what physical co-location has enabled in the past and understand that it's not so much the fact that we're together looking at each other across the table. It's the fact that we're able to get into a shared mindspace, uh, from, um, uh, from a measurement perspective, we can have shared purpose. We can engage in high bandwidth communications. It's the spiritual aspect of that physical co-location that is actually important. So one of the biggest things that organizations need to start to ask themselves is how do we achieve spiritual colocation with our agile teams? Because we don't have the, the ease of physical co-location available to us anymore? >>Well, the spiritual co-location is such an interesting kind of provocative phrase there, but something that probably was a challenge here, we are seven, eight months in for many organizations, as you say, going from, you know, physical workspaces, co-location being able to collaborate face to face to a, a light switch flip overnight. And this undefined period of time where all we were living with with was uncertainty, how does spiritual, what do you, when you talk about spiritual co-location in terms of collaboration and processes and technology help us unpack that, and how are you seeing organizations adopted? >>Yeah, it's, it's, um, it's a great question. And, and I think it goes to the very root of how organizations are trying to transform themselves to be more agile and to embrace dev ops. Um, if you go all the way back to the, to the original, uh, agile manifesto, you know, there were four principles that were espoused individuals and interactions over processes and tools. That's still important. Individuals and interactions are at the core of software development, processes and tools that support those individual and interact. Uh, those individuals in those interactions are more important than ever working software over comprehensive documentation. Working software is still more important, but when you are trying to onboard employees and they can't come into the office and they can't do the two day training session and kind of understand how things work and they can't just holler over the cube, uh, to ask a question, you may need to invest a little bit more in documentation to help that onboarding process be successful in a remote context, uh, customer collaboration over contract negotiation. >>Absolutely still important, but employee collaboration is equally as important if you want to be spiritually, spiritually co-located. And if you want to have a shared purpose and then, um, responding to change over following a plan. I think one of the things that's happened in a lot of organizations is we have focused so much of our dev ops effort around velocity getting faster. We need to run as fast as we can like that sprinter. Okay. You know, trying to just power through it as quickly as possible. But as we shift to, to the, to the marathon way of thinking, um, velocity is still important, but agility becomes even more important. So when you have to create an application in three weeks to do track and trace for your employees, agility is more important. Um, and then just flat out velocity. Um, and so changing some of the ways that we think about dev ops practices, um, is, is important to make sure that that agility is there for one thing, you have to defer decisions as far down the chain to the team level as possible. >>So those teams have to be empowered to make decisions because you can't have a program level meeting of six or seven teams and one large hall and say, here's the lay of the land. Here's what we're going to do here are our processes. And here are our guardrails. Those teams have to make decisions much more quickly that developers are actually developing code in smaller chunks of flow. They have to be able to take two hours here or 50 minutes there and do something useful. And so the tools that support us have to become tolerant of the reality of, of, of, of how we're working. So if they work in a way that it allows the team together to take as much autonomy as they can handle, um, to, uh, allow them to communicate in a way that, that, that delivers shared purpose and allows them to adapt and master new technologies, then they're in the zone in their spiritual, they'll get spiritually connected. I hope that makes sense. >>It does. I think we all could use some of that, but, you know, you talked about in the beginning and I've, I've talked to numerous companies during the pandemic on the cube about the productivity, or rather the number of hours of work has gone way up for many roles, you know, and, and, and times that they normally late at night on the weekends. So, but it's a cultural, it's a mind shift to your point about dev ops focused on velocity, sprints, sprints, sprints, and now we have to, so that cultural shift is not an easy one for developers. And even at this folks to flip so quickly, what have you seen in terms of the velocity at which businesses are able to get more of that balance between the velocity, the sprint and the agility? >>I think, I think at the core, this really comes down to management sensitivity. Um, when everybody was in the office, you could kind of see the mental health of development teams by, by watching how they work. You know, you call it management by walking around, right. We can't do that. Managers have to, um, to, to be more aware of what their teams are doing, because they're not going to see that, that developer doing a check-in at 9:00 PM on a Friday, uh, because that's what they had to do, uh, to meet the objectives. And, um, and, and they're going to have to, to, um, to find new ways to measure engagement and also potential burnout. Um, friend of mine once had, uh, had a great metric that he called the parking lot metric. It was helpful as the parking lot at nine. And how full was it at five? >>And that gives you an indication of how engaged your developers are. Um, what's the digital equivalent equivalent to the parking lot metric in the time of COVID it's commit stats, it's commit rates. It's, um, you know, the, uh, the turn rate, uh, that we have in our code. So we have this information, we may not be collecting it, but then the next question becomes, how do we use that information? Do we use that information to say, well, this team isn't delivering as at the same level of productivity as another team, do we weaponize that data or do we use that data to identify impedances in the process? Um, why isn't a team working effectively? Is it because they have higher levels of family obligations and they've got kids that, that are at home? Um, is it because they're working with, um, you know, hardware technology, and guess what, they, it's not easy to get the hardware technology into their home office because it's in the lab at the, uh, at the corporate office, uh, or they're trying to communicate, uh, you know, halfway around the world. >>And, uh, they're communicating with a, with an office lab that is also shut down and, and, and the bandwidth just doesn't enable the, the level of high bandwidth communications. So from a dev ops perspective, managers have to get much more sensitive to the, the exhaust that the dev ops tools are throwing off, but also how they're going to use that in a constructive way to, to prevent burnout. And then they also need to, if they're not already managing or monitoring or measuring the level of developer engagement, they have, they really need to start whether that's surveys around developer satisfaction, um, whether it's, you know, more regular social events, uh, where developers can kind of just get together and drink a beer and talk about what's going on in the project, uh, and monitoring who checks in and who doesn't, uh, they have to, to, um, work harder, I think, than they ever have before. >>Well, and you mentioned burnout, and that's something that I think we've all faced in this time at varying levels and it changes. And it's a real, there's a tension in the air, regardless of where you are. There's a challenge, as you mentioned, people having, you know, coworker, their kids as coworkers and fighting for bandwidth, because everyone is forced in this situation. I'd love to get your perspective on some businesses that are, that have done this well, this adaptation, what can you share in terms of some real-world examples that might inspire the audience? >>Yeah. Uh, I'll start with, uh, stack overflow. Uh, they recently published a piece in the journal of the ACM around some of the things that they had discovered. Um, you know, first of all, just a cultural philosophy. If one person is remote, everybody is remote. And you just think that way from an executive level, um, social spaces. One of the things that they talk about doing is leaving a video conference room open at a team level all day long, and the team members, you know, we'll go on mute, you know, so that they don't have to, that they don't necessarily have to be there with somebody else listening to them. But if they have a question, they can just pop off mute really quickly and ask the question. And if anybody else knows the answer, it's kind of like being in that virtual pod. Uh, if you, uh, if you will, um, even here at Forrester, one of the things that we've done is we've invested in social ceremonies. >>We've actually moved our to our team meetings on, on my analyst team from, from once every two weeks to weekly. And we have built more time in for social Ajay socialization, just so we can see, uh, how, how, how we're doing. Um, I think Microsoft has really made some good, uh, information available in how they've managed things like the onboarding process. I think I'm Amanda silver over there mentioned that a couple of weeks ago when, uh, uh, a presentation they did that, uh, uh, Microsoft onboarded over 150,000 people since the start of COVID, if you don't have good remote onboarding processes, that's going to be a disaster. Now they're not all developers, but if you think about it, um, everything from how you do the interviewing process, uh, to how you get people, their badges, to how they get their equipment. Um, security is a, is another issue that they called out typically, uh, it security, um, the security of, of developers machines ends at, at, at the corporate desktop. >>But, you know, since we're increasingly using our own machines, our own hardware, um, security organizations kind of have to extend their security policies to cover, uh, employee devices, and that's caused them to scramble a little bit. Uh, so, so the examples are out there. It's not a lot of, like, we have to do everything completely differently, but it's a lot of subtle changes that, that have to be made. Um, I'll give you another example. Um, one of the things that, that we are seeing is that, um, more and more organizations to deal with the challenges around agility, with respect to delivering software, embracing low-code tools. In fact, uh, we see about 50% of firms are using low-code tools right now. We predict it's going to be 75% by the end of next year. So figuring out how your dev ops processes support an organization that might be using Mendix or OutSystems, or, you know, the power platform building the front end of an application, like a track and trace application really, really quickly, but then hooking it up to your backend infrastructure. Does that happen completely outside the dev ops investments that you're making and the agile processes that you're making, or do you adapt your organization? Um, our hybrid teams now teams that not just have professional developers, but also have business users that are doing some development with a low-code tool. Those are the kinds of things that we have to be, um, willing to, um, to entertain in order to shift the focus a little bit more toward the agility side, I think >>Lot of obstacles, but also a lot of opportunities for businesses to really learn, pay attention here, pivot and grow, and hopefully some good opportunities for the developers and the business folks to just get better at what they're doing and learning to embrace spiritual co-location Jeffrey, thank you so much for joining us on the program today. Very insightful conversation. >>My pleasure. It's it's, it's an important thing. Just remember if you're going to run that marathon, break it into 26, 10 minute runs, take a walk break in between each and you'll find that you'll get there. >>Digestible components, wise advice. Jeffery Hammond. Thank you so much for joining for Jeffrey I'm Lisa Martin, you're watching Broadcom's dev ops virtual forum >>From around the globe. It's the queue with digital coverage of dev ops virtual forum brought to you by Broadcom, >>Continuing our conversations here at Broadcom's dev ops virtual forum. Lisa Martin here, please. To welcome back to the program, Serge Lucio, the general manager of the enterprise software division at Broadcom. Hey, Serge. Welcome. Thank you. Good to be here. So I know you were just, uh, participating with the biz ops manifesto that just happened recently. I just had the chance to talk with Jeffrey Hammond and he unlocked this really interesting concept, but I wanted to get your thoughts on spiritual co-location as really a necessity for biz ops to succeed in this unusual time in which we're living. What are your thoughts on spiritual colocation in terms of cultural change versus adoption of technologies? >>Yeah, it's a, it's, it's quite interesting, right? When we, when we think about the major impediments for, uh, for dev ops implementation, it's all about culture, right? And swore over the last 20 years, we've been talking about silos. We'd be talking about the paradox for these teams to when it went to align in many ways, it's not so much about these teams aligning, but about being in the same car in the same books, right? It's really about fusing those teams around kind of the common purpose, a common objective. So to me, the, this, this is really about kind of changing this culture where people start to look at a kind of OKR is instead of the key objective, um, that, that drives the entire team. Now, what it means in practice is really that's, uh, we need to change a lot of behaviors, right? It's not about the Yarki, it's not about roles. It's about, you know, who can do what and when, and, uh, you know, driving a bias towards action. It also means that we need, I mean, especially in this school times, it becomes very difficult, right? To drive kind of a kind of collaboration between these teams. And so I think there there's a significant role that especially tools can play in terms of providing this complex feedback from teams to, uh, to be in that preface spiritual qualification. >>Well, and it talked about culture being, it's something that, you know, we're so used to talking about dev ops with respect to velocity, all about speed here. But of course this time everything changed so quickly, but going from the physical spaces to everybody being remote really does take it. It's very different than you can't replicate it digitally, but there are collaboration tools that can kind of really be essential to help that cultural shift. Right? >>Yeah. So 2020, we, we touch to talk about collaboration in a very mundane way. Like, of course we can use zoom. We can all get into, into the same room. But the point when I think when Jeff says spiritual, co-location, it's really about, we all share the same objective. Do we, do we have a niece who, for instance, our pipeline, right? When you talk about dev ops, probably we all started thinking about this continuous delivery pipeline that basically drives the automation, the orchestration across the team, but just thinking about a pipeline, right, at the end of the day, it's all about what is the meantime to beat back to these teams. If I'm a developer and a commit code, I don't, does it take where, you know, that code to be processed through pipeline pushy? Can I get feedback if I am a finance person who is funding a product or a project, what is my meantime to beat back? >>And so a lot of, kind of a, when we think about the pipeline, I think what's been really inspiring to me in the last year or so is that there is much more of an adoption of the Dora metrics. There is way more of a focus around value stream management. And to me, this is really when we talk about collaboration, it's really a balance. How do you provide the feedback to the different stakeholders across the life cycle in a very timely matter? And that's what we would need to get to in terms of kind of this, this notion of collaboration. It's not so much about people being in the same physical space. It's about, you know, when I checked in code, you know, to do I guess the system to automatically identify what I'm going to break. If I'm about to release some allegation, how can the system help me reduce my change pillar rates? Because it's, it's able to predict that some issue was introduced in the outpatient or work product. Um, so I think there's, there's a great role of technology and AI candidate Lynch to, to actually provide that new level of collaboration. >>So we'll get to AI in a second, but I'm curious, what are some of the, of the metrics you think that really matter right now is organizations are still in some form of transformation to this new almost 100% remote workforce. >>So I'll just say first, I'm not a big fan of metrics. Um, and the reason being that, you know, you can look at a change killer rate, right, or a lead time or cycle time. And those are, those are interesting metrics, right? The trend on metric is absolutely critical, but what's more important is you get to the root cause what is taught to you lean to that metric to degrade or improve or time. And so I'm much more interested and we, you know, fruit for Broadcom. Are we more interested in understanding what are the patterns that contribute to this? So I'll give you a very mundane example. You know, we know that cycle time is heavily influenced by, um, organizational boundaries. So, you know, we talk a lot about silos, but, uh, we we've worked with many of our customers doing value stream mapping. And oftentimes what you see is that really the boundaries of your organization creates a lot of idle time, right? So to me, it's less about the metrics. I think the door metrics are a pretty, you know, valid set metrics, but what's way more important is to understand what are the antiperspirants, what are the things that we can detect through the data that actually are affecting those metrics. And, uh, I mean, over the last 10, 20 years, we've learned a lot about kind of what are, what are the antiperspirants within our large enterprise customers. And there are plenty of them. >>What are some of the things that you're seeing now with respect to patterns that have developed over the last seven to eight months? >>So I think the two areas which clearly are evolving very quickly are on kind of the front end of the life cycle, where DevOps is more and more embracing value stream management value stream mapping. Um, and I think what's interesting is that in many ways the product is becoming the new silo. Uh, the notion of a product is very difficult by itself to actually define people are starting to recognize that a value stream is not its own little kind of Island. That in reality, when I define a product, this product, oftentimes as dependencies on our products and that in fact, you're looking at kind of a network of value streams, if you will. So, so even on that, and there is clearly kind of a new sets, if you will, of anti-patterns where products are being defined as a set of OTRs, they have interdependencies and you have have a new set of silos on the operands, uh, the Abra key movement to Israel and the SRE space where, um, I think there is a cultural clash while the dev ops side is very much embracing this notion of OTRs and value stream mapping and Belgium management. >>On the other end, you have the it operations teams. We still think business services, right? For them, they think about configure items, think about infrastructure. And so, you know, it's not uncommon to see, you know, teams where, you know, the operations team is still thinking about hundreds of thousands, tens of thousands of business services. And so the, the, there is there's this boundary where, um, I think, well, SRE is being put in place. And there's lots of thinking about what kind of metrics can be fined. I think, you know, going back to culture, I think there's a lot of cultural evolution that's still required for true operations team. >>And that's a hard thing. Cultural transformation in any industry pandemic or not is a challenging thing. You talked about, uh, AI and automation of minutes ago. How do you think those technologies can be leveraged by DevOps leaders to influence their successes and their ability to collaborate, maybe see eye to eye with the SRS? >>Yeah. Um, so th you're kind of too. So even for myself, as a leader of a, you know, 1500 people organization, there's a number of things I don't see right. On a daily basis. And, um, I think the, the, the, the technologies that we have at our disposal today from the AI are able to mind a lot of data and expose a lot of, uh, issues that's as leaders we may not be aware of. And some of the, some of these are pretty kind of easy to understand, right? We all think we're agile. And yet when you, when you start to understand, for instance, uh, what is the, what is the working progress right to during the sprint? Um, when you start to analyze the data you can detect, for instance, that maybe the teams are over committed, that there is too much work in progress. >>You can start to identify kind of, interdepencies either from a technology, from a people point of view, which were hidden, uh, you can start to understand maybe the change filler rates he's he is dragging. So I believe that there is a, there's a fundamental role to be played by the tools to, to expose again, these anti parents, to, to make these things visible to the teams, to be able to even compare teams. Right. One of the things that's, that's, uh, that's amazing is now we have access to tons of data, not just from a given customer, but across a large number of customers. And so we start to compare all of these teams kind of operate, and what's working, what's not working >>Thoughts on AI and automation as, as a facilitator of spiritual co-location. >>Yeah, absolutely. Absolutely. It's um, you know, th there's, uh, the problem we all face is the unknown, right? The, the law city, but volume variety of the data, uh, everyday we don't really necessarily completely appreciate what is the impact of our actions, right? And so, um, AI can really act as a safety net that enables us to, to understand what is the impact of our actions. Um, and so, yeah, in many ways, the ability to be informed in a timely matter to be able to interact with people on the basis of data, um, and collaborate on the data. And the actual matter, I think is, is a, is a very powerful enabler, uh, on, in that respect. I mean, I, I've seen, um, I've seen countless of times that, uh, for instance, at the SRE boundary, um, to basically show that we'll turn the quality attributes, so an incoming release, right. And exposing that to, uh, an operations person and a sorry person, and enabling that collaboration dialogue through data is a very, very powerful tool. >>Do you have any recommendations for how teams can use, you know, the SRE folks, the dev ops says can use AI and automation in the right ways to be successful rather than some ways that aren't going to be nonproductive. >>Yeah. So to me, the th there, there's a part of the question really is when, when we talk about data, there are there different ways you can use data, right? Um, so you can, you can do a lot of an analytics, predictive analytics. So I think there is a, there's a tendency, uh, to look at, let's say a, um, a specific KPI, like a, an availability KPI, or change filler rate, and to basically do a regression analysis and projecting all these things, going to happen in the future. To me, that that's, that's a, that's a bad approach. The reason why I fundamentally think it's a better approach is because we are systems. The way we develop software is, is a, is a non-leader kind of system, right? Software development is not linear nature. And so I think there's a D this is probably the worst approach is to actually focus on metrics on the other end. >>Um, if you, if you start to actually understand at a more granular level, what har, uh, which are the things which are contributing to this, right? So if you start to understand, for instance, that whenever maybe, you know, you affect a specific part of the application that translates into production issues. So we, we have, I've actually, uh, a customer who, uh, identified that, uh, over 50% of their unplanned outages were related to specific components in your architecture. And whenever these components were changed, this resulted in these plant outages. So if you start to be able to basically establish causality, right, cause an effect between kind of data across the last cycle. I think, I think this is the right way to, uh, to, to use AI. And so pharma to be, I think it's way more God could have a classification problem. What are the classes of problems that do exist and affect things as opposed to analytics, predictive, which I don't think is as powerful. >>So I mentioned in the beginning of our conversation, that just came off the biz ops manifesto. You're one of the authors of that. I want to get your thoughts on dev ops and biz ops overlapping, complimenting each other, what, from a, the biz ops perspective, what does it mean to the future of dev ops? >>Yeah, so, so it's interesting, right? If you think about DevOps, um, there's no felony document, right? Can we, we can refer to the Phoenix project. I mean, there are a set of documents which have been written, but in many ways, there's no clear definition of what dev ops is. Uh, if you go to the dev ops Institute today, you'll see that they are specific, um, trainings for instance, on value management on SRE. And so in many ways, the problem we have as an industry is that, um, there are set practices between agile dev ops, SRE Valley should management. I told, right. And we all basically talk about the same things, right. We all talk about essentially, um, accelerating in the meantime fee to feedback, but yet we don't have the common framework to talk about that. The other key thing is that we add to wait, uh, for, uh, for jeans, Jean Kim's Lascaux, um, to, uh, to really start to get into the business aspect, right? >>And for value stream mapping to start to emerge for us to start as an industry, right. It, to start to think about what is our connection with the business aspect, what's our purpose, right? And ultimately it's all about driving these business outcomes. And so to me, these ops is really about kind of, uh, putting a lens on this critical element that it's not business and it, that we in fact need to fuse business 19 that I need needs to transform itself to recognize that it's, it's this value generator, right. It's not a cost center. And so the relationship to me, it's more than BizOps provides kind of this Oliver or kind of framework, if you will. That set the context for what is the reason, uh, for it to exist. What's part of the core values and principles that it needs to embrace to, again, change from a cost center to a value center. And then we need to start to use this as a way to start to unify some of the, again, the core practices, whether it's agile, DevOps value, stream mapping SRE. Um, so, so I think over time, my hope is that we start to optimize a lot of our practices, language, um, and, uh, and cultural elements. >>Last question surgeon, the last few seconds we have here talking about this, the relation between biz ops and dev ops, um, what do you think as DevOps evolves? And as you talked to circle some of your insights, what should our audience keep their eyes on in the next six to 12 months? >>So to me, the key, the key, um, challenge for, for the industry is really around. So we were seeing a very rapid shift towards kind of, uh, product to product, right. Which we don't want to do is to recreate kind of these new silos, these hard silos. Um, so that, that's one of the big changes, uh, that I think we need to be, uh, to be really careful about, um, because it is ultimately, it is about culture. It's not about, uh, it's not about, um, kind of how we segment the work, right. And, uh, any true culture that we can overcome kind of silos. So back to, I guess, with Jeffrey's concept of, um, kind of the spiritual co-location, I think it's, it's really about that too. It's really about kind of, uh, uh, focusing on the business outcomes on kind of aligning on driving engagement across the teams, but, but not for create a, kind of a new set of silos, which instead of being vertical are going to be these horizontal products >>Crazy by surge that looking at culture as kind of a way of really, uh, uh, addressing and helping to, uh, re re reduce, replace challenges. We thank you so much for sharing your insights and your time at today's DevOps virtual forum. >>Thank you. Thanks for your time. >>I'll be right back >>From around the globe it's the cube with digital coverage of devops virtual forum brought to you by Broadcom. >>Welcome to Broadcom's DevOps virtual forum, I'm Lisa Martin, and I'm joined by another Martin, very socially distanced from me all the way coming from Birmingham, England is Glynn Martin, the head of QA transformation at BT. Glynn, it's great to have you on the program. Thank you, Lisa. I'm looking forward to it. As we said before, we went live to Martins for the person one in one segment. So this is going to be an interesting segment guys, what we're going to do is Glynn's going to give us a really kind of deep inside out view of devops from an evolution perspective. So Glynn, let's start. Transformation is at the heart of what you do. It's obviously been a very transformative year. How have the events of this year affected the >> transformation that you are still responsible for driving? Yeah. Thank you, Lisa. I mean, yeah, it has been a difficult year. >>Um, and although working for BT, which is a global telecommunications company, um, I'm relatively resilient, I suppose, as a, an industry, um, through COVID obviously still has been affected and has got its challenges. And if anything, it's actually caused us to accelerate our transformation journey. Um, you know, we had to do some great things during this time around, um, you know, in the UK for our emergency and, um, health workers give them unlimited data and for vulnerable people to support them. And that's spent that we've had to deliver changes quickly. Um, but what we want to be able to do is deliver those kinds of changes quickly, but sustainably for everything that we do, not just because there's an emergency. Um, so we were already on the kind of journey to agile, but ever more important now that we are, we are able to do those, that kind of work, do it more quickly. >>Um, and that it works because the, the implications of it not working is, can be terrible in terms of you know, we've been supporting testing centers,  new hospitals to treat COVID patients. So we need to get it right. And then therefore the coverage of what we do, the quality of what we do and how quickly we do it really has taken on a new scale and what was already a very competitive market within the telco industry within the UK. Um, you know, what I would say is that, you know, we are under pressure to deliver more value, but we have small cost challenges. We have to obviously, um, deal with the fact that, you know, COVID 19 has hit most industries kind of revenues and profits. So we've got this kind of paradox between having less costs, but having to deliver more value quicker and  to higher quality. So yeah, certainly the finances is, um, on our minds and that's why we need flexible models, cost models that allow us to kind of do growth, but we get that growth by showing that we're delivering value. Um, especially in these times when there are financial challenges on companies. So one of the things that I want to ask you about, I'm again, looking at DevOps from the inside >>Out and the evolution that you've seen, you talked about the speed of things really accelerating in this last nine months or so. When we think dev ops, we think speed. But one of the things I'd love to get your perspective on is we've talked about in a number of the segments that we've done for this event is cultural change. What are some of the things that you've seen there as, as needing to get, as you said, get things right, but done so quickly to support essential businesses, essential workers. How have you seen that cultural shift? >>Yeah, I think, you know, before test teams for themselves at this part of the software delivery cycle, um, and actually now really our customers are expecting that quality and to deliver for our customers what they want, quality has to be ingrained throughout the life cycle. Obviously, you know, there's lots of buzzwords like shift left. Um, how do we do shift left testing? Um, but for me, that's really instilling quality and given capabilities shared capabilities throughout the life cycle that drive automation, drive improvements. I always say that, you know, you're only as good as your lowest common denominator. And one thing that we were finding on our dev ops journey was that we  would be trying to do certain things quick, we had automated build, automated tests. But if we were taking a weeks to create test scripts, or we were taking weeks to manually craft data, and even then when we had taken so long to do it, that the coverage was quite poor and that led to lots of defects later on in the life cycle, or even in our production environment, we just couldn't afford to do that. >>And actually, focusing on continuous testing over the last nine to 12 months has really given us the ability to deliver quickly across the whole life cycle. And therefore actually go from doing a kind of semi agile kind of thing, where we did the user stories, we did a few of the kind of agile ceremonies, but we weren't really deploying any quicker into production because our stakeholders were scared that we didn't have the same control that we had when we had more waterfall releases. And, you know, when we didn't think of ourselves. So we've done a lot of work on every aspect, um, especially from a testing point of view, every aspect of every activity, rather than just looking at automated tests, you know, whether it is actually creating the test in the first place, whether it's doing security testing earlier in the lot and performance testing in the life cycle, et cetera. So, yeah,  it's been a real key thing that for CT, for us to drive DevOps, >>Talk to me a little bit about your team. What are some of the shifts in terms of expectations that you're experiencing and how your team interacts with the internal folks from pipeline through life cycle? >>Yeah, we've done a lot of work on this. Um, you know, there's a thing that I think people will probably call it a customer experience gap, and it reminds me of a Gilbert cartoon, where we start with the requirements here and you're almost like a Chinese whisper effects and what we deliver is completely different. So we think the testing team or the delivery teams, um, know in our teeth has done a great job. This is what it said in the acceptance criteria, but then our customers are saying, well, actually that's not working this isn't working and there's this kind of gap. Um, we had a great launch this year of agile requirements, it's one of the Broadcom tools. And that was the first time in, ever since I remember actually working within BT, I had customers saying to me, wow, you know, we want more of this. >>We want more projects to have extra requirements design on it because it allowed us to actually work with the business collaboratively. I mean, we talk about collaboration, but how do we actually, you know, do that and have something that both the business and technical people can understand. And we've actually been working with the business , using agile requirements designer to really look at what the requirements are, tease out requirements we hadn't even thought of and making sure that we've got high levels of test coverage. And what we actually deliver at the end of it, not only have we been able to generate tests more quickly, but we've got much higher test coverage and also can more smartly, using the kind of AI within the tool and then some of the other kinds of pipeline tools, actually deliver to choose the right tasks, and actually doing a risk based testing approach. So that's been a great launch this year, but just the start of many kinds of things that we're doing >>Well, what I hear in that, Glynn is a lot of positives that have come out of a very challenging situation. Talk to me about it. And I liked that perspective. This is a very challenging time for everybody in the world, but it sounds like from a collaboration perspective you're right, we talk about that a lot critical with devops. But those challenges there, you guys were able to overcome those pretty quickly. What other challenges did you face and figure out quickly enough to be able to pivot so fast? >>I mean, you talked about culture. You know, BT is like most companies  So it's very siloed. You know we're still trying to work to become closer as a company. So I think there's a lot of challenges around how would you integrate with other tools? How would you integrate with the various different technologies. And BT, we have 58 different IT stacks. That's not systems, that's stacks, all of those stacks can have hundreds of systems. And we're trying to, we've got a drive at the moment, a simplified program where we're trying to you know, reduce that number to 14 stacks. And even then there'll be complexity behind the scenes that we will be challenged more and more as we go forward. How do we actually highlight that to our users? And as an it organization, how do we make ourselves leaner, so that even when we've still got some of that legacy, and we'll never fully get rid of it and that's the kind of trade off that we have to make, how do we actually deal with that and hide that from our users and drive those programs, so we can, as I say, accelerate change,  reduce that kind of waste and that kind of legacy costs out of our business. You know, the other thing as well, I'm sure telecoms is probably no different to insurance or finance. When you take the number of products that we do, and then you combine them, the permutations are tens and hundreds of thousands of products. So we, as a business are trying to simplify, we are trying to do that in an agile way. >>And haven't tried to do agile in the proper way and really actually work at pace, really deliver value. So I think what we're looking more and more at the moment is actually  more value focused. Before we used to deliver changes sometimes into production. Someone had a great idea, or it was a great idea nine months ago or 12 months ago, but actually then we ended up deploying it and then we'd look at the users, the usage of that product or that application or whatever it is, and it's not being used for six months. So we haven't got, you know, the cost of the last 12 months. We certainly haven't gotten room for that kind of waste and, you know, for not really understanding the value of changes that we are doing. So I think that's the most important thing of the moment, it's really taking that waste out. You know, there's lots of focus on things like flow management, what bits of our process are actually taking too long. And we've started on that journey, but we've got a hell of a long way to go. But that involves looking at every aspect of the software delivery cycle. >> Going from, what 58 IT stacks down to 14 or whatever it's going to be, simplifying sounds magical to everybody. It's a big challenge. What are some of the core technology capabilities that you see really as kind of essential for enabling that with this new way that you're working? >>Yeah. I mean, I think we were started on a continuous testing journey, and I think that's just the start. I mean as I say, looking at every aspect of, you know, from a QA point of view is every aspect of what we do. And it's also looking at, you know, we've started to branch into more like AI, uh, AI ops and, you know, really the full life cycle. Um, and you know, that's just a stepping stone to, you know, I think autonomics is the way forward, right. You know, all of this kind of stuff that happens, um, you know, monitoring, uh, you know, watching the systems what's happening in production, how do we feed that back? How'd you get to a point where actually we think about change and then suddenly it's in production safely, or if it's not going to safety, it's automatically backing out. So, you know, it's a very, very long journey, but if we want to, you know, in a world where the pace is in ever-increasing and the demands for the team, and, you know, with the pressures on, at the moment where we're being asked to do things, uh, you know, more efficiently and as lean as possible, we need to be thinking about every part of the process and how we put the kind of stepping stones in place to lead us to a more automated kind of, um, you know, um, the future. >>Do you feel that that planned outcomes are starting to align with what's delivered, given this massive shift that you're experiencing? >>I think it's starting to, and I think, you know, as I say, as we look at more of a value based approach, um, and, um, you know, as I say, print, this was a kind of flow management. I think that that will become ever, uh, ever more important. So, um, I think it starting to people certainly realize that, you know, teams need to work together, you know, the kind of the cousin between business and it, especially as we go to more kind of SAS based solutions, low code solutions, you know, there's not such a gap anymore, actually, some of our business partners that expense to be much more tech savvy. Um, so I think, you know, this is what we have to kind of appreciate what is its role, how do we give the capabilities, um, become more of a centers of excellence rather than actually doing mounds amounts of work. And for me, and from a testing point of view, you know, mounds and mounds of testing, actually, how do we automate that? How do we actually generate that instead of, um, create it? I think that's the kind of challenge going forward. >>What are some, as we look forward, what are some of the things that you would like to see implemented or deployed in the next, say six to 12 months as we hopefully round a corner with this pandemic? >>Yeah, I think, um, you know, certainly for, for where we are as a company from a QA perspective, we are, um, you let's start in bits that we do well, you know, we've started creating, um, continuous delivery and DevOps pipelines. Um, there's still manual aspects of that. So, you know, certainly for me, I I've challenged my team with saying how do we do an automated journey? So if I put a requirement in JIRA or rally or wherever it is and why then click a button and, you know, with either zero touch for one such, then put that into production and have confidence that, that has been done safely and that it works and what happens if it doesn't work. So, you know, that's, that's the next, um, the next few months, that's what our concentration, um, is, is about. But it's also about decision-making, you know, how do you actually understand those value judgments? >>And I think there's lots of the things dev ops, AI ops, kind of that always ask aspects of business operations. I think it's about having the information in one place to make those kinds of decisions. How does it all try and tie it together? As I say, even still with kind of dev ops, we've still got elements within my company where we've got lots of different organizations doing some, doing similar kinds of things, but they're all kind of working in silos. So I think having AI ops as it comes more and more to the fore as we go to cloud, and that's what we need to, you know, we're still very early on in our cloud journey, you know, so we need to make sure the technologies work with cloud as well as you can have, um, legacy systems, but it's about bringing that all together and having a full, visible pipeline, um, that everybody can see and make decisions. >>You said the word confidence, which jumped out at me right away, because absolutely you've got to have be able to have confidence in what your team is delivering and how it's impacting the business and those customers. Last question then for you is how would you advise your peers in a similar situation to leverage technology automation, for example, dev ops, to be able to gain the confidence that they're making the right decisions for their business? >>I think the, the, the, the, the approach that we've taken actually is not started with technology. Um, we've actually taken a human centered design, uh, as a core principle of what we do, um, within the it part of BT. So by using human centered design, that means we talk to our customers, we understand their pain points, we map out their current processes. Um, and then when we mapped out what this process does, it also understand their aspirations as well, you know? Um, and where do they want to be in six months? You know, do they want it to be, um, more agile and, you know, or do they want to, you know, is, is this a part of their business that they want to do one better? We actually then looked at why that's not running well, and then see what, what solutions are out there. >>We've been lucky that, you know, with our partnership, with Broadcom within the payer line, lots of the tools and the PLA have directly answered some of the business's problems. But I think by having those conversations and actually engaging with the business, um, you know, especially if the business hold the purse strings, which in, in, uh, you know, in some companies include not as they do there is that kind of, you know, almost by understanding their, their pain points and then starting, this is how we can solve your problem. Um, is we've, we've tended to be much more successful than trying to impose something and say, well, here's the technology that they don't quite understand. It doesn't really understand how it kind of resonates with their problems. So I think that's the heart of it. It's really about, you know, getting, looking at the data, looking at the processes, looking at where the kind of waste is. >>And then actually then looking at the right solutions. Then, as I say, continuous testing is massive for us. We've also got a good relationship with Apple towards looking at visual AI. And actually there's a common theme through that. And I mean, AI is becoming more and more prevalent. And I know, you know, sometimes what is AI and people have kind of this semantics of, is it true AI or not, but it's certainly, you know, AI machine learning is becoming more and more prevalent in the way that we work. And it's allowing us to be much more effective, be quicker in what we do and be more accurate. And, you know, whether it's finding defects running the right tests or, um, you know, being able to anticipate problems before they're happening in a production environment. >>Well, thank you so much for giving us this sort of insight outlook at dev ops sharing the successes that you're having, taking those challenges, converting them to opportunities and forgiving folks who might be in your shoes, or maybe slightly behind advice enter. They appreciate it. We appreciate your time. >>Well, it's been an absolute pleasure, really. Thank you for inviting me. I have a extremely enjoyed it. So thank you ever so much. >>Excellent. Me too. I've learned a lot for Glenn Martin. I'm Lisa Martin. You're watching the cube >>Driving revenue today means getting better, more valuable software features into the hands of your customers. If you don't do it quickly, your competitors as well, but going faster without quality creates risks that can damage your brand destroy customer loyalty and cost millions to fix dev ops from Broadcom is a complete solution for balancing speed and risk, allowing you to accelerate the flow of value while minimizing the risk and severity of critical issues with Broadcom quality becomes integrated across the entire DevOps pipeline from planning to production, actionable insights, including our unique readiness score, provide a three 60 degree view of software quality giving you visibility into potential issues before they become disasters. Dev ops leaders can manage these risks with tools like Canary deployments tested on a small subset of users, or immediately roll back to limit the impact of defects for subsequent cycles. Dev ops from Broadcom makes innovation improvement easier with integrated planning and continuous testing tools that accelerate the flow of value product requirements are used to automatically generate tests to ensure complete quality coverage and tests are easily updated. >>As requirements change developers can perform unit testing without ever leaving their preferred environment, improving efficiency and productivity for the ultimate in shift left testing the platform also integrates virtual services and test data on demand. Eliminating two common roadblocks to fast and complete continuous testing. When software is ready for the CIC CD pipeline, only DevOps from Broadcom uses AI to prioritize the most critical and relevant tests dramatically improving feedback speed with no decrease in quality. This release is ready to go wherever you are in your DevOps journey. Broadcom helps maximize innovation velocity while managing risk. So you can deploy ideas into production faster and release with more confidence from around the globe. It's the queue with digital coverage of dev ops virtual forum brought to you by Broadcom. >>Hi guys. Welcome back. So we have discussed the current state and the near future state of dev ops and how it's going to evolve from three unique perspectives. In this last segment, we're going to open up the floor and see if we can come to a shared understanding of where dev ops needs to go in order to be successful next year. So our guests today are, you've seen them all before Jeffrey Hammond is here. The VP and principal analyst serving CIO is at Forester. We've also Serge Lucio, the GM of Broadcom's enterprise software division and Glenn Martin, the head of QA transformation at BT guys. Welcome back. Great to have you all three together >>To be here. >>All right. So we're very, we're all very socially distanced as we've talked about before. Great to have this conversation. So let's, let's start with one of the topics that we kicked off the forum with Jeff. We're going to start with you spiritual co-location that's a really interesting topic that we've we've uncovered, but how much of the challenge is truly cultural and what can we solve through technology? Jeff, we'll start with you then search then Glen Jeff, take it away. >>Yeah, I think fundamentally you can have all the technology in the world and if you don't make the right investments in the cultural practices in your development organization, you still won't be effective. Um, almost 10 years ago, I wrote a piece, um, where I did a bunch of research around what made high-performance teams, software delivery teams, high performance. And one of the things that came out as part of that was that these teams have a high level of autonomy. And that's one of the things that you see coming out of the agile manifesto. Let's take that to today where developers are on their own in their own offices. If you've got teams where the team itself had a high level of autonomy, um, and they know how to work, they can make decisions. They can move forward. They're not waiting for management to tell them what to do. >>And so what we have seen is that organizations that embraced autonomy, uh, and got their teams in the right place and their teams had the information that they needed to make the right decisions have actually been able to operate pretty well, even as they've been remote. And it's turned out to be things like, well, how do we actually push the software that we've created into production that would become the challenge is not, are we writing the right software? And that's why I think the term spiritual co-location is so important because even though we may be physically distant, we're on the same plane, we're connected from a, from, from a, a shared purpose. Um, you know, surgeon, I worked together a long, long time ago. So it's been what almost 15, 16 years since we were at the same place. And yet I would say there's probably still a certain level of spiritual co-location between us, uh, because of the shared purposes that we've had in the past and what we've seen in the industry. And that's a really powerful tool, uh, to build on. So what do tools play as part of that, to the extent that tools make information available, to build shared purpose on to the extent that they enable communication so that we can build that spiritual co-location to the extent that they reinforce the culture that we want to put in place, they can be incredibly valuable, especially when, when we don't have the luxury of physical locate physical co-location. Okay. That makes sense. >>It does. I shouldn't have introduced us. This last segment is we're all spiritually co-located or it's a surge, clearly you're still spiritually co located with jump. Talk to me about what your thoughts are about spiritual of co-location the cultural impact and how technology can move it forward. >>Yeah. So I think, well, I'm going to sound very similar to Jeff in that respect. I think, you know, it starts with kind of a shared purpose and the other understanding, Oh, individuals teams, uh, contributed to kind of a business outcome, what is our shared goal or shared vision? What's what is it we're trying to achieve collectively and keeping it kind of aligned to that? Um, and so, so it's really starts with that now, now the big challenge, always these over the last 20 years, especially in large organization, there's been specialization of roles and functions. And so we, we all that started to basically measure which we do, uh, on a daily basis using metrics, which oftentimes are completely disconnected from kind of a business outcome or purpose. We, we kind of reverted back to, okay, what is my database all the time? What is my cycle time? >>Right. And, and I think, you know, which we can do or where we really should be focused as an industry is to start to basically provide a lens or these different stakeholders to look at what they're doing in the context of kind of these business outcomes. So, um, you know, probably one of my, um, favorites experience was to actually weakness at one of a large financial institution. Um, you know, Tuesday Golder's unquote development and operations staring at the same data, right. Which was related to, you know, in calming changes, um, test execution results, you know, Coverity coverage, um, official liabilities and all the all ran. It could have a direction level links. And that's when you start to put these things in context and represent that to you in a way that these different stakeholders can, can look at from their different lens. And, uh, and it can start to basically communicate and, and understand have they joined our company to, uh, to, to that kind of common view or objective. >>And Glen, we talked a lot about transformation with you last time. What are your thoughts on spiritual colocation and the cultural part, the technology impact? >>Yeah, I mean, I agree with Jeffrey that, you know, um, the people and culture, the most important thing, actually, that's why it's really important when you're transforming to have partners who have the same vision as you, um, who, who you can work with, have the same end goal in mind. And w I've certainly found that with our, um, you know, continuing relationship with Broadcom, what it also does though, is although, you know, tools can accelerate what you're doing and can join consistency. You know, we've seen within simplify, which is BTS flagship transformation program, where we're trying to, as it can, it says simplify the number of systems stacks that we have, the number of products that we have actually at the moment, we've got different value streams within that program who have got organizational silos. We were trying to rewrite, rewrite the wheel, um, who are still doing things manually. >>So in order to try and bring that consistency, we need the right tools that actually are at an enterprise grade, which can be flexible to work with in BT, which is such a complex and very dev, uh, different environments, depending on what area of BT you're in, whether it's a consumer, whether it's a mobile area, whether it's large global or government organizations, you know, we found that we need tools that can, um, drive that consistency, but also flex to Greenfield brownfield kind of technologies as well. So it's really important that as I say, for a number of different aspects, that you have the right partner, um, to drive the right culture, I've got the same vision, but also who have the tool sets to help you accelerate. They can't do that on their own, but they can help accelerate what it is you're trying to do in it. >>And a really good example of that is we're trying to shift left, which is probably a, quite a bit of a buzz phrase in their kind of testing world at the moment. But, you know, I could talk about things like continuous delivery direct to when a ball comes tools and it has many different features to it, but very simply on its own, it allows us to give the visibility of what the teams are doing. And once we have that visibility, then we can talk to the teams, um, around, you know, could they be doing better component testing? Could they be using some virtualized services here or there? And that's not even the main purpose of continuous delivery director, but it's just a reason that tools themselves can just give greater visibility of have much more intuitive and insightful conversations with other teams and reduce those organizational silos. >>Thanks, Ben. So we'd kind of sum it up, autonomy collaboration tools that facilitate that. So let's talk now about metrics from your perspectives. What are the metrics that matter? Jeff, >>I'm going to go right back to what Glenn said about data that provides visibility that enables us to, to make decisions, um, with shared purpose. And so business value has to be one of the first things that we look at. Um, how do we assess whether we have built something that is valuable, you know, that could be sales revenue, it could be net promoter score. Uh, if you're not selling what you've built, it could even be what the level of reuse is within your organization or other teams picking up the services, uh, that you've created. Um, one of the things that I've begun to see organizations do is to align value streams with customer journeys and then to align teams with those value streams. So that's one of the ways that you get to a shared purpose, cause we're all trying to deliver around that customer journey, the value with it. >>And we're all measured on that. Um, there are flow metrics which are really important. How long does it take us to get a new feature out from the time that we conceive it to the time that we can run our first experiments with it? There are quality metrics, um, you know, some of the classics or maybe things like defect, density, or meantime to response. Um, one of my favorites came from a, um, a company called ultimate software where they looked at the ratio of defects found in production to defects found in pre production and their developers were in fact measured on that ratio. It told them that guess what quality is your job to not just the test, uh, departments, a group, the fourth level that I think is really important, uh, in, in the current, uh, situation that we're in is the level of engagement in your development organization. >>We used to joke that we measured this with the parking lot metric helpful was the parking lot at nine. And how full was it at five o'clock. I can't do that anymore since we're not physically co-located, but what you can do is you can look at how folks are delivering. You can look at your metrics in your SCM environment. You can look at, uh, the relative rates of churn. Uh, you can look at things like, well, are our developers delivering, uh, during longer periods earlier in the morning, later in the evening, are they delivering, uh, you know, on the weekends as well? Are those signs that we might be heading toward a burnout because folks are still running at sprint levels instead of marathon levels. Uh, so all of those in combination, uh, business value, uh, flow engagement in quality, I think form the backbone of any sort of, of metrics, uh, a program. >>The second thing that I think you need to look at is what are we going to do with the data and the philosophy behind the data is critical. Um, unfortunately I see organizations where they weaponize the data and that's completely the wrong way to look at it. What you need to do is you need to say, you need to say, how is this data helping us to identify the blockers? The things that aren't allowing us to provide the right context for people to do the right thing. And then what do we do to remove those blockers, uh, to make sure that we're giving these autonomous teams the context that they need to do their job, uh, in a way that creates the most value for the customers. >>Great advice stuff, Glenn, over to your metrics that matter to you that really make a big impact. And, and, and also how do you measure quality kind of following onto the advice that Jeff provided? >>That's some great advice. Actually, he talks about value. He talks about flow. Both of those things are very much on my mind at the moment. Um, but there was this, I listened to a speaker, uh, called me Kirsten a couple of months ago. It taught very much around how important flow management is and removing, you know, and using that to remove waste, to understand in terms of, you know, making software changes, um, what is it that's causing us to do it longer than we need to. So where are those areas where it takes long? So I think that's a very important thing for us. It's even more basic than that at the moment, we're on a journey from moving from kind of a waterfall to agile. Um, and the problem with moving from waterfall to agile is with waterfall, the, the business had a kind of comfort that, you know, everything was tested together and therefore it's safer. >>Um, and with agile, there's that kind of, you know, how do we make sure that, you know, if we're doing things quick and we're getting stuff out the door that we give that confidence, um, that that's ready to go, or if there's a risk that we're able to truly articulate what that risk is. So there's a bit about release confidence, um, and some of the metrics around that and how, how healthy those releases are, and actually saying, you know, we spend a lot of money, um, um, an investment setting up our teams, training our teams, are we actually seeing them deliver more quickly and are we actually seeing them deliver more value quickly? So yeah, those are the two main things for me at the moment, but I think it's also about, you know, generally bringing it all together, the dev ops, you know, we've got the kind of value ops AI ops, how do we actually bring that together to so we can make quick decisions and making sure that we are, um, delivering the biggest bang for our buck, absolutely biggest bang for the buck, surge, your thoughts. >>Yeah. So I think we all agree, right? It starts with business metrics, flow metrics. Um, these are kind of the most important metrics. And ultimately, I mean, one of the things that's very common across a highly functional teams is engagements, right? When, when you see a team that's highly functioning, that's agile, that practices DevOps every day, they are highly engaged. Um, that that's, that's definitely true. Now the, you know, back to, I think, uh, Jeff's point on weaponization of metrics. One of the key challenges we see is that, um, organizations traditionally have been kind of, uh, you know, setting up benchmarks, right? So what is a good cycle time? What is a good lead time? What is a good meantime to repair? The, the problem is that this is very contextual, right? It varies. It's going to vary quite a bit, depending on the nature of application and system. >>And so one of the things that we really need to evolve, um, as an industry is to understand that it's not so much about those flow metrics is about our, these four metrics ultimately contribute to the business metric to the business outcome. So that's one thing. The second aspect, I think that's oftentimes misunderstood is that, you know, when you have a bad cycle time or, or, or what you perceive as being a buy cycle time or better quality, the problem is oftentimes like all, do you go and explore why, right. What is the root cause of this? And I think one of the key challenges is that we tend to focus a lot of time on metrics and not on the eye type patterns, which are pretty common across the industry. Um, you know, if you look at, for instance, things like lead time, for instance, it's very common that, uh, organizational boundaries are going to be a key contributor to badly time. >>And so I think that there is, you know, the only the metrics there is, I think a lot of work that we need to do in terms of classifying, descend type patterns, um, you know, back to you, Jeff, I think you're one of the cool offers of waterscrumfall as a, as, as a key pattern, the industry or anti-spatter. Um, but waterscrumfall right is a key one, right? And you will detect that through kind of a defect arrival rates. That's where that looks like an S-curve. And so I think it's beyond kind of the, the metrics is what do you do with those metrics? >>Right? I'll tell you a search. One of the things that is really interesting to me in that space is I think those of us had been in industry for a long time. We know the anti-patterns cause we've seen them in our career maybe in multiple times. And one of the things that I think you could see tooling do is perhaps provide some notification of anti-patterns based on the telemetry that comes in. I think it would be a really interesting place to apply, uh, machine learning and reinforcement learning techniques. Um, so hopefully something that we'd see in the future with dev ops tools, because, you know, as a manager that, that, you know, may be only a 10 year veteran or 15 year veteran, you may be seeing these anti-patterns for the first time. And it would sure be nice to know what to do, uh, when they start to pop up, >>That would right. Insight, always helpful. All right, guys, I would like to get your final thoughts on this. The one thing that you believe our audience really needs to be on the lookout for and to put on our agendas for the next 12 months, Jeff will go back to you. Okay. >>I would say look for the opportunities that this disruption presents. And there are a couple that I see, first of all, uh, as we shift to remote central working, uh, we're unlocking new pools of talent, uh, we're, it's possible to implement, uh, more geographic diversity. So, so look to that as part of your strategy. Number two, look for new types of tools. We've seen a lot of interest in usage of low-code tools to very quickly develop applications. That's potentially part of a mainstream strategy as we go into 2021. Finally, make sure that you embrace this idea that you are supporting creative workers that agile and dev ops are the peanut butter and chocolate to support creative, uh, workers with algorithmic capabilities, >>Peanut butter and chocolate Glen, where do we go from there? What are, what's the one silver bullet that you think folks to be on the lookout for now? I, I certainly agree that, um, low, low code is, uh, next year. We'll see much more low code we'd already started going, moving towards a more of a SAS based world, but low code also. Um, I think as well for me, um, we've still got one foot in the kind of cow camp. Um, you know, we'll be fully trying to explore what that means going into the next year and exploiting the capabilities of cloud. But I think the last, um, the last thing for me is how do you really instill quality throughout the kind of, um, the, the life cycle, um, where, when I heard the word scrum fall, it kind of made me shut it because I know that's a problem. That's where we're at with some of our things at the moment we need to get beyond that. We need >>To be releasing, um, changes more frequently into production and actually being a bit more brave and having the confidence to actually do more testing in production and go straight to production itself. So expect to see much more of that next year. Um, yeah. Thank you. I haven't got any food analogies. Unfortunately we all need some peanut butter and chocolate. All right. It starts to take us home. That's what's that nugget you think everyone needs to have on their agendas? >>That's interesting. Right. So a couple of days ago we had kind of a latest state of the DevOps report, right? And if you read through the report, it's all about the lost city, but it's all about sweet. We still are receiving DevOps as being all about speed. And so to me, the key advice is in order to create kind of a spiritual collocation in order to foster engagement, we have to go back to what is it we're trying to do collectively. We have to go back to tie everything to the business outcome. And so for me, it's absolutely imperative for organizations to start to plot their value streams, to understand how they're delivering value into aligning everything they do from a metrics to deliver it, to flow to those metrics. And only with that, I think, are we going to be able to actually start to really start to align kind of all these roles across the organizations and drive, not just speed, but business outcomes, >>All about business outcomes. I think you guys, the three of you could write a book together. So I'll give you that as food for thought. Thank you all so much for joining me today and our guests. I think this was an incredibly valuable fruitful conversation, and we appreciate all of you taking the time to spiritually co-located with us today, guys. Thank you. Thank you, Lisa. Thank you. Thank you for Jeff Hammond serves Lucio and Glen Martin. I'm Lisa Martin. Thank you for watching the broad cops Broadcom dev ops virtual forum.

Published Date : Nov 18 2020

SUMMARY :

of dev ops virtual forum brought to you by Broadcom. Nice to talk with you today. It's good to be here. One of the things that we think of is speed, it was essentially a sprint, you know, you run as hard as you can for as fast as you can And it's almost like, you know, if you've ever run a marathon the first mile or two in the marathon, um, we have to think about all the activities that you need to do from a dev ops perspective and to hiring, you know, achieve higher levels of digitization in our processes and We've said that the key to success with agile at the team level is cross-functional organizations, as you say, going from, you know, physical workspaces, uh, agile manifesto, you know, there were four principles that were espoused individuals and interactions is important to make sure that that agility is there for one thing, you have to defer decisions So those teams have to be empowered to make decisions because you can't have a I think we all could use some of that, but, you know, you talked about in the beginning and I've, Um, when everybody was in the office, you could kind of see the And that gives you an indication of how engaged your developers are. um, whether it's, you know, more regular social events, that have done this well, this adaptation, what can you share in terms of some real-world examples that might Um, you know, first of all, since the start of COVID, if you don't have good remote onboarding processes, Those are the kinds of things that we have to be, um, willing to, um, and the business folks to just get better at what they're doing and learning to embrace It's it's, it's an important thing. Thank you so much for joining for Jeffrey I'm Lisa Martin, of dev ops virtual forum brought to you by Broadcom, I just had the chance to talk with Jeffrey Hammond and he unlocked this really interesting concept, uh, you know, driving a bias towards action. Well, and it talked about culture being, it's something that, you know, we're so used to talking about dev ops with respect does it take where, you know, that code to be processed through pipeline pushy? you know, when I checked in code, you know, to do I guess the system to automatically identify what So we'll get to AI in a second, but I'm curious, what are some of the, of the metrics you think that really matter right And so I'm much more interested and we, you know, fruit for Broadcom. are being defined as a set of OTRs, they have interdependencies and you have have a new set And so, you know, it's not uncommon to see, you know, teams where, you know, How do you think those technologies can be leveraged by DevOps leaders to influence as a leader of a, you know, 1500 people organization, there's a number of from a people point of view, which were hidden, uh, you can start to understand maybe It's um, you know, you know, the SRE folks, the dev ops says can use AI and automation in the right ways Um, so you can, you can do a lot of an analytics, predictive analytics. So if you start to understand, for instance, that whenever maybe, you know, So I mentioned in the beginning of our conversation, that just came off the biz ops manifesto. the problem we have as an industry is that, um, there are set practices between And so to me, these ops is really about kind of, uh, putting a lens on So to me, the key, the key, um, challenge for, We thank you so much for sharing your insights and your time at today's DevOps Thanks for your time. of devops virtual forum brought to you by Broadcom. Transformation is at the heart of what you do. transformation that you are still responsible for driving? you know, we had to do some great things during this time around, um, you know, in the UK for one of the things that I want to ask you about, I'm again, looking at DevOps from the inside But one of the things I'd love to get your perspective I always say that, you know, you're only as good as your lowest And, you know, What are some of the shifts in terms of expectations Um, you know, there's a thing that I think people I mean, we talk about collaboration, but how do we actually, you know, do that and have something that did you face and figure out quickly enough to be able to pivot so fast? and that's the kind of trade off that we have to make, how do we actually deal with that and hide that from So we haven't got, you know, the cost of the last 12 months. What are some of the core technology capabilities that you see really as kind demands for the team, and, you know, with the pressures on, at the moment where we're being asked to do things, And for me, and from a testing point of view, you know, mounds and mounds of testing, we are, um, you let's start in bits that we do well, you know, we've started creating, ops as it comes more and more to the fore as we go to cloud, and that's what we need to, Last question then for you is how would you advise your peers in a similar situation to You know, do they want it to be, um, more agile and, you know, or do they want to, especially if the business hold the purse strings, which in, in, uh, you know, in some companies include not as they And I know, you know, sometimes what is AI Well, thank you so much for giving us this sort of insight outlook at dev ops sharing the So thank you ever so much. I'm Lisa Martin. the entire DevOps pipeline from planning to production, actionable This release is ready to go wherever you are in your DevOps journey. Great to have you all three together We're going to start with you spiritual co-location that's a really interesting topic that we've we've And that's one of the things that you see coming out of the agile Um, you know, surgeon, I worked together a long, long time ago. Talk to me about what your thoughts are about spiritual of co-location I think, you know, it starts with kind of a shared purpose and the other understanding, that to you in a way that these different stakeholders can, can look at from their different lens. And Glen, we talked a lot about transformation with you last time. And w I've certainly found that with our, um, you know, continuing relationship with Broadcom, So it's really important that as I say, for a number of different aspects, that you have the right partner, then we can talk to the teams, um, around, you know, could they be doing better component testing? What are the metrics So that's one of the ways that you get to a shared purpose, cause we're all trying to deliver around that um, you know, some of the classics or maybe things like defect, density, or meantime to response. later in the evening, are they delivering, uh, you know, on the weekends as well? teams the context that they need to do their job, uh, in a way that creates the most value for the customers. And, and, and also how do you measure quality kind of following the business had a kind of comfort that, you know, everything was tested together and therefore it's safer. Um, and with agile, there's that kind of, you know, how do we make sure that, you know, if we're doing things quick and we're getting stuff out the door that of, uh, you know, setting up benchmarks, right? And so one of the things that we really need to evolve, um, as an industry is to understand that we need to do in terms of classifying, descend type patterns, um, you know, And one of the things that I think you could see tooling do is The one thing that you believe our audience really needs to be on the lookout for and to put and dev ops are the peanut butter and chocolate to support creative, uh, But I think the last, um, the last thing for me is how do you really instill and having the confidence to actually do more testing in production and go straight to production itself. And if you read through the report, it's all about the I think this was an incredibly valuable fruitful conversation, and we appreciate all of you

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JeffPERSON

0.99+

JeffreyPERSON

0.99+

SergePERSON

0.99+

GlenPERSON

0.99+

Lisa MartinPERSON

0.99+

Jeffrey HammondPERSON

0.99+

Serge LucioPERSON

0.99+

AppleORGANIZATION

0.99+

Jeffery HammondPERSON

0.99+

GlennPERSON

0.99+

sixQUANTITY

0.99+

26QUANTITY

0.99+

Glenn MartinPERSON

0.99+

50 minutesQUANTITY

0.99+

MicrosoftORGANIZATION

0.99+

LisaPERSON

0.99+

BroadcomORGANIZATION

0.99+

Jeff HammondPERSON

0.99+

tensQUANTITY

0.99+

six monthsQUANTITY

0.99+

2021DATE

0.99+

BenPERSON

0.99+

10 yearQUANTITY

0.99+

UKLOCATION

0.99+

two hoursQUANTITY

0.99+

15 yearQUANTITY

0.99+

sevenQUANTITY

0.99+

9:00 PMDATE

0.99+

two hourQUANTITY

0.99+

14 stacksQUANTITY

0.99+

twoQUANTITY

0.99+

next yearDATE

0.99+

GlynnPERSON

0.99+

two dayQUANTITY

0.99+

MartinPERSON

0.99+

Glynn MartinPERSON

0.99+

KirstenPERSON

0.99+

todayDATE

0.99+

SRE ValleyORGANIZATION

0.99+

five o'clockDATE

0.99+

BothQUANTITY

0.99+

2020DATE

0.99+

millionsQUANTITY

0.99+

second aspectQUANTITY

0.99+

Glen JeffPERSON

0.99+

threeQUANTITY

0.99+

14QUANTITY

0.99+

75%QUANTITY

0.99+

three weeksQUANTITY

0.99+

Amanda silverPERSON

0.99+

oneQUANTITY

0.99+

seven teamsQUANTITY

0.99+

tens of thousandsQUANTITY

0.99+

last yearDATE

0.99+

Serge Lucio, Broadcom | BizOps Manifesto Unveiled 2020


 

>>from around the globe. It's the Cube with digital coverage of biz ops Manifesto unveiled Brought to you by Biz Ops Coalition >>Hey, welcome back, everybody. Jeffrey here with the Q. Come to you from our Palo Alto studios today for a big big reveal. We're excited to be here. It's the biz. Opps manifesto, unveiling things been in the works for a while and we're excited. Have our next guest one of the really the powers behind this whole effort. And he's joining us from Boston. It's surge Lucio, the vice president and general manager Enterprise software division that Broadcom Serge, Great to see you. >>Good to see. Oh, absolutely. So you've been >>in this business for a very long time? You've seen a lot of changes in technology. What is the biz Ops manifesto? What is this coalition all about? Why do we need this today in in 2020? >>Yeah, so? So I've been in this business for close to 25 years, writes about 25 years ago, the agile manifesto was created, and the goal of the actual manifesto was was really to address the uncertainty around software development and the inability to predict the effort to build software. And if you if you roll that kind of 20 years later and if you look at the current state of the industry, the Product Project Management Institute estimates that we're wasting about a million dollars every 20 seconds in digital transformation initiatives that do not deliver on business results. In fact, we we recently served, uh, the number of executives in partnership with Harvard Business Review and 77% off. Those executives think that one of the key challenges that they have is really at the collaboration between business and I t. And that that's been kind of a case for almost 20 years now. Eso the key challenge we're faced with is really that we need a new approach. And many of the players in the industry, including ourselves, have been using different terms. Right? Some are. We are talking about value stream management. Some are talking about software delivery management. If you look at the site reliability engineering movement, in many ways it embodies a lot of this kind of concepts and principles. So we believe that it became really imperative for us to crystallize around kind of one concept and so In many ways, the Bezos concept and the bazaars manifesto are out, bringing together a number of ideas which have been emerging in the last five years or so and defining the key values and principles to finally helped these organizations truly transform and become digital businesses. And so the hope is that by joining our forces and defining kind of key principles and values, we can help kind of the industry not just by, you know, providing them with support, but also tools and consulting that is required for them to truly achieve that kind of transformation, that everybody's >>right, right? So co vid Now we're six months into it, approximately seven months into it. Um, a lot of pain, a lot of bad stuff still happening. We've got a ways to go. But one of the things that on the positive side, right and you've seen all the memes and social media is a driver of digital transformation and a driver of change. Because we have this light switch moment in the middle of March and there was no more planning, there was no more conversation. You suddenly got remote. Workforce is everybody's working from home, and you gotta go, right, So the reliance on these tools increases dramatically. But I'm curious, you know, kind of short of the beginnings of this effort and short of kind of covert, which, you know, came along unexpectedly. I mean, what were those inhibitors? Because we've been making software for a very long time. Write the software development community has has adopted kind of rapid change and and iterative delivery and and sprints what was holding back the connection with the business side to make sure that those investments were properly aligned with outcomes. >>Well, so So that you have to understand that I ts is kind of its own silos. And traditionally it has been treated as a cost center within large organizations and not as a value center. And so as a result, kind of the traditional dynamic between the I t. And the business is basically one of a kind of supplier to to kind of a business on. Do you know if you could go back? Thio? I think Elon Musk a few years ago, basically, at these concepts, off the machines to build the machines and you went as far as saying that the machines or the production line is actually the product, so meaning that the core of the innovation is really about building kind of the engine to deliver on the value. And so, in many ways, way have missed on this shift from, um, kind of I t becoming this kind of value center within the enterprises and any told about culture now culture is is the sum total of behaviors and the realities that if you look at the i t, especially in the last decade with the agile with develops with hybrid infrastructures, it's it's way more volatile today than it was 10 years ago. And so the when you start to look at the velocity of the data, the volume of data, the variety of data to analyze kind of the system, um, it's very challenging for I t. To actually even understand and optimize its own processes, let alone to actually include business as kind of an integral part of kind of a delivery chain. And so it's both kind of a combination off culture which is required a za well as tools, right to be able to start to bring together all these data together and then given the volume variety velocity of the data. We have to apply some core technologies which have only really, truly emerging last 5 to 10 years around machine learning and knowledge. And so it's really kind of a combination of those freaks which are coming together today. Truly, organizations get to the next level, >>right? Right. So let's talk about the manifesto. Let's talk about the coalition, the Biz Ops Coalition. I just like that you put down these really simple you know, kind of straightforward core values. You guys have four core values that you're highlighting, you know, business outcomes over individual projects and outputs, trust and collaboration over side load teams and organizations, data driven decisions. What you just talked about, you know, over opinions and judgment on learned, responded Pivot. I mean, surgery sounds like pretty basic stuff, right? I mean, aren't isn't everyone working to these values already? And I think you touched on it on culture, right? Trust and collaboration, data driven decisions. I mean, these air fundamental ways that people must run their business today or the person that's across the street that's doing it is gonna knock him right off the block. >>Yeah, so that's very true. But so I'll mention the novel survey. We need, uh, think about six months ago and twist in partnership with an industry analyst, and we serve it again. The number of 80 executives to understand how many were tracking business outcomes somebody you have, the software executives I T executives were tracking business outcomes, and the there were. Less than 15% of these executives were actually tracking the outcomes of the software delivery. And you see that every day, right? So in my own teams, for instance, we've bean adopting a lot of these core principles in the last year or so, and we've uncovered that 16% of our resource is we're basically aligned around initiatives which were not strategic for us. I take, you know, another example. For instance, one of our customers in the airline industry uncovered, for instance, that a number of that they had software issues that led to people searching for flights and not returning any kind of availability. And yet, you know, the I T teams whether its operations software involvement were completely oblivious to that because they were completely blindsided to it. And so the connectivity between the in words metrics that Turkey is using, whether it's database I, time cycle, time or whatever metric we use in I t are typically completely divorced from the business metrics. And so at its core, it's really about starting to align the business metrics with with the the software delivered change. Right, this, uh, this system, which is really a core differentiator for these organizations. It's about connecting those two things and and starting Thio infuse some of the actual culture and principles. Um, that's emerged from the software side into the business side. Of course, the lien movement and over movements have started to change some of these dynamics on the business side. And and so I think this thesis is the moment where we were starting to see kind of the imperative to transform. Now Cuvee the obviously has been a key driver for that. The the technology is right to start to be able to leave data together and really kind of also the cultural shifts through agile fruit develops through the SRE movement, fueling business transformation. All of these things are coming together and that are really creating kind of conditions. For the Bezos Manifesto to exist. >>So, uh, Clayton Christensen, great hard professor innovator's dilemma might still my all time favorite business books, you know, talks about how difficult it is for in comments to react to to disruptive change, right, because they're always working on incremental change because that's what their customers are asking for. And there's a good our ally when you talk about, you know, companies not measuring the right thing. I mean, clearly, I t has some portion of their budget that has to go to keeping the lights on, right, that that's always the case. But hopefully that's a an ever decreasing percentage of their total activity. So, you know what should people be measuring? I mean, what are kind of the new metrics? Um, in biz ops that drive people to be looking at the right things, measuring the right things and subsequently making the right decisions investment decisions on whether they should do, you know, move Project a along or Project B. >>So there are really two things, right? So So I think what you are talking about this portfolio management, investment management, right and which, which is a key challenge, right in my own experience, right driving strategy or large scale kind of software organization for years. It's very difficult to even get kind of a base data as to who is doing what. Uh, I mean, some of our largest customers were engaged with right now are simply trying to get a very simple answer, which is how many people do I have, and that specific initiative at any point in time and just tracking that information is extremely difficult. So and and again, back to Product Project Management Institute, they have estimated that on average, I two organizations have anywhere between 10 to 20% of their resource is focused on initiatives which are not strategically aligned. So so that's one dimensional portfolio management. I think the key aspect, though that we are we're really keen on is really around kind of the alignment of the business metrics to the ICTY metrics eso I'll use kind of two simple examples, right and my background is around quality and I have always believed that fitness for purpose is really kind of a key, um, a philosophy, if you will. And so if you start to think about quality is fitness for purpose, you start to look at it from a customer point of view, right? And fitness for purpose for core banking application or mobile application are different, right? So the definition of a business value that you're trying to achieve is different on DSO the And yet if you look at our I t operations are operating there are using kind of the same type of kind of inward metrics like a database off time or a cycle time or what is my point? Velocity, right? And s o the challenge really is this inward facing metrics that the I t. Is using which are divorced from ultimately the outcome. And so, you know, if I'm if I'm trying to build a poor banking application, my core metric is likely going to be up time, right? If I'm if I'm trying to build a mobile application or maybe a social mobile app, it's probably going to be engagement. And so what you want is for everybody across I t to look at these metric and what part of the metrics withing the software delivery chain which ultimately contribute to that business metric in some cases, cycle time, maybe completely relevant. Right again. My core banking up. Maybe I don't care about cycle time. And so it's really about aligning those metrics and be able to start to differentiate. Um, the key challenge you mentioned around the around the disruption that we see is or the investors is. Dilemma now is really around the fact that many idea organizations are essentially applying the same approaches for innovation right for basically scrap work, Then they would apply to kind of over more traditional projects. And so, you know, there's been a lot of talk about to speed I t. And yes, it exists. But in reality are are really organizations truly differentiating out of the operate their their projects and products based on the outcomes that they're trying to achieve? And and this is really where bizarre is trying to affect. >>I love that. You know, again, it doesn't seem like brain surgery, but focus on the outcomes right and and it's horses for courses. As you said this project, you know what you're measuring and how you define success isn't necessarily the same as it is on this other project. So let's talk about some of the principles we talked about the values, but you know I think it's interesting that that that the bishops coalition, you know, just basically took the time to write these things down, and they don't seem all that super insightful. But I guess you just got to get him down and have them on paper and have it in front of your face. But I want to talk about, you know, one of the key ones which you just talked about, which is changing requirements right and working in a dynamic situation, which is really what's driven. You know this, the software to change and software development because, you know, if you're in a game app and your competitors comes out with a new blue sword, you've got to come out with a new blue swords. So whether you have that on your compound wall, we're not. So it's It's really this embracing of the speed of change and and and making that you know the rule, not the exception. I think that's a phenomenon. And the other one you talked about his data right and that today's organizations generate more data than humans can process. So informed decisions must be generated by machine learning and ai and you know and the big data thing with a dupe you know, started years ago. But we are seeing more and more that people are finally figuring it out that it's not just big data on It's not even generic machine learning artificial intelligence. But it's applying those particular data sets and that particular types of algorithms to a specific problem to your point, to try to actually reach an objective. Whether that's, you know, increasing the your average ticket or, you know, increasing your check out rate with with with shopping carts that don't get left behind and these types of things. So it's a really different way to think about the world in the good old days, probably when you got started, when we had big Giant you know, M R D s and P R. D s and sat down and coded for two years and and came out with a product release and hopefully not too many patches subsequently to that. Yeah, >>it's interesting right again, back to one of these service that we did with about 600 the ICTY executives and we we purposely designed those questions to be pretty open. Andi and one of them was really wrong requirements, and it was really around. Kind of. What is the best approach? What is your preferred approach towards requirements? And if I remember correctly, Over 80% of the ICTY executives said that the best approach their preferred approach is for requires to be completely defined before self for the bombing starts, let me pause there. We're 20 years after the agile manifesto, right, and for 80% of these idea executives to basically claimed that the best approach is for requires to be fully baked before solved before software development starts basically shows that we still have a very major issue again. Our apotheosis in working with many organizations is that the key challenges really the boundary between business and I t. Which is still very much contract based. If you look at the business side, they basically are expecting for I t deliver on time on budget, Right? But what is the incentive for I t to actually deliver on the business outcomes, right? How often is I t measured on the business outcomes and not on S L. A or on a budget secretary, and so that that's really the fundamental shift that we need to. We really need to drive up to send industry andi way. Talk about kind of this dis imperative for organizations to operate. That's one. And back to the, you know, various doors still, Um, no. The key difference between these large organization is really kind of a. If you look at the amount of capital investment that they can put into pretty much anything, why are they losing compared Thio? You know, startups. What? Why is it that more than 40% off personal loans today are issued not by your traditional brick and mortar banks, but by start ups? Well, the reason, Yes, it's the traditional culture of doing incremental changes, not disrupting ourselves, which Christenson covered at length. But it's also the inability to really fundamentally change kind of dynamic between business I t and partner, right, to to deliver on a specific business. All >>right, I love that. That's a great That's a great summary and in fact, getting ready for this interview. I saw you mentioning another thing where you know the problem with the agile development is that you're actually now getting mawr silos because you have all these autonomous people working you know, kind of independently. So it's even harder challenge for for the business leaders toe, as you said to know what's actually going on. But But, sir, I want to close um, and talk about the coalition eso clearly These are all great concepts, these air concepts. You want to apply to your business every day. Why the coalition? Why, you know, take these concepts out to a broader audience, including your competition and the broader industry to say, Hey, we as a group need to put a stamp of approval on these concepts. These values these principles. It's >>so first, I think we we want everybody to realize that we are all talking about the same things, the same concepts e think we were all from our own different vantage point, realizing that things have to change and again back to you know, whether it's value stream management or site reliability, engineering or biz Opps we're all kind of using slightly different languages on DSO. I think one of the important aspects of these apps is for us, all of us, whether we're talking about consulting actual transformation experts, whether we're talking about vendors right to provide sort of tools and technologies or these larger enterprises to transform for all of us to basically have kind of a reference that lets us speak around kind of in a much more consistent way. The second aspect is for to me is for these concepts to start to be embraced not just by us or trying or vendors, um, system integrators, consulting firms, educators, spot leaders but also for some of our own customers to start to become evangelists of their own in the industry. So we are. Our objective with the coalition is to be pretty, pretty broad, Um, and our hope is by by starting to basically educate our joint customers or our partners that we can start to really foster disbelievers and and start to really change some of dynamics. So we're very pleased that if you look at what some of the companies which have joined the the manifesto eso, we have vendors such as stashed up or advance or pager duty, for instance, or even planned you one of my direct competitors but also fought leaders like Tom Davenport or or Cap Gemini or smaller firms like Business Agility Institute or Agility Elf on DSO our goal really is to start to bring together. For three years, people have bean LP. Large organizations do digital transformation. Vendors were providing the technologies that many of these organizations used to deliver all these digital transformation and for all of us to start to provide the kind of education, support and tools that the industry need. >>That's great search. And, you know, congratulations to you and the team. I know this has been going on for a while putting all this together, getting people to sign onto the manifesto of putting the coalition together and finally today getting to unveil it to the world in a little bit more of a public opportunity. So again, you know, really good values, really simple principles, something that that shouldn't have to be written down. But it's nice because it is. And now you can print it out and stick it on your wall. So thank you for for sharing the story. And again, congrats to you on the team. >>Thank you. Thank you. Appreciate it. >>My pleasure. Alright, He surge If you wanna learn more about the bizarre manifesto goto biz Opps manifesto dot or greed it and you can sign it and you can stay here from or coverage on. The Cube of the bizarre manifesto unveiled. Thanks for watching. See you next time.

Published Date : Oct 16 2020

SUMMARY :

It's the Cube with digital Have our next guest one of the really the powers behind this whole effort. Good to see. What is the biz Ops manifesto? And many of the players in the industry, including ourselves, you know, kind of short of the beginnings of this effort and short of kind of covert, And so the when you start to look at the velocity of And I think you touched on it on culture, And yet, you know, the I T teams whether its operations software involvement And there's a good our ally when you talk about, you know, keen on is really around kind of the alignment of the business metrics to of the speed of change and and and making that you know the rule, and so that that's really the fundamental shift that we need to. So it's even harder challenge for for the business leaders toe, as you said to know what's actually going on. to change and again back to you know, whether it's value stream management or And again, congrats to you on the team. Thank you. manifesto dot or greed it and you can sign it and you can stay here from or coverage

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
BostonLOCATION

0.99+

JeffreyPERSON

0.99+

two yearsQUANTITY

0.99+

80%QUANTITY

0.99+

Clayton ChristensenPERSON

0.99+

16%QUANTITY

0.99+

Agility ElfORGANIZATION

0.99+

2020DATE

0.99+

Business Agility InstituteORGANIZATION

0.99+

Serge LucioPERSON

0.99+

80 executivesQUANTITY

0.99+

20 yearsQUANTITY

0.99+

LucioPERSON

0.99+

six monthsQUANTITY

0.99+

Product Project Management InstituteORGANIZATION

0.99+

three yearsQUANTITY

0.99+

oneQUANTITY

0.99+

more than 40%QUANTITY

0.99+

ICTYORGANIZATION

0.99+

Product Project Management InstituteORGANIZATION

0.99+

77%QUANTITY

0.99+

two thingsQUANTITY

0.99+

Cap GeminiORGANIZATION

0.99+

AndiPERSON

0.99+

Palo AltoLOCATION

0.99+

second aspectQUANTITY

0.99+

10QUANTITY

0.99+

Elon MuskPERSON

0.99+

last yearDATE

0.98+

todayDATE

0.98+

10 years agoDATE

0.98+

two organizationsQUANTITY

0.98+

20 years laterDATE

0.98+

BezosPERSON

0.98+

Biz Ops CoalitionORGANIZATION

0.97+

Less than 15%QUANTITY

0.97+

middle of MarchDATE

0.97+

firstQUANTITY

0.97+

almost 20 yearsQUANTITY

0.97+

20%QUANTITY

0.96+

ChristensonPERSON

0.95+

bothQUANTITY

0.95+

approximately seven monthsQUANTITY

0.95+

Over 80%QUANTITY

0.94+

about 600QUANTITY

0.94+

two simple examplesQUANTITY

0.94+

agileTITLE

0.92+

10 yearsQUANTITY

0.91+

years agoDATE

0.91+

DSOORGANIZATION

0.91+

20 secondsQUANTITY

0.89+

about 25 years agoDATE

0.89+

BroadcomORGANIZATION

0.89+

few years agoDATE

0.89+

one conceptQUANTITY

0.87+

four core valuesQUANTITY

0.86+

25 yearsQUANTITY

0.86+

six months agoDATE

0.84+

about a million dollarsQUANTITY

0.82+

HarvardORGANIZATION

0.82+

last five yearsDATE

0.79+

BizOpsORGANIZATION

0.78+

ThioPERSON

0.78+

SergePERSON

0.78+

5QUANTITY

0.77+

last decadeDATE

0.76+

DavenportPERSON

0.7+

Bezos ManifestoTITLE

0.69+

TomORGANIZATION

0.68+

OpsTITLE

0.66+

PPERSON

0.65+

BusinessTITLE

0.64+

CuveePERSON

0.64+

aboutDATE

0.63+

M R DORGANIZATION

0.62+

Greg Lotko, Broadcom Inc. | IBM Think 2020


 

Narrator: From the Cube studios in Palo Alto and Boston, (upbeat intro music) it's theCUBE! Covering IBM Think. Brought to you by IBM. >> Hi, everybody, we're back. This is Dave Vellante and you're watching theCUBE's coverage of the IBM Think 2020 digital event experience, wall to wall coverage, of course in the remote Cube studios in Palo Alto and Boston. Greg Lotko is here. He's with Broadcom. He's a senior vice president and general manager of the Broadcom mainframe division. Greg, great to see you. Thanks for coming on. >> Hey good seeing you too, happy to be here. >> Hey, lets talk Z. You know, I got to say when Broadcom made a nearly 19 billion dollar acquisition of CA, many people, myself included said, "Huh? I don't really get it." But as you start to see what's happening, the massive CA install base and the cross selling opportunities that have come to Broadcom, you start to connect the dots and say, "Ah, maybe this does make some sense." But you know, how's it going? How's the acquisition been? It's been, you know, what now, two years since that move? >> Yeah we're coming up on two years. I think it kind of shocked the world, right? I mean, there is a lot of value there and the customers that have been using the mainframe and running their core businesses for many, many years, they knew this, right? So Broadcom came in and said, "Hey, you know, I don't think this is the cash cow "that others maybe have been treating it as." You know, we absolutely believed with some investment that you could actually drive greater value to customers and you know, what a novel concept right? You know, expand expense, invest, drive greater value, and that would be the way you'd expand revenue and profit. >> Yeah, I mean I think generally, the mainframe market is misunderstood. It obviously goes in cycles. I did a report, you know, a couple of months ago on really focusing on Z15X, it was last summer. And how historically, IBM performance overall as a company is really driven still by mainframe cycles because it all still drags so much software and services and so we're in the midst of a Z15 tailwind and so, of course, the COVID changes everything. But nonetheless it's a good business. IBM's a dominant player in that business. Customers continue to buy mainframes because it just works. It's too risky to rip 'em out. People say, "Oh, why don't you get rid of the mainframe?" No way customers are going to do that. It's running their business. So it's a fabulous business if you have a play there and clearly... (poor internet connection interrupts Dave speaking) >> Yeah, and if you think about those cycles that's largely driven by the hardware, right? As each generation comes out, and if you look at traditional pricing metrics that really look at using that capacity, or even using full capacity, that's what caused this cyclicality with the software as well but, you know, there's a lot of changes even in that space. I mean with us, with mainframe consumption licensing from Broadcom, with IBM doing tailor fit pricing, you know, the idea that you can have that headroom on the hardware and then pay as you go, pay as you grow. I think that actually will smooth out and remove some of that cyclicality from the software space. And as you said, correctly, you look at the COVID stuff going on, I mean there's an awful lot of transactions going on online. People are obviously checking their financials with the economics going on. The shipping companies are booming with what they have to do, so that's actually driving transactions up as well, to use that capacity that's in the boxes. >> Yeah, and financial services is actually in really good... I know that the stocks have been hit, but the liquidity in the banks is very, very strong because of the 2009 crisis. So the fiscal policy sort of, you know, dictated that or, you know, the public policy dictated that. And the banks are obviously huge consumers of mainframe. >> Sure. >> One of the things that IBM did years ago was to sort of embrace Linux, was one of its first moves to open up the mainframe. But it's much more than just Linux. I wonder if you could talk about sort of your point of view on open meets mainframe. >> Yeah, so open is way more than just Linux, right? I mean Linux is good, running around the mainframe. I mean that's absolutely an open paradigm from the operating system, but open is also about opening up the API's, opening up the connectivities so that it's easier to interact with the platform. And, you know, sometimes people think open is just about dealing with open source. Certainly we've made a lot of investments there. We contributed the command line interface and actually a little more than 50% of the original contribution to the Zowe project, under the OMP, the Open Mainframe Project. So that was about allowing open source technologies that interact with distributed and cloud technologies to now interact with that mainframe. So it's not just the open source technologies, but opening up the API's, so you can then connect across technologies that are on the platform or off platform. >> So what about the developer community? I mean there's obviously a lot of talk in the industry about DevOps. How does DevOps fit into the mainframe world? What about innovations like Agile? And sort of beyond DevOps, if you will. Can you comment on that? >> Yeah, absolutely, I mean you can bring all those paradigms, all those capabilities to the mainframe now with opening up those API's. So I mean we had a large European retail bank that has actually used the Git Bridge that we work with providing, you know, through Zowe, to connect into Endeavor, so they could leverage all the investments they had made in that existing technology over the years, but actually use the same kind of CICD pipeline, the same interaction that they do across distributed platforms and mainframe together, and open up that experience across their development community. What that really means is you're using the same concepts, the same tools that they maybe became comfortable with in university or on different platforms, to then interact with the mainframe and it's not that you're doing anything that, you know, takes away from core capabilities of the mainframe. You're still leveraging the stability, the resiliency, the through put, the service ability. But you're pressing down on it and interacting with it just like you do with other platforms. So it's really cool. And that goes beyond Linux, right? Because you're interacting with capabilities and technologies that are on the mainframe and ZOS environment. >> Yeah, and the hardened security as well, >> Absolutely. >> is another key aspect of the mainframe. Let's talk about cloud. A lot of people talk about cloud, cloud first, multicloud. Where does the mainframe fit in the cloud world? >> So, there's a lot of definitions of cloud out there, right? I mean people will talk about private cloud, public cloud, hybrid cloud across multiple private clouds. They'll talk about, you know, this multicloud. We actually talk about it a little differently. We think about the customer's cloud environment. You know, our institution that we're dealing with, say it's a financial institution, to their end customers, their cloud is however you interact. And you think about it. If you're checking an account balance, if you're depositing in a check, if you're doing any of these interactions, you're probably picking up a mobile device or a PC. You're dealing with an edge server, you're going back into distributed servers, and you're eventually interacting with the mainframe and then that's got to come all the way back out to you. That is our customer's cloud. So we talk about their cloud environment, and you have to think about this paradigm of allowing the mainframe to connect through and to all of that while you hit it, preserving the security. So we think of cloud as being much more expansive and the mainframe is an integral part of that, absolutely. >> Yeah, and I've seen some of your discussions where you've talked about and sort of laid out, look, you know, the mainframe sits behind all this other infrastructure that, you know, ultimately the consumer on his or her mobile phone, you know, goes through a gateway, goes through, you know, some kind of site to buy something. But, you know, ends up ultimately doing a transaction and that transaction you want to be, you know, secure. You want it to be accurate. And then how does that happen? The majority of the word's transactions are running on some kind of, you know, IBM mainframe somewhere, in someway touches that transaction. You know, as the world gets more complex, that mainframe is... I called it sort of the hardened, you know, sort of back end. And that has to evolve to be able to adapt to the changes at the front end. And that's really kind of what's happening, whether it's cloud, whether it's mobile, whether it's, you know, Linux, and other open source technology. >> Right, it's fabulous that the mainframe has, you know, IO rates and throughput that no other platform can match, but if you can't connect that to the transactions that the customer is driving to it, then you're not leveraging the value, right? So you really have to think about it from a perspective of how do you open up everything you possibly can on the mainframe while preserving that security? >> I want to end with just talking about the Broadcom portfolio. When you hit the Broadcom mainframe site, it's actually quite mind boggling, the dozens and dozens of services and software capabilities that you provide. How would you describe that portfolio and what do you see as the vision for that portfolio going forward? >> Yeah, so when people normally say portfolio, they're thinking software products, and we have hundreds of software products. But we're looking at our portfolio as more than just the software. Sometimes people talk about, hey let me just talk to you about my latest and greatest product. One of the things we were really afforded the opportunity to do with Broadcom acquiring us was to reinvest, to double down on core products that customers have had for many years and we know that they want to be able to count on for many years to come. But the other really important thing we believe about driving value to our customers was those offerings and capabilities that you put around that, you know? Think about the idea of if you want to migrate off of a competitive product, or if you want to adopt an additional product that have the ability to tie these together. Often in our customer's shops, they don't have all the skills that they need or they just don't have the capacity to do it. So we've been investing in partnership. You know, we kept our services business from, at least the resources, the people, from CA. We rolled them directly into the division and we're investing them in true partnership, working side by side with our customers to help them deploy these capabilities, get up and running, and be successful. And we believe that that's the value of a true partnership. You invest side by side to have them be successful with the software and the capabilities and their operation. >> Well, like I said, it caught a lot of people, myself included, by surprise that acquisition. It was a big number, but you could see it, you know, Broadcom's performance post. You know, the July 2018 acquisition, done quite well. Obviously COVID has affected, you know, much of the market, but it seems to be paying off great. Thanks so much for coming to theCUBE and sharing your insights, and best of luck going forward. Stay safe. >> Pleasure being here. Everybody here, yourself, and everybody out there, be safe, be well. Take care. >> And thank you for everybody for watching. This is theCUBE's coverage of the IBM Think 2020 digital event experience. We'll be right back, right after this short break. You're watching theCUBE. (upbeat outro music)

Published Date : May 5 2020

SUMMARY :

Brought to you by IBM. of the Broadcom mainframe division. Hey good seeing you and the cross selling opportunities and you know, what a novel concept right? I did a report, you know, with the software as well but, you know, I know that the stocks have been hit, One of the things that of the original contribution And sort of beyond DevOps, if you will. and technologies that are on the mainframe is another key aspect of the mainframe. of allowing the mainframe to connect and that transaction you and what do you see as the vision and capabilities that you you know, much of the market, and everybody out there, of the IBM Think 2020

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Greg LotkoPERSON

0.99+

BroadcomORGANIZATION

0.99+

IBMORGANIZATION

0.99+

GregPERSON

0.99+

July 2018DATE

0.99+

Palo AltoLOCATION

0.99+

BostonLOCATION

0.99+

two yearsQUANTITY

0.99+

LinuxTITLE

0.99+

DavePERSON

0.99+

hundredsQUANTITY

0.99+

ZOSTITLE

0.99+

CA.LOCATION

0.99+

CubeORGANIZATION

0.99+

dozensQUANTITY

0.98+

each generationQUANTITY

0.98+

COVIDORGANIZATION

0.98+

CALOCATION

0.98+

last summerDATE

0.98+

IBM Think 2020EVENT

0.97+

Broadcom Inc.ORGANIZATION

0.97+

Z15COMMERCIAL_ITEM

0.97+

DevOpsTITLE

0.97+

Z15XCOMMERCIAL_ITEM

0.97+

oneQUANTITY

0.96+

theCUBEORGANIZATION

0.96+

OneQUANTITY

0.95+

nearly 19 billion dollarQUANTITY

0.93+

more than 50%QUANTITY

0.92+

AgileTITLE

0.91+

first movesQUANTITY

0.89+

OMPTITLE

0.89+

Git BridgeTITLE

0.89+

couple of months agoDATE

0.89+

ZoweORGANIZATION

0.87+

years agoDATE

0.82+

COVIDOTHER

0.67+

softwareQUANTITY

0.67+

2009 crisisEVENT

0.65+

ZoweTITLE

0.59+

EuropeanOTHER

0.58+

ThinkCOMMERCIAL_ITEM

0.52+

ProjectTITLE

0.46+

COVIDTITLE

0.35+

Clayton Donley, Broadcom Inc. & Greg Lotko, Broadcom Inc. | IBM Think 2019


 

>> Live from San Francisco. It's the cube covering IBM thing twenty nineteen brought to you by IBM. >> Okay. Welcome back, everyone. We're live here in San Francisco for the cubes. Exclusive coverage of IBM. Think twenty nineteen for a student in our next two guests are great. Glad Co senior vice president, general manager of the mainframe division Broadcom Only with CIA and acquired Clayton Donley, head of security and immigration. Broadcom, both formerly of CIA. Big acquisition. Big value guys. Welcome to the Cube. Good to see you. >> Thanks a lot for having us. >> So we just talked before we came on camera here, IBM. Think a lot of cars here and software a icloud systems and software working together. Kind of thesis of the Broadcom. See a acquisition that that murder, that move was a big one. A lot of analysts liked it. Your thoughts. Now that's playing out here. Yeah, I think it was >> really interesting. If you look at what Broadcom has gone after in the marketplace, is they're They're not looking for the flash in the pan or trying to chase the next new thing. They're looking for core businesses or components. Software products that they believe are, you know, have real staying power and will be around for a decade or Mohr into the future, and then they want to invest in those and nurture they really want to be. Even if it's in a new space, they want to invest in something where they'll be number one or two in the marketplace. >> It's interesting. You're looking at the mark with cloud, you see scale. You see data, all this stuff we talked about for years and years, but it really comes back down. A systems and software working together clouds one big, complex distributed system. So all from distractions could maybe tract away. Those complexities isn't doing its end of the day. It's software plus large scale systems. This kind of was playing out. This is in the wheelhouse of what you guys do it. Can you guys had some color that trend and what needs to happen to make it more and more viable? Mohr performance easier to use? >> Yeah, I mean, I think that, you know, what we see is that customers are having a lot of problems with individual pieces of software. They're having problems when they put all this together. All right, So if you look at even to your question about Brock, come a moment ago. They sort of came in in price software through through the data center because they're providing everything. May I chips, too? Fibre channel networks and other kinds of things that are running peoples, you know, networks of the very largest scale and what their realizes when you get into the enterprise software level customers have such challenges because, you know, they don't get to cherry pick just cool things or the easy things to go integrate. They've got everything from mainframe client server to interior to whatever they picked up over the years. That stuff has to work together seamlessly to get kind of value. That makes sense. And that's why I think when you start looking at, you kind of are focus. It's on helping customers bring that together, get that value. >> It's all about the hybrid environment, because what we're getting at here is I got to make the legacy work with the new. But the beautiful thing about cloud native of some of these new micro services and containers is you don't have to kill the old Bring in the new. There's a great abstraction around software now that's making them work together. But yet the new stuff and work really great. This is the kind of the new architecture. Your thoughts on this? >> Yeah. I mean, obviously you're here a lot about that here it think right there, talking all about hybrid I T or multi cloud. I think there's a stat out there that, you know, seventy five percent of the large enterprises in the world say they'll be have multi cloud or hybrid environments by twenty twenty. I think they all have it today, Right? You think mobile mainframe, right? There's not workloads that work in isolation. You pick up your phone and you go to check your balance. It's gonna kick off a transaction that's going to go toe edge device or an edge server that's going to go through a network and maybe hit another server. And eventually it's going to go back to a mainframe to check the balance or to transfer funds or something like that. So they're having to deal with it already today. And and there's two kind of sides of the coin. You want that interaction for the developers to be common across those platforms, yet you want them to be ableto leverage on the power the strength, the security of the underlying platform without having to know all the gory details, which is, you know why. It makes a lot of sense for us mainframe and distributed. If you look across what is the CIA Technologies portfolio that Broadcom acquired, A lot of the capabilities that we have are the same capabilities that work across those environments so that the enterprise customers can interact with it one way. >> Clayton it. When I hear this environment, there's certain things that I need to worry about everywhere. It's, you know, my data. How do I protect my data? And, of course, security is one of those areas where there are lots of different environments, and unfortunately, there's lots of different considerations. Depending on which clouds I have which environment, you know, mainframe X eighty six power. You all have different considerations. The mantra I've heard that that seems to resonate is security is everyone's responsibility, you know, up and down the stack from the chip level all the way through the application. So explain where you know CIA. Now Broadcom fits in tow this picture and lives in this, you know, even more. Header. Genus world and by the way, totally agree. Multi cloud is what customers have today. Yeah, >> I mean, if you if you look at it, I mean, customers. They're building out, you say new mobile applications and and, you know, building them, his services in the cloud and so forth. But what we're finding is that the transactions and other kinds of things, they're still happening in some of these other environments. Maybe those environments still live in a data center. Maybe they've been moved to a private cloud. Maybe they're in a public cloud. Writing on my ass or some other kind of bank is a service. What we're finding is that each of you that transaction has to be protected. The guy that gives you the ability to call that transaction from a mobile app needs to be protected. All of these things need to be protected. But then you need to be able to orchestrate that. Make sure that you're laying down those based protecting those bits. Same way every time testing them the same way every time. And I think that if you look at what we're looking at and our values really in digital infrastructure management, right, but you're you're bringing all these pieces in cloud, multi cloud mainframe, All of these environments you have, You have a way to operate it, Manage it as well as for security. >> Yeah. So, Greg, you know, when I look back my career, there's something that's been repeating a lot. It's I go back to find it here, go back to the nineties. It was like, Okay, what was some of the reasons why the excess piece failed? It was like, Well, it was networking security, you know, in cloud happened. It was, well, security and management, howto like, you know, figure out some of these management of a hetero genius environment has typically been a downfall in it. It's something that we struggled at as an industry. So why will now be different? How how is the industry helping to solve that issue? And, you know, simple is something that we keep trying to hear. But, you know, actually, achieving it is pretty >> challenge. I think it's fundamentally realizing that the core large enterprises in the world today are using mainframes, right, and some of them have tried to migrate. Something's off, and it's not about complexity of migrating it off. It's about whether or not you can land somewhere that has that same security throughput, resiliency, all that kind of stuff. But if you recognize that you're gonna have these systems interacting and you recognize that we have to make it easier for people whether they're coming out of university or they're coming from a background of distributed or open source, you want to make it easier to interact. It's what's informing everything we do in our strategy and mainframe. So we talk about open, frictionless and optimized. So it's all about the idea of that mainframe system and those processes that were running, whether it's Dev ops, whether it's, you know, databases and tools, whatever we're doing, the security, the analytics that we're doing that has to be open and be able to interact with other people's tools as well as other people's platforms. Frictionless is all about the idea of you got to make it easy to do that interaction somebody that comes at this from a non mainframe context that maybe knows I calm the cartoon characters of open source. You know, get your gold for Jenkins or whatever, right that they can use that to interact with the mainframe and leverage it, and then you want to be optimized. You want to make it for the real deep technical professional to get the most out of it and focus where the expertise is, or for the novice cannot really have to need training wheels, but to be able to ride that bike right away and perform the things. So all these things you can see how their kind of informed and setting that tone of thinking about, ah, hybrid environment and connecting that mainframe in, across, not sitting as an island unto itself, >> I mean, you bring up a good point. A couple points, One is distributed. Computing has been around for a while. Mainframes. I mean, I'm old enough to remember that I was private client server way. We see the point of the main finger. >> You're gonna be >> dead soon. Most of all, kind of went away that, but it never died, right? We all know, but there's a renaissance. Rumors of my death are greatly, exactly. A lot of them didn't go down, but they were, but they were really died. But but here's the thing. There's a renaissance and mainframe because of cloud computing and cloud operations. If everything's cloud operationalized, then essentially you have a big one. Big distribute computer call resource and edges that are subsystem. So the notion of buying a mainframe isn't a platform decision. It's a right tool. The right job kind of decision, so people are not looking at mainframes was a bad decision. If it fits right, that's not like everyone should buy made friends. But if you need it, the horse power, the question is begs. The question is, why is there a renaissance and mainframes? What's the reason why people are buying them? Is it because it fits into a certain position? Is that certain scale? Is it because they could plug right into the cloud and be a big resource? >> I think there's I think there's also, ah, realization, you know, think about if you're the the newer CEO, our CTO, and you start looking at your state and you realize that you know this mainframe thing thatyou're spending twenty percent of your budget on is actually doing seventy percent of your process that you kind of look at it and you go. We'll work really cost effective. So then you start looking at? Well, where is it most cost effective. And does it make sense to use, Use it there. And then when you could tie it into everything else, when you can can get the same types of security tools and lock it down and locked the interaction down you say, Hey, this might, But this might make sense for me to do it. And I think it just ends up being dollars and cents and then the resiliency, right? I mean, when people aren't having that downtime >> plate, you're going to run your business. You want up time. If you're any commerce, you want high stamp your systems. So it really is the right tool for the world, like a thing for the right job. Is this happening? Give us the update on our people, buying more reason because it's just it's better. >> I think part of it also is, you know, why fix what isn't broken, right? The main friends running there, It's up. It's provided transactions. I think he used to have used to have this impediment to getting access, to need to find some old global guy, you need to find all this other stuff because you had your business, >> Cobol programmers. But now it runs analytics. >> It's like a It's like a foreign language to some people, right? You say Kobol was like, after one Chinese. So what? We've done those We've made it. So you don't have to learn. Cobell. You don't have to learn some specialized thing. You can come in with a prize. You come in with the technology, they teach kids and, you know, elementary school t use Java script and other kinds of things to come in access. So same things that are now in the mean >> it's basically a big iron and the old expression, big horsepower, >> horsepower, high throughput, high resiliency. >> Greg, I heard you talk about things like Dev ops that you fit in this environment. Absolutely. We've attracted. I remember, you know, when you nosy lennix on mainframe rolled out fifteen years ago. You want to do the cool new dock? Er, you know things? Absolutely. But if I look at the death ofthe people, people that are going to pay for this a lot of times they say, Well, I'm used to more that cloud model. How do I get? You know, they moved to an off ex model. We're still early in that trend, but, you know, Dizzy Syria's mainframe. Will it fit into the new modern paradigm? From a CFO standpoint, >> I definitely think so. If you if you look at a lot of the stuff that's going on in the marketplace and even concepts that we're testing with clients today around, you know you can refer to it as consumption based pricing or value based pricing, you know, looking at how much you're actually using and then charging for that with a known, you know, Hey, if I grow my capacity this much, how much am I going to pay or if I go down? I'm not going to be able to redeploy those dollars elsewhere. All those constructs are stuff that we're working with customers today on. So it is very much the idea of a cloud like environment that can either be delivered on creme through you buying your own hardware or, you know there's IBM that as easy cloud, there's folks like in so no its center that have clouds that have mainframe up in them today. >> And the developer environment clearly is going towards infrastructures code, which is the abstraction away? Just programmable infrastructure. They don't care where it was fast, right? Doesn't matter. Does it really matter >> how I look? Way contributed, too. Zoe Bright side, right? That was the command line interface, and everybody was like, Oh, my God, You know, they thought maybe we had some executives that were sitting back that had this brilliant idea. We were actually using out agile methodologies in our development, and we gave in each programming increment. We gave the engineers time to do what they wanted to do. You know, one sprint per cycle. And some of our young developers said, You know what I wish I could use, Get Jenkins and gulp and tie them into Endeavor or these other Dev ops tools or it stops tools. They developed it as an internal use tool for a command line, and we've stumbled over on accent. We said, Oh my God, this is thing We think something we think customers would want. And then, as we got talking with Rocket with IBM, Rocket had a Web interface, IBM at the mediation layer and we said, Holy cow, You know, this is something. If we got together, we contributed. We could really start a renaissance around mainframe, and a lot of people are going What? Why you've got proprietary tools and software. Why would you open up? Because the reality is we want our customers to find it easier to work with the mainframe and look out compete on the differentiation of my underlying lying product, whether it be price or function. But I want my customers to be able to tie in my software with IBM with rockets with rooms out whomever and picked because of where the value is, not because they feel locked in >> its You're going about one >> of the gripes about mainframe, right? People thought they were locked in >> lock and proprietary weird interfaces freshen, You take the friction away >> and that's not that's your father's >> mainframe. That's not today's May in front of that was exactly the old. The only kind of perception, right? We bring Lennox and all these tools and infrastructures. Code is just another resource on the network. Guys. Thanks for the insight. Appreciate Left home My mainframe. My God made my day here. So I'm free clad world final. Give a plug quickly for Broadcom. What, you guys working on. What's the big news here for you guys? Give a quick. >> Hey, I'll tell you for me, Broadcom acquiring the mainframe business is all about investment. And, I mean, we're a software business, So more than ninety percent of my expenses people, if I'm not hiring, I'm full of it. I'm not investing. I'm hiring were posted like crazy. We're hiring, We're expanding to the team, and the idea is all about there's customers have used core products for many years, and they want to count on him for many years to come. Were making those investments, and we're going to continue to invest in the new capabilities dealt, make more efficient, effective on the platform. >> Your thoughts, >> you know, I >> mean, I think that you know that, you know, it's interesting. You look a broad, calm, and a lot of people don't know. You know what's the focus right there? They're not traditionally the software space, and so >> on. They are the >> first thing you well, they are now. And one of the things that we're doing this if you look at our investment rate in R and D in general, it's up there. I mean, world class. If you look at the largest your most successful cloud players forget about, you know, your large cap. Take protect companies to sell in terms of percentage of our percentage of revenue. They spend it R and B. We're far above that. We're at a very high level. We're going to continue to invest in a lot of innovation, you know? Aye, aye. Machine Learning Dev. Ops, of course. You know, curious security is >> a cultural shift. We could see vinyl records. We're gonna come back now. You got mainframe back. How much back can I get a mainframe for? If I want to be the new cool kid on the block, you >> got to go to IBM >> for the hardware. But I could talk to you about yourself or to help you with it. You gotta mean faith for the Cube. Just have one in our house. Thanks, guys. I appreciate it. Thanks. Pleasure. You covered your talking mean freeze and IBM Think software. Lynn Nix, The new World Cloud Data. I I'm John's First Amendment back with more coverage after this short break

Published Date : Feb 12 2019

SUMMARY :

IBM thing twenty nineteen brought to you by IBM. general manager of the mainframe division Broadcom Only with CIA and acquired Clayton Donley, Think a lot of cars here and software a icloud Software products that they believe are, you know, have real staying power and will be around You're looking at the mark with cloud, you see scale. Yeah, I mean, I think that, you know, what we see is that customers are having a lot of problems with you don't have to kill the old Bring in the new. I think there's a stat out there that, you know, and lives in this, you know, even more. And I think that if you look at what we're looking at and our values And, you know, simple is something that we keep trying to hear. of you got to make it easy to do that interaction somebody that comes at this from a non mainframe context I mean, you bring up a good point. But if you need it, the horse power, the question is begs. I think there's I think there's also, ah, realization, you know, think about if you're the So it really is the right tool for the world, like a thing for the right job. to getting access, to need to find some old global guy, you need to find all this other stuff because you had your But now it runs analytics. So you don't have to learn. I remember, you know, when you nosy lennix on mainframe rolled out fifteen for that with a known, you know, Hey, if I grow my capacity this much, And the developer environment clearly is going towards infrastructures code, which is the abstraction away? We gave the engineers time to do What's the big news here for you guys? Hey, I'll tell you for me, Broadcom acquiring the mainframe business is all about mean, I think that you know that, you know, it's interesting. And one of the things that we're doing this if you look at our investment rate in R and D in If I want to be the new cool kid on the block, you But I could talk to you about yourself or to help you with it.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
GregPERSON

0.99+

IBMORGANIZATION

0.99+

twenty percentQUANTITY

0.99+

seventy percentQUANTITY

0.99+

CIAORGANIZATION

0.99+

Lynn NixPERSON

0.99+

BroadcomORGANIZATION

0.99+

San FranciscoLOCATION

0.99+

Clayton DonleyPERSON

0.99+

JohnPERSON

0.99+

JavaTITLE

0.99+

First AmendmentQUANTITY

0.99+

twentyQUANTITY

0.99+

twoQUANTITY

0.99+

todayDATE

0.99+

Greg LotkoPERSON

0.99+

Broadcom Inc.ORGANIZATION

0.99+

more than ninety percentQUANTITY

0.99+

oneQUANTITY

0.99+

fifteen years agoDATE

0.99+

eachQUANTITY

0.99+

bothQUANTITY

0.98+

seventy five percentQUANTITY

0.98+

LennoxORGANIZATION

0.98+

KobolPERSON

0.97+

two kind of sidesQUANTITY

0.97+

twenty twentyQUANTITY

0.96+

one sprintQUANTITY

0.95+

RocketORGANIZATION

0.94+

CIA TechnologiesORGANIZATION

0.93+

CobellPERSON

0.92+

2019DATE

0.92+

ClaytonPERSON

0.92+

first thingQUANTITY

0.91+

two guestsQUANTITY

0.91+

OneQUANTITY

0.91+

a decadeQUANTITY

0.9+

eightyQUANTITY

0.89+

MohrORGANIZATION

0.87+

one wayQUANTITY

0.86+

CobolPERSON

0.84+

ChineseOTHER

0.81+

Zoe BrightPERSON

0.8+

BrockORGANIZATION

0.78+

yearsQUANTITY

0.75+

MayDATE

0.74+

each programmingQUANTITY

0.72+

distributed systemQUANTITY

0.67+

Glad CoORGANIZATION

0.66+

IBM ThinkORGANIZATION

0.66+

couple pointsQUANTITY

0.64+

nineteenQUANTITY

0.63+

CubeCOMMERCIAL_ITEM

0.62+

sixQUANTITY

0.6+

JenkinsTITLE

0.6+

ThinkCOMMERCIAL_ITEM

0.6+

ninetiesQUANTITY

0.55+

opsTITLE

0.51+

icloudTITLE

0.47+

Dizzy SyriaPERSON

0.46+

MohrPERSON

0.37+

Stanley Toh, Broadcom - ServiceNow Knowledge 2017 - #Know17 - #theCUBE


 

(exciting, upbeat music) >> (Announcer) Live from Orlando, Florida. It's theCUBE, covering ServiceNow Knowledge '17. Brought to you by ServiceNow. >> We're back. Dave Vellante with Jeff Frick. This is theCube and we're here at ServiceNow Knowledge '17. Stanley Toh is here, he's the Global IT Director at semiconductor manufacturer Broadcom. Stanley, thanks for coming to theCUBE. >> Nice to be here. >> So, semiconductor, hot space right now. Things are going crazy and it's a good market, booming. That's good, it's always good to be in a hot space. But we're here at Knowledge. Maybe talk a little bit about your role, and then we'll get into what you're doing with ServiceNow. >> Sure. You're right. Semiconductor is booming. But we don't do anything sexy. Everything is components that go into your iPhones and stuff like that. They do the sexy stuff. We do the thing that make it work. So, I'm the what we call the Enterprise and User Services Director, so basically anything that touches the end user, from the help desk to collaboration to your PC support desk, everything is under. Basically anything that touches the end user, even onboarding, and then, now with the latest, we actually moved our old customer support portal to even ServiceNow CSM. >> Okay, so what led you to ServiceNow? Maybe take us back, and take us through the before and the after. >> Okay. Broadcom Limited, before we changed our name to Broadcom, we were Avago Technologies. We are very cloud centric. Anything that we can move to the cloud, we moved to the cloud. So we were the first multi-billion dollar company to move to Google, back in 2007. That was 10 years ago. And then we never stopped since. We have Opta, we have Workday. And if you look at it, all this cloud technology works so well with ServiceNow. And ServiceNow is a platform that has all the API and connectors to all these other cloud platforms. So, when we were looking and evaluating, first as just the ITSM replacement, we selected ServiceNow because of the ease of integration. But as we get into ServiceNow, and as we learn ServiceNow, we found that it's not just an ITSM platform. You can use it for HR, for finance, for legal, for facilities. Recently, probably about six months ago, we launched the HR module. And then three weeks ago, we went live with a CSM portal for the external customer. >> When you say you go back to 2007 with Google, you're talking about what, Google Docs? >> Everything. >> Dave: Everything. >> Email, calendar, docs, sites, Drive, but it was unknown. >> Dave: All the productivity stuff. >> Everything. >> Dave: Outsourced stuff. >> They were unknown then, >> Jeff: Right, right, right. >> And it's a risk. >> So what was the conversation to take that risk? Because obviously there was a lot of concern at the enterprise level on some of these cloud services beyond test/dev in the early days. Obviously you made the right bet, it worked out pretty well. (Stanley laughing) But I'm curious, what were the conversations and why did you ultimately decide to make that bet? >> Okay. So 2007 was just after the downturn. >> Jeff: Right. >> So everyone was looking at cost, at supportability. But at the same time, the mobile phone, the smart phone is just exploding in the market. So we want something that is very flexible, very scalable, and very easy to integrate, plus also give you mobility. So that's why we went with Google as the first cloud platform, but then we started adding. So right now, we can basically do everything on your smart phone. We have Opta as our single sign-on. From one portal, I go everywhere. >> Dave: Okay, so that's good. So you talked about some of the criteria for the platform. How has that affected how you do business, how you do IT business? >> See, IT has always been looked upon as a cost center. And we are always slow, legacy system, hard to use, we don't listen to you. (Jeff laughing) >> Dave: What do those guys do? >> You know, why are we paying those guys, right? And then you look at all the consumer stuff. They are sexy, they are mobile, they have pretty pictures. Now all your internal users want the same experience. So, the experience has changed. The old UNIX command key doesn't work anymore. They want something touch, GUI, mobile. They want the feel, the color, you know. >> That might be the best description (Stanley laughing) of the consumerization of IT, Dave, that we've ever had on theCUBE. >> It's really honest. Coming from an IT person, it is, it is honest. And now you've driven ServiceNow into other areas beyond IT. >> Stanley: Yes. >> You mentioned HR. >> HR. We went live six months ago. >> Okay. And these other areas, are you thinking about it, looking at it, or? >> So we are also looking with legal, because they have a lot of legal documents and NDAs and stuff like that. And ServiceNow have a very nice integration to DocuSign and Vox. So we are looking at that. But the latest one, we went live three weeks ago, is the CSM, the customer support management portal. And that one actually replaced one of our legacy system that has a stack of sixteen application running. And we collapsed that, and went live on ServiceNow CSM three weeks ago. >> And what has been, two impacts - the business impact, and, I'm curious, is it the culture impact. You sort of set it up as the attitude. We had fun with it, but it's true. What's the business impact? And what has the cultural impact been? >> The last few years, we have been doing a lot of acquisition. So we have been bringing in a lot of new BU's. Business units. And they want things to move fast, and we want to integrate them into one brand. So speed and agility is key when you do acquisitions. So that's why we are moving into a platform where we can integrate all these new companies easily. We found that in ServiceNow and we can integrate them. So for example, when we acquired Broadcom Corporation, they have 18,000 employees. We onboarded them on day one, and usually when you do an acquisition, they don't give you the employee information until the last minute. Two days, all I need, is to bring them all on, onboarded into my collaboration suite. I only need two days of the information, and on day one, Turn it on, they are live. Their information is in, they have an email account. All their information is in ServiceNow. They call one help desk, they call our help desk, they get all the help and services. So it's fully integrated on day one itself. >> And you guys also own LSI now, right? >> Yes, LSI. >> Emulex? >> Emulex, PLX. >> PLX. >> The latest acquisition is Brocade, which we will close in the summer. And then, the rumored Toshiba NAND business. So, yeah, we are doing a lot of acquisitions. >> Yeah, quite a roll-up there. >> Correct. So as you can see, they are all very different companies. So when they come in, they have different culture. They have different workflow, they have different processes. But if you integrate them into a platform that we are very familiar right now, it's the consumerized look and feel, it's very easy to bring them in. >> And that is the cultural change that has occurred. >> Yes, it's a huge, >> So do people love IT now? >> They still hate IT. (Jeff and Dave laughing) They still say iT is a cost center. But right now, they are coming around. They see that we are bringing value to them. So right now, IT is just not to provide you the basic. IT is to enable the business to be better and more competitive. >> A true partner for the business. >> Yes, correct. >> Stanley, thanks very much for coming to theCUBE. It was great to hear your story, we appreciate it. >> Stanley: Thanks for having me. >> You're welcome. All right, keep it right there, buddy. We'll be back with our next guest. This is theCUBE, we're live from ServiceNow Knowledge '17. We'll be right back. (upbeat music)

Published Date : May 10 2017

SUMMARY :

Brought to you by ServiceNow. Stanley Toh is here, he's the Global IT Director That's good, it's always good to be in a hot space. from the help desk to collaboration Okay, so what led you to ServiceNow? And ServiceNow is a platform that has all the API Drive, but it was unknown. and why did you ultimately decide to make that bet? So right now, we can basically do everything So you talked about some of the criteria for the platform. And we are always slow, legacy system, hard to use, And then you look at all the consumer stuff. That might be the best description And now you've driven ServiceNow are you thinking about it, looking at it, or? But the latest one, we went live three weeks ago, and, I'm curious, is it the culture impact. So we have been bringing in a lot of new BU's. And then, the rumored Toshiba NAND business. that we are very familiar right now, So right now, IT is just not to provide you the basic. It was great to hear your story, we appreciate it. This is theCUBE, we're live from ServiceNow Knowledge '17.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JeffPERSON

0.99+

StanleyPERSON

0.99+

DavePERSON

0.99+

Jeff FrickPERSON

0.99+

Dave VellantePERSON

0.99+

BroadcomORGANIZATION

0.99+

two daysQUANTITY

0.99+

2007DATE

0.99+

EmulexORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

Broadcom CorporationORGANIZATION

0.99+

18,000 employeesQUANTITY

0.99+

Broadcom LimitedORGANIZATION

0.99+

Two daysQUANTITY

0.99+

PLXORGANIZATION

0.99+

Orlando, FloridaLOCATION

0.99+

Avago TechnologiesORGANIZATION

0.99+

10 years agoDATE

0.99+

iPhonesCOMMERCIAL_ITEM

0.99+

BrocadeORGANIZATION

0.99+

UNIXTITLE

0.99+

three weeks agoDATE

0.99+

six months agoDATE

0.99+

ServiceNowORGANIZATION

0.99+

Stanley TohPERSON

0.98+

one portalQUANTITY

0.98+

ServiceNowTITLE

0.98+

KnowledgeORGANIZATION

0.98+

LSIORGANIZATION

0.98+

one brandQUANTITY

0.97+

first multi-billion dollarQUANTITY

0.97+

Google DocsTITLE

0.97+

firstQUANTITY

0.97+

about six months agoDATE

0.96+

first cloud platformQUANTITY

0.96+

two impactsQUANTITY

0.96+

one help deskQUANTITY

0.96+

sixteen applicationQUANTITY

0.94+

day oneQUANTITY

0.92+

2017DATE

0.91+

oneQUANTITY

0.9+

theCUBEORGANIZATION

0.88+

ServiceNow Knowledge '17ORGANIZATION

0.83+

#Know17EVENT

0.81+

VoxORGANIZATION

0.73+

single signQUANTITY

0.72+

DocuSignORGANIZATION

0.72+

last few yearsDATE

0.69+

Toshiba NANDORGANIZATION

0.69+

CSMORGANIZATION

0.68+

theCubeORGANIZATION

0.59+

ServiceNow KnowledgeORGANIZATION

0.58+

ServiceNow CSMTITLE

0.58+

CSMTITLE

0.54+

Knowledge '17TITLE

0.53+

OptaTITLE

0.52+

Odded Solomon, VMware & Jared Woodrey, Dell Technologies | MWC Barcelona 2023


 

>> Narrator: theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Welcome back to Barcelona, Spain, everyone. It's theCUBE live at MWC '23, day three of four days of CUBE coverage. It's like a cannon of CUBE content coming right at you. I'm Lisa Martin with Dave Nicholson. We've got Dell and VMware here. Going to be talking about the ecosystem partnerships and what they're doing to further organizations in the telco industry. Please welcome Jared Woodrey, Director of Partner Engineering Open Telecom Ecosystem Lab, OTEL. Odded Solomon is here as well, Director of Product Management, VMware Service Provider and Edge Business Unit at VMware. Guys, great to have you on the program. >> Thank you for having me. >> Welcome to theCUBE. So Jared, first question for you. Talk about OTEL. I know there's a big announcement this week, but give the audience context and understanding of what OTEL is and how it works. >> Sure. So the Open Telecom Ecosystem Lab is physically located at Round Rock, Texas, it's the heart and soul of it. But this week we also just announced opening up the Cork, Ireland extension of OTEL. The reason for our existence is to to try and make it as easy as possible for both partners and customers to come together and to re-aggregate this disaggregated ecosystem. So that comes with a number of automation tools and basically just giving a known good testing environment so that tests that happen in our lab are as close to real world as they possibly can be and make it as transparent and open as possible for both partners like VMware as well as customers. >> Odded, talk about what you're doing with Dell and OTEL and give us a customer example of maybe one that you're working with or even even mentioning it by a high level descriptor if you have to. >> Yeah. So we provide a telco cloud platform, which is essentially a vertical in VMware. The telco cloud platform is serving network function vendors, such as Ericsson, Nokia, Mavenir, and so on. What we do with Dell as part of this partnership is essentially complementing the platform with some additional functionality that is not coming out of the box. We used to have a data protection in the past, but this is no longer our main business focus. So we do provide APIs that we can expose and work together with Dell PPDM solution so customer can benefit from this and leverage the partnership and have overall solution that is not coming out of the box from VMware. >> I'm curious, from a VMware perspective. VMware is associated often with the V in VMware, virtualization, and we've seen a transition over time between sort of flavors of virtualization and what is the mix currently today in the telecom space between environments that are leveraging what we would think of as more traditional virtualization with full blown Linux, Windows operating systems in a VM versus the world of containerized microservices? What does that mix look like today? Where do you see it going? >> Yeah, so the VMware telco cloud platform exists for about eight years. And the V started around that time. You might heard about open stack in addition to VMware. So this has definitely helped the network equipment providers with virtualizing their network functions. Those are typically VNF, virtualized network functions, inside the VMs. Essentially we have 4G applications, so core applications, EPC, we have IMS. Those are typically, I would say maybe 80 or 90% of the ecosystem right now. 5G is associated with cloud native network functions. So 5G is getting started now, getting deployed. There is an exponential growth on the core side. Now, when we expand towards the edge of the network we see more potential growth. This is 5G ran, we see the vRAN, we see the open RAN, we see early POCs, we see field trials that are starting. We obviously has production customer now. You just spoke to one. So this is really starting, cloud native is really starting I would say about 10 to 20% of the network functions these days are cloud native. >> Jared, question for you. You mentioned data protection, a huge topic there obviously from a security perspective. Data protection used to be the responsibility of the CSPs. You guys are changing that. Can you talk a little bit about how you're doing that and what Dell's play there is? >> Yeah, so PowerProtect Data Management is a product, but it's produced by Dell. So what this does is it enables data protection over virtual cloud as well as the physical infrastructure of specifically in this case of a telecoms ecosystem. So what this does is enables an ability to rapidly redeploy and back up existing configurations all the way up to the TCP and TCA that pulls the basis of our work here with VMware. >> So you've offloaded that responsibility from the CSPs. You freed them from that. >> So the work that we did, honestly was to make sure that we have a very clear and concise and accurate procedures for how to conduct this as well. And to put this through a realistic and real world as if it was in a telecoms own production network, what did that would actually look like, and what it would take to bring it back up as well. So our responsibility is to make sure that when we when we provide these products to the customers that not only do they work exactly as their intended to, but there is also documentation to help support them and to enable them to have their exact specifications met by as well. >> Got it. So talk about a little bit about OTEL expansion into Cork. What you guys are doing together to enable CSPs here in EMEA? >> Yeah, so the reason why we opened up a facility in Cork Island was to give, for an EMEA audience, for an EMEA CSPs and ability to look and feel and touch some of the products that we're working on. It also just facilitates and ease especially for European-based partners to have a chance to very easily come to a lab environment. The difference though, honestly, is the between Round Rock, Texas and Cork Island is that it's virtually an extension of the same thing. Like the physical locations can make it easier to provide access and obviously to showcase the products that we've developed with partners. But the reality is that it's more than just the physical location. It's more about the ability and ease by which customers and partners can access the labs. >> So we should be expecting a lot of Tito's vodka to be consumed in Cork at some point. Might change the national beverage. >> We do need to have some international exchange. >> Yeah, no, that's good to know. Odded, on the VMware side of things. There's a large group of folks who have VMware skillsets. >> Odded: Correct. >> The telecom industry is moving into this world of the kind of agility that those folks are familiar with. How do people come out of the traditional VMware virtualization world and move into that world of cloud native applications and serve the telecom space? What would your recommendation be? If you were speaking at a VMUG, a VMware Users Group meeting with all of your telecom background, what would you share with them that's critical to understand about how telecom is different, or how telecom's spot in its evolution might be different than the traditional IT space? >> So we're talking about the people with the knowledge and the background of. >> Yeah, I'm a V expert, let's say. And I'm looking into the future and I hear that there are 80,000 people in Barcelona at this event, and I hear that Dell is building optimized infrastructure specifically for telecom, and that VMware is involved. And I'm an expert in VMware and I want to be involved. What do I need to do? I know it's a little bit outside of the box question, but especially against the backdrop of economic headwinds globally, there are a lot of people facing transitions. What are your thoughts there? >> So, first of all, we understand the telco requirements, we understand the telco needs, and we make sure that what we learn from the customers, what we learn from the partners is being built into the VMware products. And simplicity is number one thing that is important for us. We want the customer experience, we want the user experience to be the same as they know even though we are transitioning into cloud native networks that require more frequent upgrades and they have more complexity to be honest. And what we do in our vertical inside VMware we are focusing on automation, telco cloud automation, telco cloud service assurance. Think of it as a wrapper around the SDDC stack that we have from VMware that really simplifies the operations for the telcos because it's really a challenge about skillset. You need to be a DevOps, SRE in order to operate these networks. And things are becoming really complex. We simplify it for them with the same VMware experience. We have a very good ability to do that. We sell products in VMware. Unlike our competition that is mostly selling professional services and support, we try to focus more on the products and delivering the value. Of course, we have services offering because telcos requires some customizations, but we do focus on automation simplicity throughout our staff. >> So just follow up. So in other words the investment in education in this VMware ecosystem absolutely can be extended and applied into the telecom world. I think it's an important thing. >> I was going to add to that. Our engagement in OTEL was also something that we created a solutions brief whether we released from Mobile World Congress this week. But in conjunction with that, we also have a white paper coming out that has a much more expansive explanation and documentation of what it was that we accomplished in the work that we've done together. And that's not something that is going to be a one-off thing. This is something that will stay evergreen that we'll continue to expand both the testing scope as well as the documentation for what this solution looks like and how it can be used as well as documentation on for the V experts for how they can then leverage and realize the the potential for what we're creating together. >> Jared, does Dell look at OTEL as having the potential to facilitate the continued evolution of the actual telco industry? And if so, how? >> Well, I mean, it would be a horrible answer if I were to say no to that. >> Right. >> I think, I honestly believe that one of the most difficult things about this idea of having desired ecosystem is not just trying to put it back together, but then also how to give yourself choice. So each time that you build one of those solution sets like that exists as an island out of all the other possibilities that comes with it. And OTEL seeks to not just be able to facilitate building that first solution set. Like that's what solutions engineering can do. And that's generally done relatively protected and internally. The Open Telecom Ecosystem seeks to build that then to also provide the ability to very easily change specific components of that whether that's a hardware component, a NIC, whether a security pass just came out or a change in either TCP or TCA or we talked a little bit about for this specific engagement that it was done on TCP 2.5. >> Odded: Correct. >> Obviously there's already a 2.7 and 3.0 is coming out. It's not like we're going to sit around and write our coattails of what 2.7 has happened. So this isn't intended to be a one and done thing. So when we talk about trying to make that easier and simpler and de-risk all of the risk that comes from trying to put all these things together, it's not just the the one single solution that you built in the lab. It's what's the next one? And how do I optimize this? And I have specific requirements as a CSP, how can I take something you built that doesn't quite match it, but how do I make that adjustment? So that's what we see to do and make it as easy and as painless as possible. >> What's the engagement model with CSPs? Is it led by Dell only, VMware partner? How does that work? >> Yeah, I can take that. So that depends on the customer, but typically customers they want to choose the cloud vendor. So they come to VMware, we want VMware. Typically, they come from the IT side. They said, "Oh, we want to manage the network side of the house the same way as we manage the IT. We don't want to have special skill sets, special teams." So they move from the IT to the network side and they want VMware there. And then obviously they have an RSP process and they have hardware choices. They can go with Dell, they can go with others. We leverage vSphere, other compatibility. So we can be flexible with the customer choice. And then depending on which customer, how large they are, they select the network equipment provider that the runs on top. We position our platform as multi-vendor. So many of them choose multiple network functions providers. So we work with Dell. So assuming that the customer is choosing Dell. We work very closely with them, offering the best solution for the customer. We work with them sometimes to even design the boxes to make sure that it fits their use cases and to make sure that it works properly. So we have a partnership validation certification end-to-end from the applications all the way down to the hardware. >> It's a fascinating place in history to be right now with 5G. Something that a lot of consumers sort of assume. It's like, "Oh, hey, yeah, we're already there. What's the 6G thing going to look like?" Well, wait a minute, we're just at the beginning stages. And so you talk about disaggregation, re-aggregation, or reintegration, the importance of that. Folks like Dell have experience in that space. Folks at VMware have a lot of experience in the virtualization space, but I heard that VMware is being acquired by Broadcom, if it all goes through, of course. You don't need to comment on it. But you mentioned something, SDDC, software-defined data center. That stack is sometimes misunderstood by the public at large and maybe the folks in the EU, I will editorialize for a moment here. It is eliminating capture in a way by larger hyperscale cloud providers. It absolutely introduces more competition into the market space. So it's interesting to hear Broadcom acknowledging that this is part of the future of VMware, no matter what else happens. These capabilities that spill into the telecom space are something that they say they're going to embrace and extend. I think that's important for anyone who's evaluating this if they're concern. Well, wait a minute. Yeah, when I reintegrate, do I want VMware as part of this mix? Is that an unknown? It's pretty clear that that's something that is part of the future of VMware moving forward. That's my personal opinion based on analysis. But you brought up SDDC, so I wanted to mention that. Again, I'm not going to ask you to get into trouble on that at all. What should we be, from a broad perspective, are there any services, outcomes that are going to come out of all of this work? The agility that's being built by you folks and folks in the open world. Are there any specific things that you personally are excited about? Or when we think about consumer devices, getting data, what are the other kinds of things that this facilitates? Anything cool, either one of you. >> So specific use cases? >> Yeah, anything. It's got to be cool though. If it's not cool we're going to ask you to leave. >> All right. I'll take that challenge. (laughs) I think one of the things that is interesting for something like OTEL as an exist, as being an Open Telecom Ecosystem, there are going to be some CSPs that it's very difficult for them to have this optionality existing for themselves. Especially when you start talking about tailoring it for specific CSPs and their needs. One of the things that becomes much more available to some of the smaller CSPs is the ability to leverage OTEL and basically act as one of their pre-production labs. So this would be something that would be very specific to a customer and we would obviously make sure that it's completely isolated but the intention there would be that it would open up the ability for what would normally take a much longer time period for them to receive some of the benefits of some of the changes that are happening within the industry. But they would have immediate benefit by leveraging specifically looking OTEL to provide them some of their solutions. And I know that you were also looking for specific use cases out of it, but like that's a huge deal for a lot of CSPs around the world that don't have the ability to lay out all the different permutations that they are most interested in and start to put each one of those through a test cycle. A specific use cases for what this looks like is honestly the most exciting that I've seen for right now is on the private 5G networks. Specifically within mining industry, we have a, sorry for the audience, but we have a demo at our booth that starts to lay out exactly how it was deployed and kind of the AB of what this looked like before the world of private 5G for this mining company and what it looks like afterwards. And the ability for both safety, as well as operational costs, as well as their ability to obviously do their job better is night and day. It completely opened up a very analog system and opened up to a very digitalized system. And I would be remiss, I didn't also mention OpenBrew, which is also an example in our booth. >> We saw it last night in action. >> We saw it. >> I hope you did. So OpenBrew is small brewery in Northeast America and we basically took a very manual process of checking temperature and pressure on multiple different tanks along the entire brewing process and digitized everything for them. All of that was enabled by a private 5G deployment that's built on Dell hardware. >> You asked for cool. I think we got it. >> Yeah, it's cool. >> Jared: I think beer. >> Cool brew, yes. >> Root beer, I think is trump card there. >> At least for folks from North America, we like our brew cool. >> Exactly. Guys, thank you so much for joining Dave and me talking about what Dell, OTEL, and VMware are doing together, what you're enabling CSPs to do and achieve. We appreciate your time and your insights. >> Absolutely. >> Thank you. >> All right, our pleasure. For our guests and for Dave Nicholson, I'm Lisa Martin. You watching theCUBE live from MWC '23. Day three of our coverage continues right after a short break. (upbeat music)

Published Date : Mar 1 2023

SUMMARY :

that drive human progress. in the telco industry. but give the audience context So the Open Telecom Ecosystem Lab of maybe one that you're working with that is not coming out of the box. and what is the mix currently of the network functions responsibility of the CSPs. that pulls the basis of responsibility from the CSPs. So the work that we did, to enable CSPs here in EMEA? and partners can access the labs. Might change the national beverage. We do need to have some Odded, on the VMware side of things. and serve the telecom space? So we're talking about the people and I hear that there are 80,000 people that really simplifies the and applied into the telecom world. and realize the the potential Well, I mean, it would that one of the most difficult and simpler and de-risk all of the risk So that depends on the customer, that is part of the future going to ask you to leave. that don't have the ability to lay out All of that was enabled I think we got it. we like our brew cool. CSPs to do and achieve. You watching theCUBE live from MWC '23.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MichielPERSON

0.99+

AnnaPERSON

0.99+

DavidPERSON

0.99+

BryanPERSON

0.99+

JohnPERSON

0.99+

IBMORGANIZATION

0.99+

MichaelPERSON

0.99+

ChrisPERSON

0.99+

NECORGANIZATION

0.99+

EricssonORGANIZATION

0.99+

KevinPERSON

0.99+

Dave FramptonPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Kerim AkgonulPERSON

0.99+

Dave NicholsonPERSON

0.99+

JaredPERSON

0.99+

Steve WoodPERSON

0.99+

PeterPERSON

0.99+

Lisa MartinPERSON

0.99+

NECJORGANIZATION

0.99+

Lisa MartinPERSON

0.99+

Mike OlsonPERSON

0.99+

AmazonORGANIZATION

0.99+

DavePERSON

0.99+

Michiel BakkerPERSON

0.99+

FCAORGANIZATION

0.99+

NASAORGANIZATION

0.99+

NokiaORGANIZATION

0.99+

Lee CaswellPERSON

0.99+

ECECTORGANIZATION

0.99+

Peter BurrisPERSON

0.99+

OTELORGANIZATION

0.99+

David FloyerPERSON

0.99+

Bryan PijanowskiPERSON

0.99+

Rich LanePERSON

0.99+

KerimPERSON

0.99+

Kevin BoguszPERSON

0.99+

Jeff FrickPERSON

0.99+

Jared WoodreyPERSON

0.99+

LincolnshireLOCATION

0.99+

KeithPERSON

0.99+

Dave NicholsonPERSON

0.99+

ChuckPERSON

0.99+

JeffPERSON

0.99+

National Health ServicesORGANIZATION

0.99+

Keith TownsendPERSON

0.99+

WANdiscoORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

MarchDATE

0.99+

NutanixORGANIZATION

0.99+

San FranciscoLOCATION

0.99+

IrelandLOCATION

0.99+

Dave VellantePERSON

0.99+

Michael DellPERSON

0.99+

RajagopalPERSON

0.99+

Dave AllantePERSON

0.99+

EuropeLOCATION

0.99+

March of 2012DATE

0.99+

Anna GleissPERSON

0.99+

SamsungORGANIZATION

0.99+

Ritika GunnarPERSON

0.99+

Mandy DhaliwalPERSON

0.99+

Deania Davidson, Dell Technologies & Dave Lincoln, Dell Technologies | MWC Barcelona 2023


 

>> Narrator: theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Hey everyone and welcome back to Barcelona, Spain, it's theCUBE. We are live at MWC 23. This is day two of our coverage, we're giving you four days of coverage, but you already know that because you were here yesterday. Lisa Martin with Dave Nicholson. Dave this show is massive. I was walking in this morning and almost getting claustrophobic with the 80,000 people that are joining us. There is, seems to be at MWC 23 more interest in enterprise-class technology than we've ever seen before. What are some of the things that you've observed with that regard? >> Well I've observed a lot of people racing to the highest level messaging about how wonderful it is to have the kiss of a breeze on your cheek, and to feel the flowing wheat. (laughing) I want to hear about the actual things that make this stuff possible. >> Right. >> So I think we have a couple of guests here who can help us start to go down that path of actually understanding the real cool stuff that's behind the scenes. >> And absolutely we got some cool stuff. We've got two guests from Dell. Dave Lincoln is here, the VP of Networking and Emerging the Server Solutions, and Deania Davidson, Director Edge Server Product Planning and Management at Dell. So great to have you. >> Thank you. >> Two Daves, and a Davidson. >> (indistinct) >> Just me who stands alone here. (laughing) So guys talk about, Dave, we'll start with you the newest generation of PowerEdge servers. What's new? Why is it so exciting? What challenges for telecom operators is it solving? >> Yeah, well so this is actually Dell's largest server launch ever. It's the most expansive, which is notable because of, we have a pretty significant portfolio. We're very proud of our core mainstream portfolio. But really since the Supercompute in Dallas in November, that we started a rolling thunder of launches. MWC being part of that leading up to DTW here in May, where we're actually going to be announcing big investments in those parts of the market that are the growth segments of server. Specifically AIML, where we in, to address that. We're investing heavy in our XE series which we, as I said, we announced at Supercompute in November. And then we have to address the CSP segment, a big investment around the HS series which we just announced, and then lastly, the edge telecom segment which we're, we had the biggest investment, biggest announce in portfolio launch with XR series. >> Deania, lets dig into that. >> Yeah. >> Where we see the growth coming from you mentioned telecom CSPs with the edge. What are some of the growth opportunities there that organizations need Dell's help with to manage, so that they can deliver what they're demanding and user is wanting? >> The biggest areas being obviously, in addition the telecom has been the biggest one, but the other areas too we're seeing is in retail and manufacturing as well. And, so internally, I mean we're going to be focused on hardware, but we also have a solutions team who are working with us to build the solutions focused on retail, and edge and telecom as well on top of the servers that we'll talk about shortly. >> What are some of the biggest challenges that retailers and manufacturers are facing? And during the pandemic retailers, those that were successful pivoted very quickly to curbside delivery. >> Deania: Yeah. >> Those that didn't survive weren't able to do that digitally. >> Deania: Yeah. >> But we're seeing such demand. >> Yeah. >> At the retail edge. On the consumer side we want to get whatever we want right now. >> Yes. >> It has to be delivered, it has to be personalized. Talk a little bit more about some of the challenges there, within those two verticals and how Dell is helping to address those with the new server technologies. >> For retail, I think there's couple of things, the one is like in the fast food area. So obviously through COVID a lot of people got familiar and comfortable with driving through. >> Lisa: Yeah. >> And so there's probably a certain fast food restaurant everyone's pretty familiar with, they're pretty efficient in that, and so there are other customers who are trying to replicate that, and so how do we help them do that all, from a technology perspective. From a retail, it's one of the pickup and the online experience, but when you go into a store, I don't know about you but I go to Target, and I'm looking for something and I have kids who are kind of distracting you. Its like where is this one thing, and so I pull up the Target App for example, and it tells me where its at, right. And then obviously, stores want to make more money, so like hey, since you picked this thing, there are these things around you. So things like that is what we're having conversations with customers about. >> It's so interesting because the demand is there. >> Yeah, it is. >> And its not going to go anywhere. >> No. >> And it's certainly not going to be dialed down. We're not going to want less stuff, less often. >> Yeah (giggles) >> And as typical consumers, we don't necessarily make the association between what we're seeing in the palm of our hand on a mobile device. >> Deania: Right. >> And the infrastructure that's actually supporting all of it. >> Deania: Right. >> People hear the term Cloud and they think cloud-phone mystery. >> Yeah, magic just happens. >> Yeah. >> Yeah. >> But in fact, in order to support the things that we want to be able to do. >> Yeah. >> On the move, you have to optimize the server hardware. >> Deania: Yes. >> In certain ways. What does that mean exactly? When you say that its optimized, what are the sorts of decisions that you make when you're building? I think of this in the terms of Lego bricks. >> Yes, yeah >> Put together. What are some of the decisions that you make? >> So there were few key things that we really had to think about in terms of what was different from the Data center, which obviously supports the cloud environment, but it was all about how do we get closer to the customer right? How do we get things really fast and how do we compute that information really quickly. So for us, it's things like size. All right, so our server is going to weigh one of them is the size of a shoe box and (giggles), we have a picture with Dave. >> Dave: It's true. >> Took off his shoe. >> Its actually, its actually as big as a shoe. (crowd chuckles) >> It is. >> It is. >> To be fair, its a pretty big shoe. >> True, true. >> It is, but its small in relative to the old big servers that you see. >> I see what you're doing, you find a guy with a size 12, (crowd giggles) >> Yeah. >> Its the size of your shoe. >> Yeah. >> Okay. >> Its literally the size of a shoe, and that's our smallest server and its the smallest one in the portfolio, its the XR 4000, and so we've actually crammed a lot of technology in there going with the Intel ZRT processors for example to get into that compute power. The XR 8000 which you'll be hearing a lot more about shortly with our next guest is one I think from a telco perspective is our flagship product, and its size was a big thing there too. Ruggedization so its like (indistinct) certification, so it can actually operate continuously in negative 5 to 55 C, which for customers, or they need that range of temperature operation, flexibility was a big thing too. In meaning that, there are some customers who wanted to have one system in different areas of deployment. So can I take this one system and configure it one way, take that same system, configure another way and have it here. So flexibility was really key for us as well, and so we'll actually be seeing that in the next segment coming. >> I think one of, some of the common things you're hearing from this is our focus on innovation, purpose build servers, so yes our times, you know economic situation like in itself is tough yeah. But far from receding we've doubled down on investment and you've seen that with the products that we are launching here, and we will be launching in the years to come. >> I imagine there's a pretty sizeable day impact to the total adjustable market for PowerEdge based on the launch what you're doing, its going to be a tam, a good size tam expansion. >> Yeah, absolutely. Depending on how you look at it, its roughly we add about $30 Billion of adjustable tam between the three purposeful series that we've launched, XE, HS and XR. >> Can you comment on, I know Dell and customers are like this. Talk about, I'd love to get both of your perspective, I'm sure you have a favorite customer stories. But talk about the involvement of the customer in the generation, and the evolution of PowerEdge. Where are they in that process? What kind of feedback do they deliver? >> Well, I mean, just to start, one thing that is essential Cortana of Dell period, is it all is about the customer. All of it, everything that we do is about the customer, and so there is a big focus at our level, from on high to get out there and talk with customers, and actually we have a pretty good story around XR8000 which is call it our flagship of the XR line that we've just announced, and because of this deep customer intimacy, there was a last minute kind of architectural design change. >> Hm-mm. >> Which actually would have been, come to find out it would have been sort of a fatal flaw for deployment. So we corrected that because of this tight intimacy with our customers. This was in two Thanksgiving ago about and, so anyways it's super cool and the fact that we were able to make a change so late in development cycle, that's a testament to a lot of the speed and, speed of innovation that we're driving, so anyway that was that's one, just case of one example. >> Hm-mm. >> Let talk about AI, we can't go to any trade show without talking about AI, the big thing right now is ChatGPT. >> Yeah. >> I was using it the other day, it's so interesting. But, the growing demand for AI, talk about how its driving the evolution of the server so that more AI use cases can become more (indistinct). >> In the edge space primarily, we actually have another product, so I guess what you'll notice in the XR line itself because there are so many different use cases and technologies that support the different use cases. We actually have a range form factor, so we have really small, I guess I would say 350 ml the size of a shoe box, you know, Dave's shoe box. (crowd chuckles) And then we also have, at the other end a 472, so still small, but a little bit bigger, but we did recognize obviously AI was coming up, and so that is our XR 7620 platform and that does support 2 GPUs right, so, like for Edge infrencing, making sure that we have the capability to support customers in that too, but also in the small one, we do also have a GPU capability there, that also helps in those other use cases as well. So we've built the platforms even though they're small to be able to handle the GPU power for customers. >> So nice tight package, a lot of power there. >> Yes. >> Beside as we've all clearly demonstrated the size of Dave's shoe. (crowd chuckles) Dave, talk about Dell's long standing commitment to really helping to rapidly evolve the server market. >> Dave: Yeah. >> Its a pivotal payer there. >> Well, like I was saying, we see innovation, I mean, this is, to us its a race to the top. You talked about racing and messaging that sort of thing, when you opened up the show here, but we see this as a race to the top, having worked at other server companies where maybe its a little bit different, maybe more of a race to the bottom source of approach. That's what I love about being at Dell. This is very much, we understand that it's innovation is that is what's going to deliver the most value for our customers. So whether its some of the first to market, first of its kind sort of innovation that you find in the XR4000, or XR8000, or any of our XE line, we know that at the end of day, that is what going to propel Dell, do the best for our customers and thereby do the best for us. To be honest, its a little bit surprising walking by some of our competitors booths, there's been like a dearth of zero, like no, like it's almost like you wouldn't even know that there was a big launch here right? >> Yeah. >> Or is it just me? >> No. >> It was a while, we've been walking around and yet we've had, and its sort of maybe I should take this as a flattery, but a lot of our competitors have been coming by to our booth everyday actually. >> Deania: Yeah, everyday. >> They came by multiple times yesterday, they came by multiple times today, they're taking pictures of our stuff I kind of want to just send 'em a sample. >> Lisa: Or your shoe. >> Right? Or just maybe my shoe right? But anyway, so I suppose I should take it as an honor. >> Deania: Yeah. >> And conversely when we've walked over there we actually get in back (indistinct), maybe I need a high Dell (indistinct). (crowd chuckles) >> We just had that experience, yeah. >> Its kind of funny but. >> Its a good position to be in. >> Yeah. >> Yes. >> You talked about the involvement of the customers, talk a bit more about Dell's ecosystem is also massive, its part of what makes Dell, Dell. >> Wait did you say ego-system? (laughing) After David just. >> You caught that? Darn it! The talk about the influence or the part of the ecosystem and also some of the feedback from the partners as you've been rapidly evolving the server market and clearly your competitors are taking notice. >> Yeah, sorry. >> Deania: That's okay. >> Dave: you want to take that? >> I mean I would say generally, one of the things that Dell prides itself on is being able to deliver the worlds best innovation into the hands of our customers, faster and better that any other, the optimal solution. So whether its you know, working with our great partners like Intel, AMD Broadcom, these sorts of folks. That is, at the end of the day that is our core mantra, again its retractor on service, doing the best, you know, what's best for the customers. And we want to bring the world's best innovation from our technology partners, get it into the hands of our partners you know, faster and better than any other option out there. >> Its a satisfying business for all of us to be in, because to your point, I made a joke about the high level messaging. But really, that's what it comes down to. >> Lisa: Yeah. >> We do these things, we feel like sometimes we're toiling in obscurity, working with the hardware. But what it delivers. >> Deania: Hm-mm. >> The experiences. >> Dave: Absolutely. >> Deania: Yes. >> Are truly meaningful. So its a fun. >> Absolutely. >> Its a really fun thing to be a part of. >> It is. >> Absolutely. >> Yeah. Is there a favorite customer story that you have that really articulates the value of what Dell is doing, full PowerEdge, at the Edge? >> Its probably one I can't particularly name obviously but, it was, they have different environments, so, in one case there's like on flights or on sea vessels, and just being able to use the same box in those different environments is really cool. And they really appreciate having the small compact, where they can just take the server with them and go somewhere. That was really cool to me in terms of how they were using the products that we built for them. >> I have one that's kind of funny. It around XR8000. Again a customer I won't name but they're so proud of it, they almost kinds feel like they co defined it with us, they want to be on the patent with us so, anyways that's. >> Deania: (indistinct). >> That's what they went in for, yeah. >> So it shows the strength of the partnership that. >> Yeah, exactly. >> Of course, the ecosystem of partners, customers, CSVs, telecom Edge. Guys thank you so much for joining us today. >> Thank you. >> Thank you. >> Sharing what's new with the PowerEdge. We can't wait to, we're just, we're cracking open the box, we saw the shoe. (laughing) And we're going to be dealing a little bit more later. So thank you. >> We're going to be able to touch something soon? >> Yes, yes. >> Yeah. >> In couple of minutes? >> Next segment I think. >> All right! >> Thanks for setting the table for that guys. We really appreciate your time. >> Thank you for having us. >> Thank you. >> Alright, our pleasure. >> For our guests and for Dave Nicholson, I'm Lisa Martin . You're watching theCUBE. The leader in live tech coverage, LIVE in Barcelona, Spain, MWC 23. Don't go anywhere, we will be right back with our next guests. (gentle music)

Published Date : Feb 28 2023

SUMMARY :

that drive human progress. What are some of the have the kiss of a breeze that's behind the scenes. the VP of Networking and and a Davidson. the newest generation that are the growth segments of server. What are some of the but the other areas too we're seeing is What are some of the biggest challenges do that digitally. On the consumer side we some of the challenges there, the one is like in the fast food area. and the online experience, because the demand is there. going to be dialed down. in the palm of our hand And the infrastructure People hear the term Cloud the things that we want to be able to do. the server hardware. decisions that you make What are some of the from the Data center, its actually as big as a shoe. that you see. and its the smallest one in the portfolio, some of the common things for PowerEdge based on the between the three purposeful and the evolution of PowerEdge. flagship of the XR line and the fact that we were able the big thing right now is ChatGPT. the evolution of the server but also in the small one, a lot of power there. the size of Dave's shoe. the first to market, and its sort of maybe I should I kind of want to just send 'em a sample. But anyway, so I suppose I should take it we actually get in back (indistinct), involvement of the customers, Wait did you say ego-system? and also some of the one of the things that I made a joke about the we feel like sometimes So its a fun. that really articulates the the server with them they want to be on the patent with us so, So it shows the Of course, the ecosystem of partners, we saw the shoe. the table for that guys. we will be right back

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave NicholsonPERSON

0.99+

DavePERSON

0.99+

DeaniaPERSON

0.99+

Lisa MartinPERSON

0.99+

LisaPERSON

0.99+

MayDATE

0.99+

Dave LincolnPERSON

0.99+

DavidPERSON

0.99+

NovemberDATE

0.99+

DellORGANIZATION

0.99+

CortanaTITLE

0.99+

350 mlQUANTITY

0.99+

DallasLOCATION

0.99+

TargetORGANIZATION

0.99+

Dell TechnologiesORGANIZATION

0.99+

IntelORGANIZATION

0.99+

TwoQUANTITY

0.99+

XR 4000COMMERCIAL_ITEM

0.99+

four daysQUANTITY

0.99+

80,000 peopleQUANTITY

0.99+

two guestsQUANTITY

0.99+

XR 8000COMMERCIAL_ITEM

0.99+

XR8000COMMERCIAL_ITEM

0.99+

55 CQUANTITY

0.99+

2 GPUsQUANTITY

0.99+

Deania DavidsonPERSON

0.99+

XR4000COMMERCIAL_ITEM

0.99+

yesterdayDATE

0.99+

todayDATE

0.99+

two verticalsQUANTITY

0.99+

Barcelona, SpainLOCATION

0.98+

bothQUANTITY

0.98+

LegoORGANIZATION

0.98+

oneQUANTITY

0.98+

XR seriesCOMMERCIAL_ITEM

0.98+

one systemQUANTITY

0.98+

about $30 BillionQUANTITY

0.97+

SupercomputeORGANIZATION

0.97+

MWCEVENT

0.97+

zeroQUANTITY

0.95+

5QUANTITY

0.95+

firstQUANTITY

0.94+

MWC 23EVENT

0.94+

this morningDATE

0.94+

telcoORGANIZATION

0.93+

one wayQUANTITY

0.93+

DavidsonORGANIZATION

0.92+

coupleQUANTITY

0.92+

twoDATE

0.91+

EdgeORGANIZATION

0.91+

Phil Brotherton, NetApp | Broadcom’s Acquisition of VMware


 

(upbeat music) >> Hello, this is Dave Vellante, and we're here to talk about the massive $61 billion planned acquisition of VMware by Broadcom. And I'm here with Phil Brotherton of NetApp to discuss the implications for customers, for the industry, and NetApp's particular point of view. Phil, welcome. Good to see you again. >> It's great to see you, Dave. >> So this topic has garnered a lot of conversation. What's your take on this epic event? What does it mean for the industry generally, and customers specifically? >> You know, I think time will tell a little bit, Dave. We're in the early days. We've, you know, so we heard the original announcements and then it's evolved a little bit, as we're going now. I think overall it'll be good for the ecosystem in the end. There's a lot you can do when you start combining what VMware can do with compute and some of the hardware assets of Broadcom. There's a lot of security things that can be brought, for example, to the infrastructure, that are very high-end and cool, and then integrated, so it's easy to do. So I think there's a lot of upside for it. There's obviously a lot of concern about what it means for vendor consolidation and pricing and things like that. So time will tell. >> You know, when this announcement first came out, I wrote a piece, you know, how "Broadcom will tame the VMware beast," I called it. And, you know, looked at Broadcom's history and said they're going to cut, they're going to raise prices, et cetera, et cetera. But I've seen a different tone, certainly, as Broadcom has got into the details. And I'm sure I and others maybe scared a lot of customers, but I think everybody's kind of calming down now. What are you hearing from customers about this acquisition? How are they thinking about it? >> You know, I think it varies. There's, I'd say generally we have like half our installed base, Dave, runs ESX Server, so the bulk of our customers use VMware, and generally they love VMware. And I'm talking mainly on-prem. We're just extending to the cloud now, really, at scale. And there's a lot of interest in continuing to do that, and that's really strong. The piece that's careful is this vendor, the cost issues that have come up. The things that were in your piece, actually. And what does that mean to me, and how do I balance that out? Those are the questions people are dealing with right now. >> Yeah, so there's obviously a lot of talk about the macro, the macro headwinds. Everybody's being a little cautious. The CIOs are tapping the brakes. We all sort of know that story. But we have some data from our partner ETR that ask, they go out every quarter and they survey, you know, 1500 or so IT practitioners, and they ask the ones that are planning to spend less, that are cutting, "How are you going to approach that? What's your primary methodology in terms of achieving, you know, cost optimization?" The number one, by far, answer was to consolidate redundant vendors. It was like, it's now up to about 40%. The second, distant second, was, "We're going to, you know, optimize cloud costs." You know, still significant, but it was really that consolidating the redundant vendors. Do you see that? How does NetApp fit into that? >> Yeah, that is an interesting, that's a very interesting bit of research, Dave. I think it's very right. One thing I would say is, because I've been in the infrastructure business in Silicon Valley now for 30 years. So these ups and downs are, that's a consistent thing in our industry, and I always think people should think of their infrastructure and cost management. That's always an issue, with infrastructure as cost management. What I've told customers forever is that when you look at cost management, our best customers at cost management are typically service providers. There's another aspect to cost management, is you want to automate as much as possible. And automation goes along with vendor consolidation, because how you automate different products, you don't want to have too many vendors in your layers. And what I mean by the layers of ecosystem, there's a storage layer, the network layer, the compute layer, like, the security layer, database layer, et cetera. When you think like that, everybody should pick their partners very carefully, per layer. And one last thought on this is, it's not like people are dumb, and not trying to do this. It's, when you look at what happens in the real world, acquisitions happen, things change as you go. And in these big customers, that's just normal, that things change. But you always have to have this push towards consolidating and picking your vendors very carefully. >> Also, just to follow up on that, I mean, you know, when you think about multi-cloud, and you mentioned, you know, you've got some big customers, they do a lot of M & A, it's kind of been multi-cloud by accident. "Oh, we got all these other tools and storage platforms and whatever it is." So where does NetApp fit in that whole consolidation equation? I'm thinking about, you know, cross-cloud services, which is a big VMware theme, thinking about a consistent experience, on-prem, hybrid, across the three big clouds, out to the edge. Where do you fit? >> So our view has been, and it was this view, and we extend it to the cloud, is that the data layer, so in our software, is called ONTAP, the data layer is a really important layer that provides a lot of efficiency. It only gets bigger, how you do compliance, how you do backup, DR, blah blah blah. All that data layer services needs to operate on-prem and on the clouds. So when you look at what we've done over the years, we've extended to all the clouds, our data layer. We've put controls, management tools, over the top, so that you can manage the entire data layer, on-prem and cloud, as one layer. And we're continuing to head down that path, 'cause we think that data layer is obviously the path to maximum ability to do compliance, maximum cost advantages, et cetera. So we've really been the company that set our sights on managing the data layer. Now, if you look at VMware, go up into the network layer, the compute layer, VMware is a great partner, and that's why we work with them so closely, is they're so perfect a fit for us, and they've been a great partner for 20 years for us, connecting those infrastructural data layers: compute, network, and storage. >> Well, just to stay on that for a second. I've seen recently, you kind of doubled down on your VMware alliance. You've got stuff at re:Invent I saw, with AWS, you're close to Azure, and I'm really talking about ONTAP, which is sort of an extension of what you were just talking about, Phil, which is, you know, it's kind of NetApp's storage operating system, if you will. It's a world class. But so, maybe talk about that relationship a little bit, and how you see it evolving. >> Well, so what we've been seeing consistently is, customers want to use the advantages of the cloud. So, point one. And when you have to completely refactor apps and all this stuff, it limits, it's friction. It limits what you can do, it raises costs. And what we did with VMware, VMware is this great platform for being able to run basically client-server apps on-prem and cloud, the exact same way. The problem is, when you have large data sets in the VMs, there's some cost issues and things, especially on the cloud. That drove us to work together, and do what we did. We GA-ed, we're the, so NetApp is the only independent storage, independent storage, say this right, independent storage platform certified to run with VMware cloud on Amazon. We GA-ed that last summer. We GA-ed with Azure, the Azure VMware service, a couple months ago. And you'll see news coming with GCP soon. And so the idea was, make it easy for customers to basically run in a hybrid model. And then if you back out and go, "What does that mean for you as a customer?", it's not saying you should go to the cloud, necessarily, or stay on-prem, or whatever. But it's giving you the flexibility to cost-optimize where you want to be. And from a data management point of view, ONTAP gives you the consistent data management, whichever way you decide to go. >> Yeah, so I've been following NetApp for decades, when you were Network Appliance, and I saw you go from kind of the workstation space into the enterprise. I saw you lean into virtualization really early on, and you've been a great VMware partner ever since. And you were early in cloud, so, sort of talking about, you know, that cross-cloud, what we call supercloud. I'm interested in what you're seeing in terms of specific actions that customers are taking. Like, I think about ELAs, and I think it's a two-edged sword. You know, should customers, you know, lean into ELAs right now? You know, what are you seeing there? You talked about, you know, sort of modernizing apps with things like Kubernetes, you know, cloud migration. What are some of the techniques that you're advising customers to take in the context of this acquisition? >> You know, so the basics of this are pretty easy. One is, and I think even Raghu, the CEO of VMware, has talked about this. Extending your ELA is probably a good idea. Like I said, customers love VMware, so having a commitment for a time, consistent cost management for a time is a good strategy. And I think that's why you're hearing ELA extensions being discussed. It's a good idea. The second part, and I think it goes to your surveys, that cost optimization point on the cloud is, moving to the cloud has huge advantages, but if you just kind of lift and shift, oftentimes the costs aren't realized the way you'd want. And the term "modernization," changing your app to use more Kubernetes, more cloud-native services, is often a consideration that goes into that. But that requires time. And you know, most companies have hundreds of apps, or thousands of apps, they have to consider modernizing. So you want to then think through the journey, what apps are going to move, what gets modernized, what gets lifted-shifted, how many data centers are you compressing? There's a lot of data center, the term I've been hearing is "data center evacuations," but data center consolidation. So that there's some even energy savings advantages sometimes with that. But the whole point, I mean, back up to my whole point, the whole point is having the infrastructure that gives you the flexibility to make the journey on your cost advantages and your business requirements. Not being forced to it. Like, it's not really a philosophy, it's more of a business optimization strategy. >> When you think about application modernization and Kubernetes, how does NetApp, you know, fit into that, as a data layer? >> Well, so if you kind of think, you said, like our journey, Dave, was, when we started our life, we were doing basically virtualization of volumes and things for technical customers. And the servers were always bare metal servers that we got involved with back then. This is, like, going back 20 years. Then everyone moved to VMs, and, like, it's probably, today, I mean, getting to your question in a second, but today, loosely, 20% bare metal servers, 80% virtual machines today. And containers is growing, now a big growing piece. So, if you will, sort of another level of virtual machines in containers. And containers were historically stateless, meaning the storage didn't have anything to do. Storage is always the stateful area in the architectures. But as containers are getting used more, stateful containers have become a big deal. So we've put a lot of emphasis into a product line we call Astra that is the world's best data management for containers. And that's both a cloud service and used on-prem in a lot of my customers. It's a big growth area. So that's what, when I say, like, one partner that can do data management, just, that's what we have to do. We have to keep moving with our customers to the type of data they want to store, and how do you store it most efficiently? Hey, one last thought on this is, where I really see this happening, there's a booming business right now in artificial intelligence, and we call it modern data analytics, but people combining big data lakes with AI, and that's where some of this, a lot of the container work comes in. We've extended objects, we have a thing we call file-object duality, to make it easy to bridge the old world of files to the new world of objects. Those all go hand in hand with app modernization. >> Yeah, it's a great thing about this industry. It never sits still. And you're right, it's- >> It's why I'm in it. >> Me too. Yeah, it's so much fun. There's always something. >> It is an abstraction layer. There's always going to be another abstraction layer. Serverless is another example. It's, you know, primarily stateless, that's probably going to, you know, change over time. All right, last question. In thinking about this Broadcom acquisition of VMware, in the macro climate, put a sort of bow on where NetApp fits into this equation. What's the value you bring in this context? >> Oh yeah, well it's like I said earlier, I think it's the data layer of, it's being the data layer that gives you what you guys call the supercloud, that gives you the ability to choose which cloud. Another thing, all customers are running at least two clouds, and you want to be able to pick and choose, and do it your way. So being the data layer, VMware is going to be in our infrastructures for at least as long as I'm in the computer business, Dave. I'm getting a little old. So maybe, you know, but "decades" I think is an easy prediction, and we plan to work with VMware very closely, along with our customers, as they extend from on-prem to hybrid cloud operations. That's where I think this will go. >> Yeah, and I think you're absolutely right. Look at the business case for migrating off of VMware. It just doesn't make sense. It works, it's world class, it recover... They've done so much amazing, you know, they used to be called, Moritz called it the software mainframe, right? And that's kind of what it is. I mean, it means it doesn't go down, right? And it supports virtually any application, you know, around the world, so. >> And I think getting back to your original point about your article, from the very beginning, is, I think Broadcom's really getting a sense of what they've bought, and it's going to be, hopefully, I think it'll be really a fun, another fun era in our business. >> Well, and you can drive EBIT a couple of ways. You can cut, okay, fine. And I'm sure there's some redundancies that they'll find. But there's also, you can drive top-line revenue. And you know, we've seen how, you know, EMC and then Dell used that growth from VMware to throw off free cash flow, and it was just, you know, funded so much, you know, innovation. So innovation is the key. Hock Tan has talked about that a lot. I think there's a perception that Broadcom, you know, doesn't invest in R & D. That's not true. I think they just get very focused with that investment. So, Phil, I really appreciate your time. Thanks so much for joining us. >> Thanks a lot, Dave. It's fun being here. >> Yeah, our pleasure. And thank you for watching theCUBE, your leader in enterprise and emerging tech coverage. (upbeat music)

Published Date : Jan 31 2023

SUMMARY :

Good to see you again. the industry generally, There's a lot you can do I wrote a piece, you know, and how do I balance that out? a lot of talk about the macro, is that when you look at cost management, and you mentioned, you know, so that you can manage and how you see it evolving. to cost-optimize where you want to be. and I saw you go from kind And you know, and how do you store it most efficiently? And you're right, it's- Yeah, it's so much fun. What's the value you and you want to be able They've done so much amazing, you know, and it's going to be, and it was just, you know, Thanks a lot, Dave. And thank you for watching theCUBE,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

DavePERSON

0.99+

PhilPERSON

0.99+

Phil BrothertonPERSON

0.99+

DellORGANIZATION

0.99+

80%QUANTITY

0.99+

VMwareORGANIZATION

0.99+

AWSORGANIZATION

0.99+

thousandsQUANTITY

0.99+

20 yearsQUANTITY

0.99+

Phil BrothertonPERSON

0.99+

BroadcomORGANIZATION

0.99+

20%QUANTITY

0.99+

30 yearsQUANTITY

0.99+

AmazonORGANIZATION

0.99+

Silicon ValleyLOCATION

0.99+

$61 billionQUANTITY

0.99+

RaghuPERSON

0.99+

NetAppORGANIZATION

0.99+

second partQUANTITY

0.99+

1500QUANTITY

0.99+

one layerQUANTITY

0.99+

EMCORGANIZATION

0.99+

Hock TanPERSON

0.99+

todayDATE

0.98+

hundreds of appsQUANTITY

0.98+

NetAppTITLE

0.98+

OneQUANTITY

0.98+

bothQUANTITY

0.98+

secondQUANTITY

0.97+

ETRORGANIZATION

0.97+

Breaking Analysis: Supercloud2 Explores Cloud Practitioner Realities & the Future of Data Apps


 

>> Narrator: From theCUBE Studios in Palo Alto and Boston bringing you data-driven insights from theCUBE and ETR. This is breaking analysis with Dave Vellante >> Enterprise tech practitioners, like most of us they want to make their lives easier so they can focus on delivering more value to their businesses. And to do so, they want to tap best of breed services in the public cloud, but at the same time connect their on-prem intellectual property to emerging applications which drive top line revenue and bottom line profits. But creating a consistent experience across clouds and on-prem estates has been an elusive capability for most organizations, forcing trade-offs and injecting friction into the system. The need to create seamless experiences is clear and the technology industry is starting to respond with platforms, architectures, and visions of what we've called the Supercloud. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis we give you a preview of Supercloud 2, the second event of its kind that we've had on the topic. Yes, folks that's right Supercloud 2 is here. As of this recording, it's just about four days away 33 guests, 21 sessions, combining live discussions and fireside chats from theCUBE's Palo Alto Studio with prerecorded conversations on the future of cloud and data. You can register for free at supercloud.world. And we are super excited about the Supercloud 2 lineup of guests whereas Supercloud 22 in August, was all about refining the definition of Supercloud testing its technical feasibility and understanding various deployment models. Supercloud 2 features practitioners, technologists and analysts discussing what customers need with real-world examples of Supercloud and will expose thinking around a new breed of cross-cloud apps, data apps, if you will that change the way machines and humans interact with each other. Now the example we'd use if you think about applications today, say a CRM system, sales reps, what are they doing? They're entering data into opportunities they're choosing products they're importing contacts, et cetera. And sure the machine can then take all that data and spit out a forecast by rep, by region, by product, et cetera. But today's applications are largely about filling in forms and or codifying processes. In the future, the Supercloud community sees a new breed of applications emerging where data resides on different clouds, in different data storages, databases, Lakehouse, et cetera. And the machine uses AI to inspect the e-commerce system the inventory data, supply chain information and other systems, and puts together a plan without any human intervention whatsoever. Think about a system that orchestrates people, places and things like an Uber for business. So at Supercloud 2, you'll hear about this vision along with some of today's challenges facing practitioners. Zhamak Dehghani, the founder of Data Mesh is a headliner. Kit Colbert also is headlining. He laid out at the first Supercloud an initial architecture for what that's going to look like. That was last August. And he's going to present his most current thinking on the topic. Veronika Durgin of Sachs will be featured and talk about data sharing across clouds and you know what she needs in the future. One of the main highlights of Supercloud 2 is a dive into Walmart's Supercloud. Other featured practitioners include Western Union Ionis Pharmaceuticals, Warner Media. We've got deep, deep technology dives with folks like Bob Muglia, David Flynn Tristan Handy of DBT Labs, Nir Zuk, the founder of Palo Alto Networks focused on security. Thomas Hazel, who's going to talk about a new type of database for Supercloud. It's several analysts including Keith Townsend Maribel Lopez, George Gilbert, Sanjeev Mohan and so many more guests, we don't have time to list them all. They're all up on supercloud.world with a full agenda, so you can check that out. Now let's take a look at some of the things that we're exploring in more detail starting with the Walmart Cloud native platform, they call it WCNP. We definitely see this as a Supercloud and we dig into it with Jack Greenfield. He's the head of architecture at Walmart. Here's a quote from Jack. "WCNP is an implementation of Kubernetes for the Walmart ecosystem. We've taken Kubernetes off the shelf as open source." By the way, they do the same thing with OpenStack. "And we have integrated it with a number of foundational services that provide other aspects of our computational environment. Kubernetes off the shelf doesn't do everything." And so what Walmart chose to do, they took a do-it-yourself approach to build a Supercloud for a variety of reasons that Jack will explain, along with Walmart's so-called triplet architecture connecting on-prem, Azure and GCP. No surprise, there's no Amazon at Walmart for obvious reasons. And what they do is they create a common experience for devs across clouds. Jack is going to talk about how Walmart is evolving its Supercloud in the future. You don't want to miss that. Now, next, let's take a look at how Veronica Durgin of SAKS thinks about data sharing across clouds. Data sharing we think is a potential killer use case for Supercloud. In fact, let's hear it in Veronica's own words. Please play the clip. >> How do we talk to each other? And more importantly, how do we data share? You know, I work with data, you know this is what I do. So if you know I want to get data from a company that's using, say Google, how do we share it in a smooth way where it doesn't have to be this crazy I don't know, SFTP file moving? So that's where I think Supercloud comes to me in my mind, is like practical applications. How do we create that mesh, that network that we can easily share data with each other? >> Now data mesh is a possible architectural approach that will enable more facile data sharing and the monetization of data products. You'll hear Zhamak Dehghani live in studio talking about what standards are missing to make this vision a reality across the Supercloud. Now one of the other things that we're really excited about is digging deeper into the right approach for Supercloud adoption. And we're going to share a preview of a debate that's going on right now in the community. Bob Muglia, former CEO of Snowflake and Microsoft Exec was kind enough to spend some time looking at the community's supercloud definition and he felt that it needed to be simplified. So in near real time he came up with the following definition that we're showing here. I'll read it. "A Supercloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers." So not only did Bob simplify the initial definition he's stressed that the Supercloud is a platform versus an architecture implying that the platform provider eg Snowflake, VMware, Databricks, Cohesity, et cetera is responsible for determining the architecture. Now interestingly in the shared Google doc that the working group uses to collaborate on the supercloud de definition, Dr. Nelu Mihai who is actually building a Supercloud responded as follows to Bob's assertion "We need to avoid creating many Supercloud platforms with their own architectures. If we do that, then we create other proprietary clouds on top of existing ones. We need to define an architecture of how Supercloud interfaces with all other clouds. What is the information model? What is the execution model and how users will interact with Supercloud?" What does this seemingly nuanced point tell us and why does it matter? Well, history suggests that de facto standards will emerge more quickly to resolve real world practitioner problems and catch on more quickly than consensus-based architectures and standards-based architectures. But in the long run, the ladder may serve customers better. So we'll be exploring this topic in more detail in Supercloud 2, and of course we'd love to hear what you think platform, architecture, both? Now one of the real technical gurus that we'll have in studio at Supercloud two is David Flynn. He's one of the people behind the the movement that enabled enterprise flash adoption, that craze. And he did that with Fusion IO and he is now working on a system to enable read write data access to any user in any application in any data center or on any cloud anywhere. So think of this company as a Supercloud enabler. Allow me to share an excerpt from a conversation David Flore and I had with David Flynn last year. He as well gave a lot of thought to the Supercloud definition and was really helpful with an opinionated point of view. He said something to us that was, we thought relevant. "What is the operating system for a decentralized cloud? The main two functions of an operating system or an operating environment are one the process scheduler and two, the file system. The strongest argument for supercloud is made when you go down to the platform layer and talk about it as an operating environment on which you can run all forms of applications." So a couple of implications here that will be exploring with David Flynn in studio. First we're inferring from his comment that he's in the platform camp where the platform owner is responsible for the architecture and there are obviously trade-offs there and benefits but we'll have to clarify that with him. And second, he's basically saying, you kill the concept the further you move up the stack. So the weak, the further you move the stack the weaker the supercloud argument becomes because it's just becoming SaaS. Now this is something we're going to explore to better understand is thinking on this, but also whether the existing notion of SaaS is changing and whether or not a new breed of Supercloud apps will emerge. Which brings us to this really interesting fellow that George Gilbert and I RIFed with ahead of Supercloud two. Tristan Handy, he's the founder and CEO of DBT Labs and he has a highly opinionated and technical mind. Here's what he said, "One of the things that we still don't know how to API-ify is concepts that live inside of your data warehouse inside of your data lake. These are core concepts that the business should be able to create applications around very easily. In fact, that's not the case because it involves a lot of data engineering pipeline and other work to make these available. So if you really want to make it easy to create these data experiences for users you need to have an ability to describe these metrics and then to turn them into APIs to make them accessible to application developers who have literally no idea how they're calculated behind the scenes and they don't need to." A lot of implications to this statement that will explore at Supercloud two versus Jamma Dani's data mesh comes into play here with her critique of hyper specialized data pipeline experts with little or no domain knowledge. Also the need for simplified self-service infrastructure which Kit Colbert is likely going to touch upon. Veronica Durgin of SAKS and her ideal state for data shearing along with Harveer Singh of Western Union. They got to deal with 200 locations around the world in data privacy issues, data sovereignty how do you share data safely? Same with Nick Taylor of Ionis Pharmaceutical. And not to blow your mind but Thomas Hazel and Bob Muglia deposit that to make data apps a reality across the Supercloud you have to rethink everything. You can't just let in memory databases and caching architectures take care of everything in a brute force manner. Rather you have to get down to really detailed levels even things like how data is laid out on disk, ie flash and think about rewriting applications for the Supercloud and the MLAI era. All of this and more at Supercloud two which wouldn't be complete without some data. So we pinged our friends from ETR Eric Bradley and Darren Bramberm to see if they had any data on Supercloud that we could tap. And so we're going to be analyzing a number of the players as well at Supercloud two. Now, many of you are familiar with this graphic here we show some of the players involved in delivering or enabling Supercloud-like capabilities. On the Y axis is spending momentum and on the horizontal accesses market presence or pervasiveness in the data. So netscore versus what they call overlap or end in the data. And the table insert shows how the dots are plotted now not to steal ETR's thunder but the first point is you really can't have supercloud without the hyperscale cloud platforms which is shown on this graphic. But the exciting aspect of Supercloud is the opportunity to build value on top of that hyperscale infrastructure. Snowflake here continues to show strong spending velocity as those Databricks, Hashi, Rubrik. VMware Tanzu, which we all put under the magnifying glass after the Broadcom announcements, is also showing momentum. Unfortunately due to a scheduling conflict we weren't able to get Red Hat on the program but they're clearly a player here. And we've put Cohesity and Veeam on the chart as well because backup is a likely use case across clouds and on-premises. And now one other call out that we drill down on at Supercloud two is CloudFlare, which actually uses the term supercloud maybe in a different way. They look at Supercloud really as you know, serverless on steroids. And so the data brains at ETR will have more to say on this topic at Supercloud two along with many others. Okay, so why should you attend Supercloud two? What's in it for me kind of thing? So first of all, if you're a practitioner and you want to understand what the possibilities are for doing cross-cloud services for monetizing data how your peers are doing data sharing, how some of your peers are actually building out a Supercloud you're going to get real world input from practitioners. If you're a technologist, you're trying to figure out various ways to solve problems around data, data sharing, cross-cloud service deployment there's going to be a number of deep technology experts that are going to share how they're doing it. We're also going to drill down with Walmart into a practical example of Supercloud with some other examples of how practitioners are dealing with cross-cloud complexity. Some of them, by the way, are kind of thrown up their hands and saying, Hey, we're going mono cloud. And we'll talk about the potential implications and dangers and risks of doing that. And also some of the benefits. You know, there's a question, right? Is Supercloud the same wine new bottle or is it truly something different that can drive substantive business value? So look, go to Supercloud.world it's January 17th at 9:00 AM Pacific. You can register for free and participate directly in the program. Okay, that's a wrap. I want to give a shout out to the Supercloud supporters. VMware has been a great partner as our anchor sponsor Chaos Search Proximo, and Alura as well. For contributing to the effort I want to thank Alex Myerson who's on production and manages the podcast. Ken Schiffman is his supporting cast as well. Kristen Martin and Cheryl Knight to help get the word out on social media and at our newsletters. And Rob Ho is our editor-in-chief over at Silicon Angle. Thank you all. Remember, these episodes are all available as podcast. Wherever you listen we really appreciate the support that you've given. We just saw some stats from from Buzz Sprout, we hit the top 25% we're almost at 400,000 downloads last year. So really appreciate your participation. All you got to do is search Breaking Analysis podcast and you'll find those I publish each week on wikibon.com and siliconangle.com. Or if you want to get ahold of me you can email me directly at David.Vellante@siliconangle.com or dm me DVellante or comment on our LinkedIn post. I want you to check out etr.ai. They've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. We'll see you next week at Supercloud two or next time on breaking analysis. (light music)

Published Date : Jan 14 2023

SUMMARY :

with Dave Vellante of the things that we're So if you know I want to get data and on the horizontal

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Bob MugliaPERSON

0.99+

Alex MyersonPERSON

0.99+

Cheryl KnightPERSON

0.99+

David FlynnPERSON

0.99+

VeronicaPERSON

0.99+

JackPERSON

0.99+

Nelu MihaiPERSON

0.99+

Zhamak DehghaniPERSON

0.99+

Thomas HazelPERSON

0.99+

Nick TaylorPERSON

0.99+

Dave VellantePERSON

0.99+

Jack GreenfieldPERSON

0.99+

Kristen MartinPERSON

0.99+

Ken SchiffmanPERSON

0.99+

Veronica DurginPERSON

0.99+

WalmartORGANIZATION

0.99+

Rob HoPERSON

0.99+

Warner MediaORGANIZATION

0.99+

Tristan HandyPERSON

0.99+

Veronika DurginPERSON

0.99+

George GilbertPERSON

0.99+

Ionis PharmaceuticalORGANIZATION

0.99+

George GilbertPERSON

0.99+

Bob MugliaPERSON

0.99+

David FlorePERSON

0.99+

DBT LabsORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

BobPERSON

0.99+

Palo AltoLOCATION

0.99+

21 sessionsQUANTITY

0.99+

Darren BrambermPERSON

0.99+

33 guestsQUANTITY

0.99+

Nir ZukPERSON

0.99+

BostonLOCATION

0.99+

AmazonORGANIZATION

0.99+

Harveer SinghPERSON

0.99+

Kit ColbertPERSON

0.99+

DatabricksORGANIZATION

0.99+

Sanjeev MohanPERSON

0.99+

Supercloud 2TITLE

0.99+

SnowflakeORGANIZATION

0.99+

last yearDATE

0.99+

Western UnionORGANIZATION

0.99+

CohesityORGANIZATION

0.99+

SupercloudORGANIZATION

0.99+

200 locationsQUANTITY

0.99+

AugustDATE

0.99+

Keith TownsendPERSON

0.99+

Data MeshORGANIZATION

0.99+

Palo Alto NetworksORGANIZATION

0.99+

David.Vellante@siliconangle.comOTHER

0.99+

next weekDATE

0.99+

bothQUANTITY

0.99+

oneQUANTITY

0.99+

secondQUANTITY

0.99+

first pointQUANTITY

0.99+

OneQUANTITY

0.99+

FirstQUANTITY

0.99+

VMwareORGANIZATION

0.98+

Silicon AngleORGANIZATION

0.98+

ETRORGANIZATION

0.98+

Eric BradleyPERSON

0.98+

twoQUANTITY

0.98+

todayDATE

0.98+

SachsORGANIZATION

0.98+

SAKSORGANIZATION

0.98+

SupercloudEVENT

0.98+

last AugustDATE

0.98+

each weekQUANTITY

0.98+

Why Should Customers Care About SuperCloud


 

Hello and welcome back to Supercloud 2 where we examine the intersection of cloud and data in the 2020s. My name is Dave Vellante. Our Supercloud panel, our power panel is back. Maribel Lopez is the founder and principal analyst at Lopez Research. Sanjeev Mohan is former Gartner analyst and principal at Sanjeev Mohan. And Keith Townsend is the CTO advisor. Folks, welcome back and thanks for your participation today. Good to see you. >> Okay, great. >> Great to see you. >> Thanks. Let me start, Maribel, with you. Bob Muglia, we had a conversation as part of Supercloud the other day. And he said, "Dave, I like the work, you got to simplify this a little bit." So he said, quote, "A Supercloud is a platform." He said, "Think of it as a platform that provides programmatically consistent services hosted on heterogeneous cloud providers." And then Nelu Mihai said, "Well, wait a minute. This is just going to create more stove pipes. We need more standards in an architecture," which is kind of what Berkeley Sky Computing initiative is all about. So there's a sort of a debate going on. Is supercloud an architecture, a platform? Or maybe it's just another buzzword. Maribel, do you have a thought on this? >> Well, the easy answer would be to say it's just a buzzword. And then we could just kill the conversation and be done with it. But I think the term, it's more than that, right? The term actually isn't new. You can go back to at least 2016 and find references to supercloud in Cornell University or assist in other documents. So, having said this, I think we've been talking about Supercloud for a while, so I assume it's more than just a fancy buzzword. But I think it really speaks to that undeniable trend of moving towards an abstraction layer to deal with the chaos of what we consider managing multiple public and private clouds today, right? So one definition of the technology platform speaks to a set of services that allows companies to build and run that technology smoothly without worrying about the underlying infrastructure, which really gets back to something that Bob said. And some of the question is where that lives. And you could call that an abstraction layer. You could call it cross-cloud services, hybrid cloud management. So I see momentum there, like legitimate momentum with enterprise IT buyers that are trying to deal with the fact that they have multiple clouds now. So where I think we're moving is trying to define what are the specific attributes and frameworks of that that would make it so that it could be consistent across clouds. What is that layer? And maybe that's what the supercloud is. But one of the things I struggle with with supercloud is. What are we really trying to do here? Are we trying to create differentiated services in the supercloud layer? Is a supercloud just another variant of what AWS, GCP, or others do? You spoken to Walmart about its cloud native platform, and that's an example of somebody deciding to do it themselves because they need to deal with this today and not wait for some big standards thing to happen. So whatever it is, I do think it's something. I think we're trying to maybe create an architecture out of it would be a better way of saying it so that it does get to those set of principles, but it also needs to be edge aware. I think whenever we talk about supercloud, we're always talking about like the big centralized cloud. And I think we need to think about all the distributed clouds that we're looking at in edge as well. So that might be one of the ways that supercloud evolves. >> So thank you, Maribel. Keith, Brian Gracely, Gracely's law, things kind of repeat themselves. We've seen it all before. And so what Muglia brought to the forefront is this idea of a platform where the platform provider is really responsible for the architecture. Of course, the drawback is then you get a a bunch of stove pipes architectures. But practically speaking, that's kind of the way the industry has always evolved, right? >> So if we look at this from the practitioner's perspective and we talk about platforms, traditionally vendors have provided the platforms for us, whether it's distribution of lineage managed by or provided by Red Hat, Windows, servers, .NET, databases, Oracle. We think of those as platforms, things that are fundamental we can build on top. Supercloud isn't today that. It is a framework or idea, kind of a visionary goal to get to a point that we can have a platform or a framework. But what we're seeing repeated throughout the industry in customers, whether it's the Walmarts that's kind of supersized the idea of supercloud, or if it's regular end user organizations that are coming out with platform groups, groups who normalize cloud native infrastructure, AWS multi-cloud, VMware resources to look like one thing internally to their developers. We're seeing this trend that there's a desire for a platform that provides the capabilities of a supercloud. >> Thank you for that. Sanjeev, we often use Snowflake as a supercloud example, and now would presumably would be a platform with an architecture that's determined by the vendor. Maybe Databricks is pushing for a more open architecture, maybe more of that nirvana that we were talking about before to solve for supercloud. But regardless, the practitioner discussions show. At least currently, there's not a lot of cross-cloud data sharing. I think it could be a killer use case, egress charges or a barrier. But how do you see it? Will that change? Will we hide that underlying complexity and start sharing data across cloud? Is that something that you think Snowflake or others will be able to achieve? >> So I think we are already starting to see some of that happen. Snowflake is definitely one example that gets cited a lot. But even we don't talk about MongoDB in this like, but you could have a MongoDB cluster, for instance, with nodes sitting in different cloud providers. So there are companies that are starting to do it. The advantage that these companies have, let's take Snowflake as an example, it's a centralized proprietary platform. And they are building the capabilities that are needed for supercloud. So they're building things like you can push down your data transformations. They have the entire security and privacy suite. Data ops, they're adding those capabilities. And if I'm not mistaken, it'll be very soon, we will see them offer data observability. So it's all works great as long as you are in one platform. And if you want resilience, then Snowflake, Supercloud, great example. But if your primary goal is to choose the most cost-effective service irrespective of which cloud it sits in, then things start falling sideways. For example, I may be a very big Snowflake user. And I like Snowflake's resilience. I can move from one cloud to another cloud. Snowflake does it for me. But what if I want to train a very large model? Maybe Databricks is a better platform for that. So how do I do move my workload from one platform to another platform? That tooling does not exist. So we need server hybrid, cross-cloud, data ops platform. Walmart has done a great job, but they built it by themselves. Not every company is Walmart. Like Maribel and Keith said, we need standards, we need reference architectures, we need some sort of a cost control. I was just reading recently, Accenture has been public about their AWS bill. Every time they get the bill is tens of millions of lines, tens of millions 'cause there are over thousand teams using AWS. If we have not been able to corral a usage of a single cloud, now we're talking about supercloud, we've got multiple clouds, and hybrid, on-prem, and edge. So till we've got some cross-platform tooling in place, I think this will still take quite some time for it to take shape. >> It's interesting. Maribel, Walmart would tell you that their on-prem infrastructure is cheaper to run than the stuff in the cloud. but at the same time, they want the flexibility and the resiliency of their three-legged stool model. So the point as Sanjeev was making about hybrid. It's an interesting balance, isn't it, between getting your lowest cost and at the same time having best of breed and scale? >> It's basically what you're trying to optimize for, as you said, right? And by the way, to the earlier point, not everybody is at Walmart's scale, so it's not actually cheaper for everybody to have the purchasing power to make the cloud cheaper to have it on-prem. But I think what you see almost every company, large or small, moving towards is this concept of like, where do I find the agility? And is the agility in building the infrastructure for me? And typically, the thing that gives you outside advantage as an organization is not how you constructed your cloud computing infrastructure. It might be how you structured your data analytics as an example, which cloud is related to that. But how do you marry those two things? And getting back to sort of Sanjeev's point. We're in a real struggle now where one hand we want to have best of breed services and on the other hand we want it to be really easy to manage, secure, do data governance. And those two things are really at odds with each other right now. So if you want all the knobs and switches of a service like geospatial analytics and big query, you're going to have to use Google tools, right? Whereas if you want visibility across all the clouds for your application of state and understand the security and governance of that, you're kind of looking for something that's more cross-cloud tooling at that point. But whenever you talk to somebody about cross-cloud tooling, they look at you like that's not really possible. So it's a very interesting time in the market. Now, we're kind of layering this concept of supercloud on it. And some people think supercloud's about basically multi-cloud tooling, and some people think it's about a whole new architectural stack. So we're just not there yet. But it's not all about cost. I mean, cloud has not been about cost for a very, very long time. Cloud has been about how do you really make the most of your data. And this gets back to cross-cloud services like Snowflake. Why did they even exist? They existed because we had data everywhere, but we need to treat data as a unified object so that we can analyze it and get insight from it. And so that's where some of the benefit of these cross-cloud services are moving today. Still a long way to go, though, Dave. >> Keith, I reached out to my friends at ETR given the macro headwinds, And you're right, Maribel, cloud hasn't really been about just about cost savings. But I reached out to the ETR, guys, what's your data show in terms of how customers are dealing with the economic headwinds? And they said, by far, their number one strategy to cut cost is consolidating redundant vendors. And a distant second, but still notable was optimizing cloud costs. Maybe using reserve instances, or using more volume buying. Nowhere in there. And I asked them to, "Could you go look and see if you can find it?" Do we see repatriation? And you hear this a lot. You hear people whispering as analysts, "You better look into that repatriation trend." It's pretty big. You can't find it. But some of the Walmarts in the world, maybe even not repatriating, but they maybe have better cost structure on-prem. Keith, what are you seeing from the practitioners that you talk to in terms of how they're dealing with these headwinds? >> Yeah, I just got into a conversation about this just this morning with (indistinct) who is an analyst over at GigaHome. He's reading the same headlines. Repatriation is happening at large scale. I think this is kind of, we have these quiet terms now. We have quiet quitting, we have quiet hiring. I think we have quiet repatriation. Most people haven't done away with their data centers. They're still there. Whether they're completely on-premises data centers, and they own assets, or they're partnerships with QTX, Equinix, et cetera, they have these private cloud resources. What I'm seeing practically is a rebalancing of workloads. Do I really need to pay AWS for this instance of SAP that's on 24 hours a day versus just having it on-prem, moving it back to my data center? I've talked to quite a few customers who were early on to moving their static SAP workloads onto the public cloud, and they simply moved them back. Surprising, I was at VMware Explore. And we can talk about this a little bit later on. But our customers, net new, not a lot that were born in the cloud. And they get to this point where their workloads are static. And they look at something like a Kubernetes, or a OpenShift, or VMware Tanzu. And they ask the question, "Do I need the scalability of cloud?" I might consider being a net new VMware customer to deliver this base capability. So are we seeing repatriation as the number one reason? No, I think internal IT operations are just naturally come to this realization. Hey, I have these resources on premises. The private cloud technologies have moved far along enough that I can just simply move this workload back. I'm not calling it repatriation, I'm calling it rightsizing for the operating model that I have. >> Makes sense. Yeah. >> Go ahead. >> If I missed something, Dave, why we are on this topic of repatriation. I'm actually surprised that we are talking about repatriation as a very big thing. I think repatriation is happening, no doubt, but it's such a small percentage of cloud migration that to me it's a rounding error in my opinion. I think there's a bigger problem. The problem is that people don't know where the cost is. If they knew where the cost was being wasted in the cloud, they could do something about it. But if you don't know, then the easy answer is cloud costs a lot and moving it back to on-premises. I mean, take like Capital One as an example. They got rid of all the data centers. Where are they going to repatriate to? They're all in the cloud at this point. So I think my point is that data observability is one of the places that has seen a lot of traction is because of cost. Data observability, when it first came into existence, it was all about data quality. Then it was all about data pipeline reliability. And now, the number one killer use case is FinOps. >> Maribel, you had a comment? >> Yeah, I'm kind of in violent agreement with both Sanjeev and Keith. So what are we seeing here? So the first thing that we see is that many people wildly overspent in the big public cloud. They had stranded cloud credits, so to speak. The second thing is, some of them still had infrastructure that was useful. So why not use it if you find the right workloads to what Keith was talking about, if they were more static workloads, if it was already there? So there is a balancing that's going on. And then I think fundamentally, from a trend standpoint, these things aren't binary. Everybody, for a while, everything was going to go to the public cloud and then people are like, "Oh, it's kind of expensive." Then they're like, "Oh no, they're going to bring it all on-prem 'cause it's really expensive." And it's like, "Well, that doesn't necessarily get me some of the new features and functionalities I might want for some of my new workloads." So I'm going to put the workloads that have a certain set of characteristics that require cloud in the cloud. And if I have enough capability on-prem and enough IT resources to manage certain things on site, then I'm going to do that there 'cause that's a more cost-effective thing for me to do. It's not binary. That's why we went to hybrid. And then we went to multi just to describe the fact that people added multiple public clouds. And now we're talking about super, right? So I don't look at it as a one-size-fits-all for any of this. >> A a number of practitioners leading up to Supercloud2 have told us that they're solving their cloud complexity by going in monocloud. So they're putting on the blinders. Even though across the organization, there's other groups using other clouds. You're like, "In my group, we use AWS, or my group, we use Azure. And those guys over there, they use Google. We just kind of keep it separate." Are you guys hearing this in your view? Is that risky? Are they missing out on some potential to tap best of breed? What do you guys think about that? >> Everybody thinks they're monocloud. Is anybody really monocloud? It's like a group is monocloud, right? >> Right. >> This genie is out of the bottle. We're not putting the genie back in the bottle. You might think your monocloud and you go like three doors down and figure out the guy or gal is on a fundamentally different cloud, running some analytics workload that you didn't know about. So, to Sanjeev's earlier point, they don't even know where their cloud spend is. So I think the concept of monocloud, how that's actually really realized by practitioners is primary and then secondary sources. So they have a primary cloud that they run most of their stuff on, and that they try to optimize. And we still have forked workloads. Somebody decides, "Okay, this SAP runs really well on this, or these analytics workloads run really well on that cloud." And maybe that's how they parse it. But if you really looked at it, there's very few companies, if you really peaked under the hood and did an analysis that you could find an actual monocloud structure. They just want to pull it back in and make it more manageable. And I respect that. You want to do what you can to try to streamline the complexity of that. >> Yeah, we're- >> Sorry, go ahead, Keith. >> Yeah, we're doing this thing where we review AWS service every day. Just in your inbox, learn about a new AWS service cursory. There's 238 AWS products just on the AWS cloud itself. Some of them are redundant, but you get the idea. So the concept of monocloud, I'm in filing agreement with Maribel on this that, yes, a group might say I want a primary cloud. And that primary cloud may be the AWS. But have you tried the licensed Oracle database on AWS? It is really tempting to license Oracle on Oracle Cloud, Microsoft on Microsoft. And I can't get RDS anywhere but Amazon. So while I'm driven to desire the simplicity, the reality is whether be it M&A, licensing, data sovereignty. I am forced into a multi-cloud management style. But I do agree most people kind of do this one, this primary cloud, secondary cloud. And I guarantee you're going to have a third cloud or a fourth cloud whether you want to or not via shadow IT, latency, technical reasons, et cetera. >> Thank you. Sanjeev, you had a comment? >> Yeah, so I just wanted to mention, as an organization, I'm complete agreement, no organization is monocloud, at least if it's a large organization. Large organizations use all kinds of combinations of cloud providers. But when you talk about a single workload, that's where the program arises. As Keith said, the 238 services in AWS. How in the world am I going to be an expert in AWS, but then say let me bring GCP or Azure into a single workload? And that's where I think we probably will still see monocloud as being predominant because the team has developed its expertise on a particular cloud provider, and they just don't have the time of the day to go learn yet another stack. However, there are some interesting things that are happening. For example, if you look at a multi-cloud example where Oracle and Microsoft Azure have that interconnect, so that's a beautiful thing that they've done because now in the newest iteration, it's literally a few clicks. And then behind the scene, your .NET application and your Oracle database in OCI will be configured, the identities in active directory are federated. And you can just start using a database in one cloud, which is OCI, and an application, your .NET in Azure. So till we see this kind of a solution coming out of the providers, I think it's is unrealistic to expect the end users to be able to figure out multiple clouds. >> Well, I have to share with you. I can't remember if he said this on camera or if it was off camera so I'll hold off. I won't tell you who it is, but this individual was sort of complaining a little bit saying, "With AWS, I can take their best AI tools like SageMaker and I can run them on my Snowflake." He said, "I can't do that in Google. Google forces me to go to BigQuery if I want their excellent AI tools." So he was sort of pushing, kind of tweaking a little bit. Some of the vendor talked that, "Oh yeah, we're so customer-focused." Not to pick on Google, but I mean everybody will say that. And then you say, "If you're so customer-focused, why wouldn't you do a ABC?" So it's going to be interesting to see who leads that integration and how broadly it's applied. But I digress. Keith, at our first supercloud event, that was on August 9th. And it was only a few months after Broadcom announced the VMware acquisition. A lot of people, myself included said, "All right, cuts are coming." Generally, Tanzu is probably going to be under the radar, but it's Supercloud 22 and presumably VMware Explore, the company really... Well, certainly the US touted its Tanzu capabilities. I wasn't at VMware Explore Europe, but I bet you heard similar things. Hawk Tan has been blogging and very vocal about cross-cloud services and multi-cloud, which doesn't happen without Tanzu. So what did you hear, Keith, in Europe? What's your latest thinking on VMware's prospects in cross-cloud services/supercloud? >> So I think our friend and Cube, along host still be even more offended at this statement than he was when I sat in the Cube. This was maybe five years ago. There's no company better suited to help industries or companies, cross-cloud chasm than VMware. That's not a compliment. That's a reality of the industry. This is a very difficult, almost intractable problem. What I heard that VMware Europe were customers serious about this problem, even more so than the US data sovereignty is a real problem in the EU. Try being a company in Switzerland and having the Swiss data solvency issues. And there's no local cloud presence there large enough to accommodate your data needs. They had very serious questions about this. I talked to open source project leaders. Open source project leaders were asking me, why should I use the public cloud to host Kubernetes-based workloads, my projects that are building around Kubernetes, and the CNCF infrastructure? Why should I use AWS, Google, or even Azure to host these projects when that's undifferentiated? I know how to run Kubernetes, so why not run it on-premises? I don't want to deal with the hardware problems. So again, really great questions. And then there was always the specter of the problem, I think, we all had with the acquisition of VMware by Broadcom potentially. 4.5 billion in increased profitability in three years is a unbelievable amount of money when you look at the size of the problem. So a lot of the conversation in Europe was about industry at large. How do we do what regulators are asking us to do in a practical way from a true technology sense? Is VMware cross-cloud great? >> Yeah. So, VMware, obviously, to your point. OpenStack is another way of it. Actually, OpenStack, uptake is still alive and well, especially in those regions where there may not be a public cloud, or there's public policy dictating that. Walmart's using OpenStack. As you know in IT, some things never die. Question for Sanjeev. And it relates to this new breed of data apps. And Bob Muglia and Tristan Handy from DBT Labs who are participating in this program really got us thinking about this. You got data that resides in different clouds, it maybe even on-prem. And the machine polls data from different systems. No humans involved, e-commerce, ERP, et cetera. It creates a plan, outcomes. No human involvement. Today, you're on a CRM system, you're inputting, you're doing forms, you're, you're automating processes. We're talking about a new breed of apps. What are your thoughts on this? Is it real? Is it just way off in the distance? How does machine intelligence fit in? And how does supercloud fit? >> So great point. In fact, the data apps that you're talking about, I call them data products. Data products first came into limelight in the last couple of years when Jamal Duggan started talking about data mesh. I am taking data products out of the data mesh concept because data mesh, whether data mesh happens or not is analogous to data products. Data products, basically, are taking a product management view of bringing data from different sources based on what the consumer needs. We were talking earlier today about maybe it's my vacation rentals, or it may be a retail data product, it may be an investment data product. So it's a pre-packaged extraction of data from different sources. But now I have a product that has a whole lifecycle. I can version it. I have new features that get added. And it's a very business data consumer centric. It uses machine learning. For instance, I may be able to tell whether this data product has stale data. Who is using that data? Based on the usage of the data, I may have a new data products that get allocated. I may even have the ability to take existing data products, mash them up into something that I need. So if I'm going to have that kind of power to create a data product, then having a common substrate underneath, it can be very useful. And that could be supercloud where I am making API calls. I don't care where the ERP, the CRM, the survey data, the pricing engine where they sit. For me, there's a logical abstraction. And then I'm building my data product on top of that. So I see a new breed of data products coming out. To answer your question, how early we are or is this even possible? My prediction is that in 2023, we will start seeing more of data products. And then it'll take maybe two to three years for data products to become mainstream. But it's starting this year. >> A subprime mortgages were a data product, definitely were humans involved. All right, let's talk about some of the supercloud, multi-cloud players and what their future looks like. You can kind of pick your favorites. VMware, Snowflake, Databricks, Red Hat, Cisco, Dell, HP, Hashi, IBM, CloudFlare. There's many others. cohesive rubric. Keith, I wanted to start with CloudFlare because they actually use the term supercloud. and just simplifying what they said. They look at it as taking serverless to the max. You write your code and then you can deploy it in seconds worldwide, of course, across the CloudFlare infrastructure. You don't have to spin up containers, you don't go to provision instances. CloudFlare worries about all that infrastructure. What are your thoughts on CloudFlare this approach and their chances to disrupt the current cloud landscape? >> As Larry Ellison said famously once before, the network is the computer, right? I thought that was Scott McNeley. >> It wasn't Scott McNeley. I knew it was on Oracle Align. >> Oracle owns that now, owns that line. >> By purpose or acquisition. >> They should have just called it cloud. >> Yeah, they should have just called it cloud. >> Easier. >> Get ahead. >> But if you think about the CloudFlare capability, CloudFlare in its own right is becoming a decent sized cloud provider. If you have compute out at the edge, when we talk about edge in the sense of CloudFlare and points of presence, literally across the globe, you have all of this excess computer, what do you do with it? First offering, let's disrupt data in the cloud. We can't start the conversation talking about data. When they say we're going to give you object-oriented or object storage in the cloud without egress charges, that's disruptive. That we can start to think about supercloud capability of having compute EC2 run in AWS, pushing and pulling data from CloudFlare. And now, I've disrupted this roach motel data structure, and that I'm freely giving away bandwidth, basically. Well, the next layer is not that much more difficult. And I think part of CloudFlare's serverless approach or supercloud approaches so that they don't have to commit to a certain type of compute. It is advantageous. It is a feature for me to be able to go to EC2 and pick a memory heavy model, or a compute heavy model, or a network heavy model, CloudFlare is taken away those knobs. and I'm just giving code and allowing that to run. CloudFlare has a massive network. If I can put the code closest using the CloudFlare workers, if I can put that code closest to where the data is at or residing, super compelling observation. The question is, does it scale? I don't get the 238 services. While Server List is great, I have to know what I'm going to build. I don't have a Cognito, or RDS, or all these other services that make AWS, GCP, and Azure appealing from a builder's perspective. So it is a very interesting nascent start. It's great because now they can hide compute. If they don't have the capacity, they can outsource that maybe at a cost to one of the other cloud providers, but kind of hiding the compute behind the surplus architecture is a really unique approach. >> Yeah. And they're dipping their toe in the water. And they've announced an object store and a database platform and more to come. We got to wrap. So I wonder, Sanjeev and Maribel, if you could maybe pick some of your favorites from a competitive standpoint. Sanjeev, I felt like just watching Snowflake, I said, okay, in my opinion, they had the right strategy, which was to run on all the clouds, and then try to create that abstraction layer and data sharing across clouds. Even though, let's face it, most of it might be happening across regions if it's happening, but certainly outside of an individual account. But I felt like just observing them that anybody who's traditional on-prem player moving into the clouds or anybody who's a cloud native, it just makes total sense to write to the various clouds. And to the extent that you can simplify that for users, it seems to be a logical strategy. Maybe as I said before, what multi-cloud should have been. But are there companies that you're watching that you think are ahead in the game , or ones that you think are a good model for the future? >> Yes, Snowflake, definitely. In fact, one of the things we have not touched upon very much, and Keith mentioned a little bit, was data sovereignty. Data residency rules can require that certain data should be written into certain region of a certain cloud. And if my cloud provider can abstract that or my database provider, then that's perfect for me. So right now, I see Snowflake is way ahead of this pack. I would not put MongoDB too far behind. They don't really talk about this thing. They are in a different space, but now they have a lakehouse, and they've got all of these other SQL access and new capabilities that they're announcing. So I think they would be quite good with that. Oracle is always a dark forest. Oracle seems to have revived its Cloud Mojo to some extent. And it's doing some interesting stuff. Databricks is the other one. I have not seen Databricks. They've been very focused on lakehouse, unity, data catalog, and some of those pieces. But they would be the obvious challenger. And if they come into this space of supercloud, then they may bring some open source technologies that others can rely on like Delta Lake as a table format. >> Yeah. One of these infrastructure players, Dell, HPE, Cisco, even IBM. I mean, I would be making my infrastructure as programmable and cloud friendly as possible. That seems like table stakes. But Maribel, any companies that stand out to you that we should be paying attention to? >> Well, we already mentioned a bunch of them, so maybe I'll go a slightly different route. I'm watching two companies pretty closely to see what kind of traction they get in their established companies. One we already talked about, which is VMware. And the thing that's interesting about VMware is they're everywhere. And they also have the benefit of having a foot in both camps. If you want to do it the old way, the way you've always done it with VMware, they got all that going on. If you want to try to do a more cross-cloud, multi-cloud native style thing, they're really trying to build tools for that. So I think they have really good access to buyers. And that's one of the reasons why I'm interested in them to see how they progress. The other thing, I think, could be a sleeping horse oddly enough is Google Cloud. They've spent a lot of work and time on Anthos. They really need to create a certain set of differentiators. Well, it's not necessarily in their best interest to be the best multi-cloud player. If they decide that they want to differentiate on a different layer of the stack, let's say they want to be like the person that is really transformative, they talk about transformation cloud with analytics workloads, then maybe they do spend a good deal of time trying to help people abstract all of the other underlying infrastructure and make sure that they get the sexiest, most meaningful workloads into their cloud. So those are two people that you might not have expected me to go with, but I think it's interesting to see not just on the things that might be considered, either startups or more established independent companies, but how some of the traditional providers are trying to reinvent themselves as well. >> I'm glad you brought that up because if you think about what Google's done with Kubernetes. I mean, would Google even be relevant in the cloud without Kubernetes? I could argue both sides of that. But it was quite a gift to the industry. And there's a motivation there to do something unique and different from maybe the other cloud providers. And I'd throw in Red Hat as well. They're obviously a key player and Kubernetes. And Hashi Corp seems to be becoming the standard for application deployment, and terraform, or cross-clouds, and there are many, many others. I know we're leaving lots out, but we're out of time. Folks, I got to thank you so much for your insights and your participation in Supercloud2. Really appreciate it. >> Thank you. >> Thank you. >> Thank you. >> This is Dave Vellante for John Furrier and the entire Cube community. Keep it right there for more content from Supercloud2.

Published Date : Jan 10 2023

SUMMARY :

And Keith Townsend is the CTO advisor. And he said, "Dave, I like the work, So that might be one of the that's kind of the way the that we can have a Is that something that you think Snowflake that are starting to do it. and the resiliency of their and on the other hand we want it But I reached out to the ETR, guys, And they get to this point Yeah. that to me it's a rounding So the first thing that we see is to Supercloud2 have told us Is anybody really monocloud? and that they try to optimize. And that primary cloud may be the AWS. Sanjeev, you had a comment? of a solution coming out of the providers, So it's going to be interesting So a lot of the conversation And it relates to this So if I'm going to have that kind of power and their chances to disrupt the network is the computer, right? I knew it was on Oracle Align. Oracle owns that now, Yeah, they should have so that they don't have to commit And to the extent that you And if my cloud provider can abstract that that stand out to you And that's one of the reasons Folks, I got to thank you and the entire Cube community.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
KeithPERSON

0.99+

Dave VellantePERSON

0.99+

Jamal DugganPERSON

0.99+

Nelu MihaiPERSON

0.99+

IBMORGANIZATION

0.99+

MaribelPERSON

0.99+

Bob MugliaPERSON

0.99+

CiscoORGANIZATION

0.99+

DellORGANIZATION

0.99+

EuropeLOCATION

0.99+

OracleORGANIZATION

0.99+

Tristan HandyPERSON

0.99+

Keith TownsendPERSON

0.99+

Larry EllisonPERSON

0.99+

Brian GracelyPERSON

0.99+

BobPERSON

0.99+

HPORGANIZATION

0.99+

AWSORGANIZATION

0.99+

EquinixORGANIZATION

0.99+

QTXORGANIZATION

0.99+

WalmartORGANIZATION

0.99+

Maribel LopezPERSON

0.99+

August 9thDATE

0.99+

DavePERSON

0.99+

GracelyPERSON

0.99+

AmazonORGANIZATION

0.99+

WalmartsORGANIZATION

0.99+

Red HatORGANIZATION

0.99+

VMwareORGANIZATION

0.99+

SanjeevPERSON

0.99+

MicrosoftORGANIZATION

0.99+

HashiORGANIZATION

0.99+

GigaHomeORGANIZATION

0.99+

DatabricksORGANIZATION

0.99+

2023DATE

0.99+

Hawk TanPERSON

0.99+

GoogleORGANIZATION

0.99+

two companiesQUANTITY

0.99+

two thingsQUANTITY

0.99+

BroadcomORGANIZATION

0.99+

SwitzerlandLOCATION

0.99+

SnowflakeTITLE

0.99+

SnowflakeORGANIZATION

0.99+

HPEORGANIZATION

0.99+

twoQUANTITY

0.99+

238 servicesQUANTITY

0.99+

two peopleQUANTITY

0.99+

2016DATE

0.99+

GartnerORGANIZATION

0.99+

tens of millionsQUANTITY

0.99+

three yearsQUANTITY

0.99+

DBT LabsORGANIZATION

0.99+

fourth cloudQUANTITY

0.99+

Evan Touger, Prowess | Prowess Benchmark Testing Results for AMD EPYC Genoa on Dell Servers


 

(upbeat music) >> Welcome to theCUBE's continuing coverage of AMD's fourth generation EPYC launch. I've got a special guest with me today from Prowess Consulting. His name is Evan Touger, he's a senior technical writer with Prowess. Evan, welcome. >> Hi, great to be here. Thanks. >> So tell us a little bit about Prowess, what does Prowess do? >> Yeah, we're a consulting firm. We've been around for quite a few years, based in Bellevue, Washington. And we do quite a few projects with folks from Dell to a lot of other companies, and dive in. We have engineers, writers, production folks, so pretty much end-to-end work, doing research testing and writing, and diving into different technical topics. >> So you- in this case what we're going to be talking about is some validation studies that you've done, looking at Dell PowerEdge servers that happened to be integrating in fourth-gen EPYC processors from AMD. What were the specific workloads that you were focused on in this study? >> Yeah, this particular one was honing in on virtualization, right? You know, obviously it's pretty much ubiquitous in the industry, everybody works with virtualization in one way or another. So just getting optimal performance for virtualization was critical, or is critical for most businesses. So we just wanted to look a little deeper into, you know, how do companies evaluate that? What are they going to use to make the determination for virtualization performance as it relates to their workloads? So that led us to this study, where we looked at some benchmarks, and then went a little deeper under the hood to see what led to the results that we saw from those benchmarks. >> So when you say virtualization, does that include virtual desktop infrastructure or are we just talking about virtual machines in general? >> No, it can include both. We looked at VMs, thinking in terms of what about database performance when you're working in VMs, all the way through to VDI and companies like healthcare organizations and so forth, where it's common to roll out lots of virtual desktops, and performance is critical there as well. >> Okay, you alluded to, sort of, looking under the covers to see, you know, where these performance results were coming from. I assume what you're referencing is the idea that it's not just all about the CPU when you talk about a system. Am I correct in that assumption and- >> Yeah, absolutely. >> What can you tell us? >> Well, you know, for companies evaluating, there's quite a bit to consider, obviously. So they're looking at not just raw performance but power performance. So that was part of it, and then what makes up that- those factors, right? So certainly CPU is critical to that, but then other things come into play, like the RAID controllers. So we looked a little bit there. And then networking, of course can be critical for configurations that are relying on good performance on their networks, both in terms of bandwidth and just reducing latency overall. So interconnects as well would be a big part of that. So with, with PCIe gen 5 or 5.0 pick your moniker. You know in this- in the infrastructure game, we're often playing a game of whack-a-mole, looking for the bottlenecks, you know, chasing the bottlenecks. PCIe 5 opens up a lot of bandwidth for memory and things like RAID controllers and NICs. I mean, is the bottleneck now just our imagination, Evan, have we reached a point where there are no bottlenecks? What did you see when you ran these tests? What, you know, what were you able to stress to a point where it was saturated, if anything? >> Yeah. Well, first of all, we didn't- these are particular tests were ones that we looked at industry benchmarks, and we were examining in particular to see where world records were set. And so we uncovered a few specific servers, PowerEdge servers that were pretty key there, or had a lot of- were leading in the category in a lot of areas. So that's what led us to then, okay, well why is that? What's in these servers, and what's responsible for that? So in a lot of cases they, we saw these results even with, you know, gen 4, PCIe gen 4. So there were situations where clearly there was benefit from faster interconnects and, and especially NVMe for RAID, you know, for supporting NVMe and SSDs. But all of that just leads you to the understanding that it means it can only get better, right? So going from gen 4 to- if you're seeing great results on gen 4, then gen 5 is probably going to be, you know, blow that away. >> And in this case, >> It'll be even better. >> In this case, gen 5 you're referencing PCIe >> PCIe right. Yeah, that's right. >> (indistinct) >> And then the same thing with EPYC actually holds true, some of the records, we saw records set for both 3rd and 4th gen, so- with EPYC, so the same thing there. Anywhere there's a record set on the 3rd gen, you know, makes us really- we're really looking forward to going back and seeing over the next few months, which of those records fall and are broken by newer generation versions of these servers, once they actually wrap to the newer generation processors. You know, based on, on what we're seeing for the- for what those processors can do, not only in. >> (indistinct) Go ahead. >> Sorry, just want to say, not only in terms of raw performance, but as I mentioned before, the power performance, 'cause they're very efficient, and that's a really critical consideration, right? I don't think you can overstate that for companies who are looking at, you know, have to consider expenditures and power and cooling and meeting sustainability goals and so forth. So that was really an important category in terms of what we looked at, was that power performance, not just raw performance. >> Yeah, I want to get back to that, that's a really good point. We should probably give credit where credit is due. Which Dell PowerEdge servers are we talking about that were tested and what did those interconnect components look like from a (indistinct) perspective? >> Yeah, so we focused primarily on a couple benchmarks that seemed most important for real world performance results for virtualization. TPCx-V and VMmark 3.x. the TPCx-V, that's where we saw PowerEdge R7525, R7515. They both had top scores in different categories there. That benchmark is great for looking at database workloads in particular, right? Running in virtualization settings. And then the VMmark 3.x was critical. We saw good, good results there for the 7525 and the R 7515 as well as the R 6525, in that one and that included, sorry, just checking notes to see what- >> Yeah, no, no, no, no, (indistinct) >> Included results for power performance, as I mentioned earlier, that's where we could see that. So we kind of, we saw this in a range of servers that included both 3rd gen AMD EPYC and newer 4th gen as well as I mentioned. The RAID controllers were critical in the TPCx-V. I don't think that came into play in the VM mark test, but they were definitely part of the TPCx-V benchmarks. So that's where the RAID controllers would make a difference, right? And in those tests, I think they're using PERC 11. So, you know, the newer PERC 12 controllers there, again we'd expect >> (indistinct) >> To see continued, you know, gains in newer benchmarks. That's what we'll be looking for over the next several months. >> Yeah. So I think if I've got my Dell nomenclature down, performance, no no, PowerEdge RAID Controller, is that right? >> Exactly, yeah, there you go. Right? >> With Broadcom, you know, powered by Broadcom. >> That's right. There you go. Yeah. Isn't the Dell naming scheme there PERC? >> Yeah, exactly, exactly. Back to your comment about power. So you've had a chance to take a pretty deep look at the latest stuff coming out. You're confident that- 'cause some of these servers are going to be more expensive than previous generation. Now a server is not a server is not a server, but some are awakening to the idea that there might be some sticker shock. You're confident that the bang for your buck, the bang for your kilowatt hour is actually going to be beneficial. We're actually making things better, faster, stronger, cheaper, more energy efficient. We're continuing on that curve? >> That's what I would expect to see, right. I mean, of course can't speak to to pricing without knowing, you know, where the dollars are going to land on the servers. But I would expect to see that because you're getting gains in a couple of ways. I mean, one, if the performance increases to the point where you can run more VMs, right? Get more performance out of your VMs and run more total VMs or more BDIs, then there's obviously a good, you know, payback on your investment there. And then as we were discussing earlier, just the power performance ratio, right? So if you're bringing down your power and cooling costs, if these machines are just more efficient overall, then you should see some gains there as well. So, you know, I think the key is looking at what's the total cost of ownership over, you know, a standard like a three-year period or something and what you're going to get out of it for your number of sessions, the performance for the sessions, and the overall efficiency of the machines. >> So just just to be clear with these Dell PowerEdge servers, you were able to validate world record performance. But this isn't, if you, if you look at CPU architecture, PCIe bus architecture, memory, you know, the class of memory, the class of RAID controller, the class of NIC. Those were not all state of the art in terms of at least what has been recently announced. Correct? >> Right. >> Because (indistinct) the PCI 4.0, So to your point- world records with that, you've got next-gen RAID controllers coming out, and NICs coming out. If the motherboard was PCIe 5, with commensurate memory, all of those things are getting better. >> Exactly, right. I mean you're, you're really you're just eliminating bandwidth constraints latency constraints, you know, all of that should be improved. NVMe, you know, just collectively all these things just open the doors, you know, letting more bandwidth through reducing all the latency. Those are, those are all pieces of the puzzle, right? That come together and it's all about finding the weakest link and eliminating it. And I think we're reaching the point where we're removing the biggest constraints from the systems. >> Okay. So I guess is it fair to summarize to say that with this infrastructure that you tested, you were able to set world records. This, during this year, I mean, over the next several months, things are just going to get faster and faster and faster and faster. >> That's what I would anticipate, exactly, right. If they're setting world records with these machines before some of the components are, you know, the absolute latest, it seems to me we're going to just see a continuing trend there, and more and more records should fall. So I'm really looking forward to seeing how that goes, 'cause it's already good and I think the return on investment is pretty good there. So I think it's only going to get better as these roll out. >> So let me ask you a question that's a little bit off topic. >> Okay. >> Kind of, you know, we see these gains, you know, we're all familiar with Moore's Law, we're familiar with, you know, the advancements in memory and bus architecture and everything else. We just covered SuperCompute 2022 in Dallas a couple of weeks ago. And it was fascinating talking to people about advances in AI that will be possible with new architectures. You know, most of these supercomputers that are running right now are n minus 1 or n minus 2 infrastructure, you know, they're, they're, they're PCI 3, right. And maybe two generations of processors old, because you don't just throw out a 100,000 CPU super computing environment every 18 months. It doesn't work that way. >> Exactly. >> Do you have an opinion on this question of the qualitative versus quantitative increase in computing moving forward? And, I mean, do you think that this new stuff that you're starting to do tests on is going to power a fundamental shift in computing? Or is it just going to be more consolidation, better power consumption? Do you think there's an inflection point coming? What do you think? >> That's a great question. That's a hard one to answer. I mean, it's probably a little bit of both, 'cause certainly there will be better consolidation, right? But I think that, you know, the systems, it works both ways. It just allows you to do more with less, right? And you can go either direction, you can do what you're doing now on fewer machines, you know, and get better value for it, or reduce your footprint. Or you can go the other way and say, wow, this lets us add more machines into the mix and take our our level of performance from here to here, right? So it just depends on what your focus is. Certainly with, with areas like, you know, HPC and AI and ML, having the ability to expand what you already are capable of by adding more machines that can do more is going to be your main concern. But if you're more like a small to medium sized business and the opportunity to do what you were doing on, on a much smaller footprint and for lower costs, that's really your goal, right? So I think you can use this in either direction and it should, should pay back in a lot of dividends. >> Yeah. Thanks for your thoughts. It's an interesting subject moving forward. You know, sometimes it's easy to get lost in the minutiae of the bits and bites and bobs of all the components we're studying, but they're powering something that that's going to effect effectively all of humanity as we move forward. So what else do we need to consider when it comes to what you've just validated in the virtualization testing? Anything else, anything we left out? >> I think we hit all the key points, or most of them it's, you know, really, it's just keeping in mind that it's all about the full system, the components not- you know, the processor is a obviously a key, but just removing blockages, right? Freeing up, getting rid of latency, improving bandwidth, all these things come to play. And then the power performance, as I said, I know I keep coming back to that but you know, we just, and a lot of what we work on, we just see that businesses, that's a really big concern for businesses and finding efficiency, right? And especially in an age of constrained budgets, that's a big deal. So, it's really important to have that power performance ratio. And that's one of the key things we saw that stood out to us in, in some of these benchmarks, so. >> Well, it's a big deal for me. >> It's all good. >> Yeah, I live in California and I know exactly how much I pay for a kilowatt hour of electricity. >> I bet, yeah. >> My friends in other places don't even know. So I totally understand the power constraint question. >> Yeah, it's not going to get better, so, anything you can do there, right? >> Yeah. Well Evan, this has been great. Thanks for sharing the results that Prowess has come up with, third party validation that, you know, even without the latest and greatest components in all categories, Dell PowerEdge servers are able to set world records. And I anticipate that those world records will be broken in 2023 and I expect that Prowess will be part of that process, So Thanks for that. For the rest of us- >> (indistinct) >> Here at theCUBE, I want to thank you for joining us. Stay tuned for continuing coverage of AMD's fourth generation EPYC launch, for myself and for Evan Touger. Thanks so much for joining us. (upbeat music)

Published Date : Dec 8 2022

SUMMARY :

Welcome to theCUBE's Hi, great to be here. to a lot of other companies, and dive in. that you were focused on in this study? you know, how do companies evaluate that? all the way through to VDI looking under the covers to see, you know, you know, chasing the bottlenecks. But all of that just leads you Yeah, that's right. you know, makes us really- (indistinct) are looking at, you know, and what did those interconnect and the R 7515 as well as So, you know, the newer To see continued, you know, is that right? Exactly, yeah, there you go. With Broadcom, you There you go. the bang for your buck, to pricing without knowing, you know, PCIe bus architecture, memory, you know, So to your point- world records with that, just open the doors, you know, with this infrastructure that you tested, components are, you know, So let me ask you a question that's we're familiar with, you know, and the opportunity to do in the minutiae of the or most of them it's, you know, really, it's a big deal for me. for a kilowatt hour of electricity. So I totally understand the third party validation that, you know, I want to thank you for joining us.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
EvanPERSON

0.99+

Evan TougerPERSON

0.99+

CaliforniaLOCATION

0.99+

DallasLOCATION

0.99+

DellORGANIZATION

0.99+

Prowess ConsultingORGANIZATION

0.99+

2023DATE

0.99+

three-yearQUANTITY

0.99+

AMDORGANIZATION

0.99+

R 6525COMMERCIAL_ITEM

0.99+

BroadcomORGANIZATION

0.99+

3rdQUANTITY

0.99+

R 7515COMMERCIAL_ITEM

0.99+

R7515COMMERCIAL_ITEM

0.99+

bothQUANTITY

0.99+

4th genQUANTITY

0.99+

3rd genQUANTITY

0.98+

both waysQUANTITY

0.98+

7525COMMERCIAL_ITEM

0.98+

ProwessORGANIZATION

0.98+

Bellevue, WashingtonLOCATION

0.98+

100,000 CPUQUANTITY

0.98+

PowerEdgeCOMMERCIAL_ITEM

0.97+

two generationsQUANTITY

0.97+

oneQUANTITY

0.96+

PCIe 5OTHER

0.96+

todayDATE

0.95+

theCUBEORGANIZATION

0.94+

this yearDATE

0.93+

PCI 4.0OTHER

0.92+

TPCx-VCOMMERCIAL_ITEM

0.92+

fourth-genQUANTITY

0.92+

gen 5QUANTITY

0.9+

MooreORGANIZATION

0.89+

fourth generationQUANTITY

0.88+

gen 4QUANTITY

0.87+

PCI 3OTHER

0.87+

couple of weeks agoDATE

0.85+

SuperCompute 2022TITLE

0.8+

PCIe gen 5OTHER

0.79+

VMmark 3.xCOMMERCIAL_ITEM

0.75+

minusQUANTITY

0.74+

one wayQUANTITY

0.74+

18 monthsQUANTITY

0.7+

PERC 12COMMERCIAL_ITEM

0.67+

5.0OTHER

0.67+

EPYCCOMMERCIAL_ITEM

0.65+

monthsDATE

0.64+

5QUANTITY

0.63+

PERC 11COMMERCIAL_ITEM

0.6+

next few monthsDATE

0.6+

firstQUANTITY

0.59+

VMmark 3.x.COMMERCIAL_ITEM

0.55+

EPYC GenoaCOMMERCIAL_ITEM

0.53+

genOTHER

0.52+

R7525COMMERCIAL_ITEM

0.52+

1QUANTITY

0.5+

2QUANTITY

0.47+

PowerEdgeORGANIZATION

0.47+