Krista Satterthwaite | International Women's Day
(upbeat music) >> Hello, welcome to the Cube's coverage of International Women's Day 2023. I'm John Furrier, host of the CUBE series of profiles around leaders in the tech industry sharing their stories, advice, best practices, what they're doing in their jobs, their vision of the future, and more importantly, passing it on and encouraging more and more networking and telling the stories that matter. Our next guest is a great executive leader talking about how to lead in challenging times. Krista Satterthwaite, who is Senior Vice President and GM of Mainstream Compute. Krista, great to see you. You're a Cube alumni. We've had you on before talking about compute power. And by the way, congratulations on your BPTN, Black Professional Tech Network, 2023 Black Tech Exec of the Year Award. >> Thank you very much. Appreciate it. And thanks for having me. >> I knew I liked you the first time we were doing interviews together. You were so smart and so on top of it. Thanks for coming on. >> No problem. >> All kidding aside, let's get into it. You know, one of the things that's coming out on these interviews is leadership is being showcased and there's a network effect happening in the industry and you're starting to see people look and hear stories that they may or may not have heard before or news stories are coming out. So, one of the things that's interesting is that also in the backdrop of post pandemic, there's been a turn in the industry a little bit, there's a little bit of headwind in certain areas, some tailwinds in cloud and other areas. Compute, your area, is doing very well. It could be challenging. And as a leader, has the conversation changed? And where are you at right now in the network of folks you're working with? What's the mood? >> Yeah, so actually, things are much better. Obviously we had a chip shortage last year. Things are much, much better. But I learned a lot when it came to going through challenging times and leadership. And I think when we talk to customers, a lot of 'em are in challenging situations. Sometimes it's budget, sometimes it's attracting and retaining talent and sometimes it's just demands, because it's really exciting that technology is behind everything. But that means the demands on IT are bigger than ever before. So what I find when it comes to challenging times is that there are really three qualities that are game changers when it comes to leading in challenging times. And the first one is positivity. People have to feel like there's a light at the end of the tunnel to make sure that their attitudes stay up, that they stay working really, really hard and they look to the leader for that. The second one is communication. And I read somewhere that communication is leadership. And we had a great example from our CEO Antonio Neri when the pandemic hit and everything shut down. He had an all employee meeting every week for a month and we have tens of thousands of employees. And then even after that month, we had 'em very regularly. But he wanted to make sure that everybody heard from him, his thoughts, had all the updates, knew how their peers were doing, how we were helping customers. And I really learned a lot from that in terms of communicating and communicating more during tough times. And then I would say the third one is making sure that they are informed and they feel empowered. So I would say a leader who is able to do that really, really stands out in a challenging time. >> So how do you get yourself together? 
Obviously, the chip shortage, everyone knows it in the industry, and for the folks not in the tech industry, it was a potential economic disaster, because you don't get the chips you need. You guys make servers and technology, chips power everything. If you miss a shipment, it could cause a lot of backlash. So Cisco had an earnings impact. It has an impact on the business. When do you have that code red moment where it's like, okay, we have to kind of put the pause on and go into emergency mode? And how do you handle that? >> Well, you know, it is funny, 'cause when we have challenges, I've come to learn that people can look at challenges and hard work as a burden or a mission and they behave totally differently. If they see it as a burden, then they're doing the bare minimum and they're pointing fingers and they're complaining and they're probably not getting a whole lot done. If they see it as a mission, then all of a sudden they're going above and beyond. They're working really hard, they're really partnering. And if it affects customers, for HPE, obviously, HPE is a very customer centric company, so everyone pays attention and tries to pitch in. But when it comes to a mission, I started thinking, what are the real ingredients for a mission? And I think the first is that it's important. The second is that people feel like they can make an impact. And then I think the third one is that the goal is clear, even if the path isn't, 'cause you may have to pivot a lot if it's a challenge. And so when it came to the chip shortage, it was a mission. We wanted to make sure that we could ship to customers as quickly as possible. And it was a mission. Everybody pulled together. I learned how much our team could pull off and pull together through that challenge. >> And the consequences can be quantified in economics. So it's like the burn the boats example, you got to burn the boats, you're stuck. You got to figure out a solution. How does that change the demands on people? Because this is, okay, there's a mission, and it's not normal. What are some of those new demands that arise during those times and how do you manage that? How do you be a leader? >> Yeah, so it's funny, I was reading this statement from James White who used to be the CEO of Jamba Juice. And he was talking about how he got that job. He said, "I think it was one thing I said that really convinced them that I was the right person." And what he said was something like, "I will get more out of people than nine out of 10 leaders on the planet." He said, "Because I will look at their strengths and their capabilities and I will play to their passions." And getting the most out of people in difficult times, it is all about how much you can get out of people for their own sake and for the company's sake. >> That's great feedback. And to people watching who are early in their careers, leading is getting the best out of your team, attitude. Some of the things you mentioned. What advice would you give folks that are starting to get into the workforce, that are starting to get into that leadership track or might have a trajectory or even might have an innate ability that they know they have and they want to pursue that dream? >> Yeah so. >> What advice would you give them? >> Yeah, what I would say, I say this all the time, that for the first half of my career I was very job conscious, but I wasn't very career conscious. 
So I'd get in a role and I'd stay in that role for long periods of time and I'd do a good job, but I wasn't really very career conscious. And what I would say is, everybody says how important risk taking is. Well, risk taking can be a little bit of a scary word, right? Or term. And the way I see it is give it a shot and see what happens. You're interested in something, give it a shot and see what happens. It's kind of a less intimidating way of looking at risk, because even though I was job conscious, and not career conscious, one thing I did when people asked me to take something on, hey Krista, would you like to take on more responsibility here? The answer was always yes, yes, yes, yes. So I said yes because I said, hey, I'll give it a shot and see what happens. And that helped me tremendously because I felt like I am giving it a try. And the more you do that, the better it is. >> It's great. >> And actually the less scary it is, because you do that a few times and it goes well. It's like a muscle that builds. >> It's funny, a woman executive was on the program. I said, the word balance comes up a lot. And she stopped and said, "Let's just talk about balance for a second." And then she went contrarian and said, "It's about not being unbalanced. It's about being, taking a chance and being a little bit off balance to put yourself outside your comfort zone to try new things." And then she also followed up and said, "If you do that alone, you increase your risk. But if you do it with people, a team that you trust and you're authentic and you're vulnerable and you're communicating, that is the chemistry." And that was a really good point. What's your reaction? 'Cause you were talking about authentic conversations, good communications with Antonio. How does someone get, feel, find that team and do you agree with it? And what was your, how would you react to that? >> Yes, I agree with that. And when it comes to being authentic, that's the magic, and when someone isn't, if someone's not really being themselves, it's really funny because you can feel it, you can sense it. There's kind of a wall between you and them. And over time people won't be able to put their finger on it, but they'll feel a distance from you. But when you're authentic and you share who you are, what you find is you find things in common with other people. 'Cause you're sharing more of who you are and it's like, oh, I do that too. Oh, I'm interested in that too. And that builds the bonds between people, and the authenticity. And that's what people crave. They want people to be authentic and people can tell when you're authentic and when you're not. >> Is managing and leading through a crisis a born talent or can you learn it? >> Oh, definitely learned. I think that we're born knowing nothing and I once read people are nurtured into greatness and I think that's true. So yeah, definitely learned. >> What are some examples that can come out of a tough time, as folks may look at a crisis and shy away from it? How do they lean into it? What advice would you give folks? How do you handle it? I mean, everyone's got a different personality. Okay, they get to a position but stepping through that door. >> Yeah, well, I do this presentation called, "10 things I Wish I Knew Earlier in my Career." And one of those things is about the growth mindset. 
There's a book called "Mindset" by Carol Dweck and the growth mindset is all about learning and not always having to know everything, but really the winning is in the learning. And so if you have a growth mindset it makes you feel better about everything because you can't lose. You're winning because you're learning. So when I learned that, I started looking at things much differently. And when it comes to going through tough times, what I find is you're exercising muscles that you didn't even know you had, which makes you stronger when the crisis is over, obviously. And I also feel like you become a lot more creative when you're in challenging times. You're forced to do things that you hadn't had to do before. And it also bonds the team. It's almost like going through bootcamp together. When you go through a challenge together it bonds you for life. >> I mean, you could have bonding, could be trauma bonding or success bonding. People love to be on the success side because that's positive and that's really the key mindset. You're always winning if you have that attitude. And learning is also positive. So it's not, it's never a failure unless you make it. >> That's right, exactly. As long as you learn from it. And that's the name of the game. So, learning is the goal. >> So I have to ask you, on your job now, you have a really big responsibility, HPE compute, a big division. What's the current mindset that you have right now in your career, where you're at? What are some of the things on your mind that you think about? We had other senior leaders say, hey, you know I got the software as my brain and the hardware's my body. I like to keep software and hardware working together. What is the current state of your career and how are you looking at it, what's next and what's going on in your mind right now? >> Yeah, so for me, I really want to make sure that for my team we're nurturing the next generation of leadership and that we're helping with career development and career growth. And people feel like they can grow their careers here. Luckily at HPE, we have a lot of people stay at HPE a long time, and even people who leave HPE a lot of times they come back because the culture's fantastic. So I just want to make sure I'm contributing to that culture and I'm bringing up the next generation of leaders. >> What's next for you? What are you looking at from a career personal standpoint? >> You know, it's funny, I, I love what I'm doing right now. I'm actually on a joint venture board with H3C, which is an HPE joint venture company. And so I'm really enjoying that and exploring more board service opportunities. >> You have a focus on a good growth mindset, on managing through tough times. How do you stay focused on that North Star? How do you keep the reinforcement of the mission? How do you nurture the team to greatness? >> Yeah, so I think it's a lot of clarity, providing a lot of clarity about what's important right now. And it goes back to some of the communication that I mentioned earlier, making sure that everybody knows where the North Star is, so everybody's focused on the same thing, because I always felt like throughout my career I was set up for success if I had the right information, the right guidance and the right goals. And I try to make sure that I do that with my team. 
>> What are some of the things that you could share as we wrap up here for the folks watching, as the networks increase, as the stories start to unfold more and more on digital like we're doing here, what do you hope people walk away with? What's working, what needs work, and what are some things that people aren't talking about that should be discussed publicly? >> Do you mean from a career standpoint or? >> For career? For growing into tech and into leadership positions. >> Okay. >> Big migration. Tech is now a wide field. I mean, when I grew up, broke in in the eighties, it was computer science, software engineering, and three degrees in engineering, right? >> I see a huge swath of AI coming. So many technical careers. There's a lot more women. >> Yeah. And that's what's so exciting about being in a technical career, technical company, is that everything's always changing. There's always opportunity to learn something new. And frankly, you know, every company is in the business of technology right now, because they want to get closer to their customers. Typically, they're using technology to do that. Everyone's digitally transforming. And so what I would say is that there's so much opportunity, keep your mind open, explore what interests you and keep learning because it's changing all the time. >> You know I was talking with Sue, formerly of HP, she's on a lot of boards. The balance at the board level still needs a lot of work and the leadership ranks are getting better, but the board, the seats at the table, needs work. Where do you see that transition for you in the future? Is that something on your mind? Maybe a board seat? You mentioned you're on a board with HPE, but maybe sitting on some other boards? Any, any? >> Yes, actually, we actually have a program here at HPE called the Board Ready Now program that I'm a part of. And so HPE is very supportive of me exploring an independent board seat. And so they have some education and programming around that. And I know Sue well, she's awesome. And so yes, I'm looking into those opportunities right now. >> She advises do one, no more than two, on top of the day job. >> Yeah, I would only be doing one, with the current job that I have. >> Well, Krista, it was great to chat with you about these topics and leadership and challenging times. Great masterclass, great advice. As SVP and GM of mainstream compute for HPE, what's going on in your job these days? What's the most exciting thing happening? Share some of your work situations. >> Sure, so the most exciting thing happening right now is HPE Gen 11, which we just announced and started shipping, brings tremendous performance benefit, has an intuitive operating experience, a trusted security by design, and it's optimized to run workloads so much faster. So if anybody is interested, they should go check it out on hpe.com. >> And of course the CUBE will be at HPE Discover. We'll see you there. Any final wisdom you'd like to share as we wrap up the last minute here? >> Yeah, so I think the last thing I'll say is that when it comes to setting your sights, I think expecting good things to happen usually happens when you believe you deserve it. So what happens is you believe you deserve it, then you expect it and you get it. And so sometimes that's about making sure you raise your thermostat to expect more. And I always talk about you don't have to raise it all up at once. 
You could do that incrementally and other people can set your thermostat too when they say, hey, you should be, you should get a level this high or that high, but raise your thermostat because what you expect is what you get. >> Krista, thank you so much for contributing to this program. We're going to do it quarterly. We're going to be getting more stories out there, so we'll have you back and if you know anyone with good stories, send them our way. And congratulations on your BPTN Tech Executive of the Year award for 2023. Congratulations, great prize there and great recognition for your hard work. >> Thank you so much, John, I appreciate it. >> Okay, this is the Cube's coverage of International Women's Day. I'm John Furrier, stories from the front lines, management ranks, developers, all there, global coverage of international events with theCUBE. Thanks for watching. (soft music)
SiliconANGLE News | Red Hat Collaborates with Nvidia, Samsung and Arm on Efficient, Open Networks
(upbeat music) >> Hello, everyone; I'm John Furrier with SiliconANGLE NEWS and host of theCUBE, and welcome to our SiliconANGLE NEWS MWC NEWS UPDATE in Barcelona, where MWC is the premier event for the cloud and telecommunication industry, and in the news here is Red Hat, Red Hat announcing a collaboration with NVIDIA, Samsung and Arm on efficient, open networks. Red Hat announced updates across various fields including advanced 5G telecommunications cloud, industrial edge, artificial intelligence, and radio access network, RAN, efficiency. Red Hat's enterprise Kubernetes platform, OpenShift, has added support for NVIDIA's converged accelerators and Aerial SDK, facilitating RAN deployments on industry-standard servers across hybrid and multicloud platforms. This composable infrastructure enables telecom firms to support heavier compute demands for edge computing, AI, private 5G, and more, and also helps network operators adopt open architectures, allowing them to choose non-proprietary components from multiple suppliers. In addition to the NVIDIA collaboration, Red Hat is working with Samsung to offer a new vRAN solution for service providers to better manage their open RAN networks. They're also working with UK chip designer Arm to create new, energy-efficient networking solutions. Red Hat's open source, Kubernetes-based Efficient Power Level Exporter project, or Kepler, has been donated to the Cloud Native Computing Foundation, allowing enterprises to better understand their cloud native workloads and power consumption. Kepler can also help in the development of sustainable software by creating less power-hungry applications. Again, Red Hat continuing to push open source and open RAN, and contributing an open source project to the CNCF, continuing to create innovation for developers, and, of course, Red Hat knows a lot about operating systems, and the telco could be the next frontier. That's SiliconANGLE NEWS. I'm John Furrier; thanks for watching. (monotone music)
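As a hedged illustration of the kind of visibility Kepler is meant to give, the short sketch below queries per-namespace energy use from a Prometheus server that is scraping a Kepler exporter. The Prometheus address, the `container_namespace` label, and the `kepler_container_joules_total` metric name are assumptions about a typical install rather than a statement of the project's exact interface; check the Kepler documentation for the metric names your version actually exposes.

```python
# Hypothetical sketch: rank Kubernetes namespaces by estimated power draw using
# Kepler metrics scraped into Prometheus. Metric and label names are assumptions.
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # assumed Prometheus address
# rate() over a joules counter yields joules per second, i.e. watts.
QUERY = 'sum by (container_namespace) (rate(kepler_container_joules_total[5m]))'

def energy_by_namespace():
    """Return an estimated watts figure per namespace over the last 5 minutes."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return {
        r["metric"].get("container_namespace", "unknown"): float(r["value"][1])
        for r in result
    }

if __name__ == "__main__":
    for namespace, watts in sorted(energy_by_namespace().items(), key=lambda kv: -kv[1]):
        print(f"{namespace:30s} ~{watts:8.1f} W")
```

A report like this is the raw material for the "less power-hungry applications" point above: once teams can see which workloads draw the most power, they can schedule, scale, or rewrite them accordingly.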
Day 2 MWC Analyst Hot Takes | MWC Barcelona 2023
(soft music) >> Announcer: TheCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Welcome back to Spain, everybody. We're here at the Fira in MWC23. It's just an amazing day. This place is packed. They said 80,000 people. I think it might even be a few more walk-ins. I'm Dave Vellante, Lisa Martin is here, David Nicholson. But right now we have the Analyst Hot Takes with three friends of theCUBE. Chris Lewis is back again with me in the co-host seat. Zeus Kerravala, analyst extraordinaire. Great to see you, Z. And Sarbjeet SJ Johal. Good to see you again, theCUBE contributor. And that's my new name for him. He says that is his nickname. Guys, thanks for coming back on. We got the all male panel, sorry, but it is what it is. So Z, is this the first time you've been on it at MWC? Takeaways from the show, hot takes. What are you seeing? Same wine, new bottle? >> In a lot of ways, yeah. I mean, I was talking to somebody about this earlier, that if you had come from like MWC five years ago to this year, a lot of the themes are the same. Telco transformation, cloud. I mean, 5G is a little new. Sustainability is certainly a newer theme here. But I think it highlights just the difficulty I think the telcos have in making this transformation. And I think, in some ways, I've been unfair to them to some degree, 'cause I've picked on them in the past for not moving fast enough. These are, you know, I think these kinds of big transformations almost take like a perfect storm of things that come together to happen, right? And so, in the past, we had technologies that might have lowered opex, but they're hard to deploy. They're vertically integrated. We didn't have the software stacks. But it appears today that between the cloudification of, you know, going to cloud native, the software stacks, the APIs, the ecosystems, I think we're actually in a position to see this industry finally move forward. >> Yeah, and Chris, I mean, you have served this industry for a long time. And you know, when you, when you do that, you get briefed as an analyst, you actually realize, wow, there's a lot of really smart people here, and they're actually, they have challenges, they're working through it. So Zeus was saying he's been tough on the industry. You know, what do you think about how the telcos have evolved in the last five years? >> I think they've changed enormously. I think the problem we have is we're always looking for the great change, the big step change, and there is no big step change in a way. What telcos deliver to us as individuals, businesses, society, the connectivity piece, that's changed. We get better and better and more reliable connectivity. We're shunting a load more capacity through. What I think has really changed is their attitude to their suppliers, their attitude to their partners, and their attitude to the ecosystem in which they play. Understanding that connectivity is not the end game. Connectivity is part of the emerging end game where it will include storage, compute, connect, and analytics and everything else. So I think the realization that they are not playing their own game anymore, it's a much more open game. And some things they will continue to do, some things they'll stop doing. We've seen them withdraw from moving into adjacent markets as much as we used to see. So a lot of them in the past went off to try and do movies, media, and a lot went way, way into business IT stuff. 
They've mainly pulled back from that, and they're focusing on, and let's face it, it's not just a 5G show. The fixed environment is unbelievably important. We saw that during the pandemic. Having that fixed broadband connection using wifi, combining with cellular. We love it. But the problem as an industry is that the users often don't even know the connectivity's there. They only know when it doesn't work, right? >> If it's not media and it's not business services, what is it? >> Well, in my view, it will be enabling third parties to deliver the services that will include media, that will include business services. So embedding the connectivity all the way into the application that gets delivered, or embedding it so the quality mechanisms deliver the gaming much more accurately or, I'm not a gamer, so I can't comment on that. But no, the video quality, if you want to have high quality video, will come through better. >> And those cohorts will pay for that value? >> Somebody will pay somewhere along the line. >> Seems fuzzy to me. >> Me too. >> I do think it's use case dependent. Like you look at all the work Verizon did at the Super Bowl this year, that's a perfect case where they could have upsold. >> Explain that. I'm not familiar with it. >> So Verizon provided all the 5G in the Super Bowl. They provided a lot of, they provided private connectivity for the coaches to talk to the sidelines. And that's a mission critical application, right? In the NFL, if one side can't talk, the other side gets shut down. You can't communicate with the quarterback or the coaches. There's a lot of risk in that. So, but you know, there's a case there, though, I think where they could have even made that fan facing. Right? And if you're paying 2000 bucks to go to a game, would you pay 50 bucks more to have a higher tier of bandwidth so you can post things on social? People that go there, they want people to know they were there. >> Every football game you go to, you can't use your cell. >> Analyst: Yeah, I know, right? >> All right, let's talk about developers because we saw the eight APIs come out. I think ISVs are going to be a big part of this. But it's like Dee Arthur said. Hey, eight's better than zero, I guess. Okay, so, but the innovation is going to come from ISVs and developers, but what are your hot takes from this show, and now day two? We're a day and a half in, almost two days in. >> Yeah, yeah. There's a thing that we have talked about, I've mentioned many times, which is skills gravity, right? Skills have gravity, and also, to outcompete, you have to also educate. That's another theme of my talks, or my research, that to put your technology out there to the practitioners, you have to educate them. And that's the only way to democratize your technology. What telcos have been doing is they have been stuck with proprietary software and proprietary hardware for too long, from the Nokias of the world and other vendors like that. So now with the open sourcing of some of the components and a few others, right? And there's open source base stations and antennas, you know? Antennas are becoming software now. So with the advent of these things, which is open source, it helps us democratize that to the other sort of outskirts of the practitioners, if you will. And that will bring in more applications first into the IOT space, and then maybe into the core, if you will. >> So what does a telco developer look like? 
I mean, all the blockchain developers and crypto developers are moving into generative AI, right? So maybe those worlds come together. >> You'd like to think though that the developers would understand everything's network centric today. So you'd like to think they'd understand how the network responds, you know, you'd take a simple app like Zoom or something. If it notices the bandwidth changes, it should knock down the resolution. If it goes up, then you can add different features and things and you can make apps a lot smarter that way. >> Well, G2 was saying today that they did a deal with Mercedes, you know this probably better than I do, where they're going to embed WebEx in the car. And if you're driving, it'll shut off the camera. >> Of course. >> I'm like, okay. >> I'll give you a better example though. >> But that's my point. Like, isn't there more that we can do? >> You noticed down on the SKT stand the little helicopter. That's a vertical lift helicopter. So it's an electric vertical lift helicopter. Just think of that for a second. And then think of the connectivity to control that, to securely control that. And then I was recently at an event with Zeus actually where we saw an air traffic control system where there were no people manning the tower. It was managed by someone remotely with all the cameras around them. So managing all of those different elements, we call it IOT, but actually it's way more than what we thought of as IOT. All those components connecting, communicating securely and safely. 'Cause I don't want that helicopter to come down on my head, do you? (men laugh) >> Especially if you're in there. (men laugh) >> Okay, so you mentioned sustainability. Everybody's talking about power. I don't know if you guys have a lot of experience around TCO, but I'm trying to get to, well, is this just because energy costs are so high, and then when the energy becomes cheap again, nobody's going to pay any attention to it? Or is this the real deal? >> So one of the issues around that, if we want to experience all that connectivity locally or that helicopter wants to have that connectivity, we have to ultimately build denser, more reliable networks. So there's a CapEx, we're going to put more base stations in place. We need more fiber in the ground to support them. Therefore, the energy consumption will go up. So we need to be more efficient in the use of energy. Simple as that. >> How much of the operating expense is energy? Like what percent of it? Is it 10%? Is it 20%? Is it, does anybody know? >> It depends who you ask and it depends on the- >> I can't get an answer to that. I mean, in the enterprise- >> Analyst: The data centers? >> Yeah, the data centers. >> We have the numbers. I think 10 to 15%. >> It's 10 to 12%, something like that. Is it much higher? >> I've got a feeling it's 30%. >> Okay, so if it's 30%, that's pretty good. >> I do think we have to get better at understanding how to measure too. You know, like I was talking with John Davidson at Cisco about this, that every rev of silicon they come out with uses more power, but it's a lot more dense. So at the surface, you go, well, that's using a lot more power. But you can consolidate 10 switches down to two switches. >> Well, Intel was on early and talking about how they can intelligently control the cores. >> But it's based off workload, right? That's the thing. So what are you running over it? You know, and so, I don't think our industry measures that very well. 
I think we look at things kind of box by box versus look at total consumption. >> Well, somebody else in theCUBE was saying they go full throttle. That the networks just say, full throttle everything. And that obviously has to change from the power consumption standpoint. >> Obviously sustainability and sensors from the IOT side, they go hand in hand. Just simple examples like, you know, lights in the restrooms, like in public areas. Somebody goes in there and only then it turns on. The same concept is being applied to servers and compute and storage and every aspect, and to networks as well. >> Cell tower. >> Yeah. >> Cut 'em off, right? >> Like the serverless telco? (crosstalk) >> Cell towers. >> Well, no, I'm saying, right, but like serverless, you're not paying for the compute when you're not using it, you know? >> It is serverless from the economics point of view. Yes, it's like that, you know? It goes to the lowest level, almost like sleep mode on our laptops, and comes back up when you need more power, more compute. >> I mean, some of that stuff's been in networking equipment for a long time, it just never really got turned on. >> I want to ask you about private networks. You wrote a piece, Athonet was acquired by HPE right after Dell announced a relationship with Athonet, which was kind of, that was kind of funny. And so a good move, good judo move by HP. I asked Dell about it, and they said, look, we're open. They said the right things. We'll see, but I think it's up to HP. >> Well, and the network inside Dell is. >> Yeah, okay, so. Okay, cool. So, but you said something in that article you wrote on Silicon Angle that a lot of people feel like P5G is going to basically replace wireless or cannibalize wireless. You said you didn't agree with that. Explain why? >> Analyst: Wifi. >> Wifi, sorry, I said wireless. >> No, that's, I mean that's ridiculous. Pat Gelsinger said that in one of his last VMworld keynotes, which I thought was completely irresponsible. >> That it was going to cannibalize? >> Cannibalize wifi globally is what he said, right? Now he had Verizon on stage with him, so. >> Analyst: Wifi's too inexpensive and flexible. >> Wifi's cheap- >> Analyst: It's going to embed really well. Embedded in that. >> It's reached near ubiquity. It's unlicensed. So a lot of businesses don't want to manage their own spectrum, right? And it's great for this, right? >> Analyst: It does the job. >> For casual connectivity. >> Not today. >> Well, it does for the most part. Right now- >> For the most part. But never at these events. >> If it's engineered correctly, it will. Right? Where you need private 5G is when reliability is an absolute must. So, Chris, you and I visited the Port of Rotterdam, right? So they're putting 5G, private 5G there, but there's metal containers everywhere, right? And that's going to disrupt it. And so there are certain use cases where it makes sense. >> I've been in your basement, and you got some pretty intense equipment in there. You have private 5G in there. >> But for carpeted offices, it does not make sense to bring private. The economics don't make any sense. And you know, it runs hot. >> So where's it going to be used? Give us some examples of where we should be looking. >> The early ones are obviously in mining and, as you say, in ports, in airports. It broadens out to cities because you've got so many moving parts in there, and always think about it, very expensive moving parts. The cranes in the port are normally expensive pieces of kit. 
You're moving that, all that logistics around. So managing that over a distance where the wifi won't work over the distance. And in mining, we're going to see enormous expensive trucks moving around trying to- >> I think a great new use case though, so the Cleveland Browns are actually the first NFL team to use it for facial recognition to enter the stadium. So instead of having to even pull your phone out, it says, hey Dave Vellante. You've got four tickets, can we check you all in? And you just walk through. You could apply that to airports. You could put that in a hotel. You could walk up and check in. >> Analyst: Retail. >> Yeah, retail. And so I think video, realtime video analytics, I think it's a perfect use case for that. >> But you don't need 5G to do that. You could do that through another mechanism, couldn't you? >> You could do wired, depending on how mobile you want to do it. Like in a stadium, you're pulling those things in and out all the time. You're moving 'em around and things, so. >> Yeah, but you're coming in at a static point. >> I'll take the contrary view here. >> See, we can't even agree on that. (men laugh) >> Yeah, I love it. Let's go. >> I believe the reliability of connection is very important, right? And the moving parts. What are the moving parts in wifi? We have the NIC card, you know, the wifi card in these suckers, right? In a machine, you know? They're bigger in size, and the radios for 5G are smaller in size. So miniaturization is an important part of the whole sort of progress to the future, right? >> I think 5G costs as well. Yes, cost as well. But cost, we know that it goes down with time, right? We're already talking about 6G, and the 5G stuff will be good. >> Actually, sorry, so one of the big boom areas at the moment is 4G LTE because the component price has come down so much, so it is affordable, you can afford to bring it all together. People don't, because we're still on 5G; until 5G standalone is everywhere, you're not going to get a consistent service. So those components are unbelievably important. The skillsets of the people doing integration to bring them all together, unbelievably important. And the business case within the business. So I was talking to one of the heads of one of the big retail outlets in the UK, and I said, when are you going to do 5G in the stores? He said, well, why would I tear out all the wifi? I've got perfectly functioning wifi. >> Yeah, that's true. It's already there. But I think the technology which disappears in front of you, that's the best technology. Like you don't worry about it. You don't think it's there. Wifi, we think about that like it's just there. >> And I do think wifi-5G switching's got to get easier too. Like for most users, you don't know which is better. You don't even know how to test it. And to your point, it does need to be invisible where the user doesn't need to think about it, right? >> Invisible. See, we came back to invisible. We talked about that yesterday. Telecom should be invisible. >> And it should be, you know? You don't want to be thinking about telecom, but at the same time, telecoms want to be more visible. They want to be visible like Netflix, don't they? I still don't see the path. It's fuzzy to me, the path of how they're not going to repeat what happened with the over the top providers if they're invisible. 
>> Well, if you think about what telcos deliver to consumers, to businesses, then extending that connectivity into your home to help you support, secure and extend your connection into Zeus's basement, whatever it is. Obviously that's- >> His awesome setup down there. >> And then in the business environment, there's a big change going on from the old MPLS networks, the old rigid structures of networks, to SD-WAN, where the control point is moved outside, which can be under control of the telco, could be under the control of a third party integrator. So there's a lot changing. I think we obsess about the relative role of the telco. The demand is phenomenal for connectivity. So address that, fulfill that. And if they do that, then they'll start to build trust in other areas. >> But don't you think they're going to address that and fulfill that? I mean, they're good at it. That's their wheelhouse. >> And it's a 1.6 trillion market, right? So it's not to be sniffed at. That's fixed and mobile together, obviously. But no, it's a big market. And do we keep changing? As long as the service is good, we don't move away from it. >> So back to the APIs, the eight APIs, right? >> I mean- >> Eight APIs is almost a joke, actually. I think they released it too early. The release on the main stage, you know? Like, what? What is this, right? But of course they will grow into hundreds and thousands of APIs. But they have to spend a lot of time and effort in that sort of context. >> I'd actually like to see the GSMA work with like AWS and Microsoft and VMware and software companies and create some standardization across their APIs. >> Yeah. >> I spoke to them yes- >> We're trying to reinvent them. >> Is that not what they're doing? >> No, they said we are not in the business of defining standards. And they used a different term, not standard. I mean, seriously. I was like, are you kidding me? >> Let's face it, there aren't just eight APIs out there. There's so many of them. The TM Forum's been defining them with its Open Digital Architecture. You know, the telcos themselves are defining them. The standards we talked about earlier with Danielle, too. There's a lot of APIs out there, but the consistency of APIs, so we can bring them together, to bring all the different services together that will support us in our different lives, is really important. I think telcos will do it, it's in their interest to do it. >> All right, guys, we got to wrap. Let's go around the horn here, starting with Chris, Zeus, and then Sarbjeet, just bring us home. Number one hot take from Mobile World Congress MWC23 day two. >> My favorite hot take is the willingness of all the participants, who have been traditional telco players who looked inwardly at the industry, looking outside for help, for partnerships, and to build an ecosystem, a more open ecosystem, which will address our requirements. >> Zeus? >> Yeah, I was going to talk about ecosystem. I think for the first time ever, when I've met with the telcos here, I think they're actually, I don't think they know how to get there yet, but they're at least aware of the fact that they need to understand how to build a big ecosystem around them. So if you think back like 50 years ago, IBM and compute was the center of everything in your company, and then the ecosystem surrounded it. I think today with digital transformation being network centric, the telcos actually have the opportunity to be that center of excellence, and then build an ecosystem around them. 
I think the SIs are actually in a really interesting place to help them do that, 'cause they understand everything top to bottom. And, you know, pre-pandemic, I'm not sure the telcos really understood it. I think they understand it today, I'm just not sure they know how to get there. >> Sarbjeet? >> I've seen a lot of RAN demos and testing companies and I'm amazed by it. Everything is turning into software, almost everything. The parts which are not turned into software, I mean, they will soon. But everybody says that we need the hardware to run something, right? But that hardware, in my view, is getting miniaturized, and it's becoming smaller and smaller. The antennas are becoming smaller. The equipment is getting smaller. That means the cost on the physicality of the assets is going down. But the cost on the software side will go up for telcos in future. And telco is a messy business. Not everybody can do it. So only a few will survive, I believe. So that's what- >> Software defined telco. So I'm on a mission. I'm looking for the monetization path. And what I haven't seen yet is, you know, you want to follow the money, follow the data, I say. So next two days, I'm going to be looking for that data play, that potential, the way in which this industry is going to break down the data silos. I think there's a potential goldmine there, but I haven't figured it out yet. >> That's a subject for another day. >> Guys, thanks so much for coming on. You guys are extraordinary partners and friends of theCUBE, and great analysts, and congratulations and thank you for all you do. Really appreciate it. >> Analyst: Thank you. >> Thanks a lot. >> All right, this is a wrap on day two of MWC23. Go to siliconangle.com for all the news, where Rob Hope and team are just covering all the news. John Furrier is in the Palo Alto studio. We're rocking all that news, taking all that news and putting it on video. Go to theCUBE.net, you'll see everything on demand. Thanks for watching. This is a wrap on day two. We'll see you tomorrow. (soft music)
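As an aside on the eight GSMA Open Gateway APIs the analysts debate above, here is a purely illustrative sketch of what a developer-facing, quality-on-demand style call might look like from an application. The base URL, endpoint path, payload field names, QoS profile label, and token handling are invented for the sketch and are not the actual GSMA or CAMARA definitions; the point is simply that a developer should be able to write something like this once and have it behave consistently across operators, which is what the standardization complaint is really about.

```python
# Illustrative only: a quality-on-demand style request to a telco network API.
# Every endpoint, field name, and profile label below is an assumption for the
# sketch, not the real GSMA Open Gateway / CAMARA specification.
import requests

API_BASE = "https://api.example-telco.com/qod/v1"  # hypothetical operator endpoint
ACCESS_TOKEN = "REPLACE_ME"                        # obtained out of band (OAuth, etc.)

def request_premium_session(device_ip: str, app_server_ip: str, minutes: int = 30) -> str:
    """Ask the network for a temporary high-QoS session and return its session id."""
    payload = {
        "device": {"ipv4Address": device_ip},             # assumed field names
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": "QOS_PREMIUM",                      # hypothetical profile label
        "duration": minutes * 60,                         # seconds
    }
    resp = requests.post(
        f"{API_BASE}/sessions",
        json=payload,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["sessionId"]

# Usage sketch: the Super Bowl upsell idea discussed earlier would call this when a
# fan buys a bandwidth upgrade, then tear the session down when the window expires.
```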
Driving Business Results with Cloud Transformation | Aditi Banerjee and Todd Edmunds
>> Welcome back to the program. My name is Dave Vellante and in this session, we're going to explore one of the more interesting topics of the day. IoT for Smart Factories. And with me are Todd Edmunds, the Global CTO of Smart Manufacturing Edge and Digital Twins at Dell Technologies. That is such a cool title. (chuckles) I want to be you. And Dr. Aditi Banerjee, who's the Vice President, General Manager for Aerospace Defense and Manufacturing at DXC Technology. Another really cool title. Folks, welcome to the program. Thanks for coming on. >> Thanks Dave. >> Thank you. Great to be here. >> Nice to be here. >> Todd, let's start with you. We hear a lot about Industry 4.0, Smart Factories, IIoT. Can you briefly explain, what is Industry 4.0 all about and why is it important for the manufacturing industry? >> Yeah. Sure, Dave. You know, it's been around for quite a while and it's gone by multiple different names, as you said. Industry 4.0, Smart Manufacturing, Industrial IoT, Smart Factory. But it all really means the same thing, it's really applying technology to get more out of the factories and the facilities that you have to do your manufacturing. So, being much more efficient, implementing really good sustainability initiatives. And so, we really look at that by saying, okay, what are we going to do with technology to really accelerate what we've been doing for a long, long time? So it's really not- it's not new. It's been around for a long time. What's new is that manufacturers are looking at this, not as a one-of, two-of individual Use Case point of view but instead they're saying, we really need to look at this holistically, thinking about a strategic investment in how we do this. Not to just enable one or two Use Cases, but enable many many Use Cases across the spectrum. I mean, there's tons of them out there. There's Predictive maintenance and there's OEE, Overall Equipment Effectiveness, and there's Computer Vision and all of these things are starting to percolate down to the factory floor, but it needs to be done in a little bit different way and really to really get those outcomes that they're looking for in Smart Factory or Industry 4.0 or however you want to call it. And truly transform, not just throw an Industry 4.0 Use Case out there but to do the digital transformation that's really necessary and to be able to stay relevant for the future. I heard it once said that you have three options. Either you digitally transform and stay relevant for the future or you don't and fade into history. Like the 52% of the companies that used to be on the Fortune 500 since 2000 that are now gone. Right? And so, really that's a key thing and we're seeing that really, really being adopted by manufacturers all across the globe. >> Yeah. So, Aditi, it's like digital transformation is almost synonymous with business transformation. So, is there anything you'd add to what Todd just said? >> Absolutely. Though, I would really add that what really drives Industry 4.0 is the business transformation. What we are able to deliver in terms of improving the manufacturing KPIs and the KPIs for customer satisfaction, right? For example, improving the downtime or decreasing the maintenance cycle of the equipment or improving the quality of products, right? So, I think these are a lot of the business outcomes that our customers are looking at while using Industry 4.0 and the technologies of Industry 4.0 to deliver these outcomes. 
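Since Todd name-checks OEE, Overall Equipment Effectiveness, above and Aditi frames the payoff in KPIs like downtime and quality, a quick worked sketch of the standard OEE calculation (Availability x Performance x Quality) may help make those outcomes concrete. The shift numbers below are made up purely for illustration.

```python
# Standard OEE = Availability x Performance x Quality, with made-up shift data.

def oee(planned_minutes, downtime_minutes, ideal_cycle_time_s, total_count, good_count):
    """Return (availability, performance, quality, oee) as fractions of 1.0."""
    run_minutes = planned_minutes - downtime_minutes
    availability = run_minutes / planned_minutes
    # Performance: actual throughput versus the ideal rate while the line was running.
    performance = (ideal_cycle_time_s * total_count) / (run_minutes * 60)
    quality = good_count / total_count
    return availability, performance, quality, availability * performance * quality

# Hypothetical 8-hour shift: 42 min of stops, 1.5 s ideal cycle, 16,000 parts, 15,600 good.
a, p, q, overall = oee(480, 42, 1.5, 16_000, 15_600)
print(f"Availability {a:.1%}, Performance {p:.1%}, Quality {q:.1%} -> OEE {overall:.1%}")
```

Raising any one of those three factors, less unplanned downtime, faster effective cycle times, or fewer defects, is exactly the kind of business outcome Aditi describes customers chasing with Industry 4.0 investments.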
>> So, Aditi, I wonder if I could stay with you and maybe this is a bit esoteric but when I first started researching IoT and Industrial IoT 4.0, et cetera, I felt, well, there could be some disruptions in the ecosystem. I kind of came to the conclusion that large manufacturing firms, Aerospace Defense companies, the firms building out critical infrastructure, actually had kind of an incumbent advantage and a great opportunity. Of course, then I saw on TV that somebody's now building homes with 3D printers. It like blows your mind. So that's pretty disruptive. But, so- But they got to continue, the incumbents have to continue to invest in the future. They're well-capitalized. They're pretty good businesses, very good businesses, but there's a lot of complexities involved in kind of connecting the old house to the new addition that's being built, if you will, or this transformation that we're talking about. So, my question is, how are your customers preparing for this new era? What are the key challenges that they're facing and the blockers, if you will? >> Yeah, I mean the customers are looking at Industry 4.0 for Greenfield Factories, right? That is where the investments are going directly into building the factories with the new technologies, with the new connectivities, right? For the machines, for example, Industrial IoT, having the right type of data platforms to drive computational analytics and outcomes, as well as looking at Edge versus Cloud type of technologies, right? Those are all getting built in the Greenfield Factories. However, for the Installed-Base Factories, right? That is where our customers are looking at how do I modernize these factories? How do I connect the existing machines? And that is where some of the challenges come in on the legacy system connectivity that they need to think about. Also, they need to start thinking about cybersecurity and operation technology security, because now you are connecting the factories to each other. So, cybersecurity becomes top of mind, right? So, there is definitely investment that is involved. Clients are creating roadmaps for digitizing and modernizing these factories and investments in a very strategic way. So, perhaps they start with the innovation program and then they look at the business case and they scale it up, right? >> Todd, I'm glad you brought up security, because if you think about the operations technology folks, historically they air-gapped the systems, that's how they created security. That's changed. The business came in and said, 'Hey, we got to connect. We got to make it intelligent.' So, that's got to be a big challenge as well. >> It absolutely is, Dave. And, you know, you can no longer just segment that, because really to get all of those efficiencies that we talk about, that IoT and Industrial IoT and Industry 4.0 promise, you have to get data out of the factory but then you got to put data back in the factory. So, no longer is just firewalling everything really the answer. So, you really have to have a comprehensive approach to security, but you also have to have a comprehensive approach to the Cloud and what that means. And does it mean a continuum of Cloud all the way down to the Edge, right down to the factory? It absolutely does. Because no one approach has the answer to everything. The more you go to the Cloud, the broader the attack surface is. 
So, what we're seeing is a lot of our customers approaching this from kind of that hybrid right ones run anywhere on the factory floor down to the Edge. And one of the things we're seeing too, is to help distinguish between what is the Edge and bridge that gap between, like, Dave, you talked about IT and OT and also help what Aditi talked about is the Greenfield Plants versus the Brownfield Plants that they call it, that are the legacy ones and modernizing those. It's great to kind of start to delineate what does that mean? Where's the Edge? Where's the IT and the OT? We see that from a couple of different ways. We start to think about really two Edges in a manufacturing floor. We talk about an Industrial Edge that sits... or some people call it a Far Edge or a Thin Edge, sits way down on that plant, consists of industrial hardened devices that do that connectivity. The hard stuff about how do I connect to this obsolete legacy protocol and what do I do with it? And create that next generation of data that has context. And then we see another Edge evolving above that, which is much more of a data and analytics and enterprise grade application layer that sits down in the factory itself; that helps figure out where we're going to run this? Does it connect to the Cloud? Do we run Applications On-Prem? Because a lot of times that On-Prem Application it needs to be done. 'Cause that's the only way that it's going to work because of security requirements, because of latency requirements performance and a lot of times, cost. It's really helpful to build that Multiple-Edge strategy because then you kind of, you consolidate all of those resources, applications, infrastructure, hardware into a centralized location. Makes it much, much easier to really deploy and manage that security. But it also makes it easier to deploy new Applications, new Use Cases and become the foundation for DXC'S expertise and Applications that they deliver to our customers as well. >> Todd, how complex are these projects? I mean, I feel like it's kind of the the digital equivalent of building the Hoover Dam. I mean, its.. so yeah. How long does a typical project take? I know it varies, but what are the critical success factors in terms of delivering business value quickly? >> Yeah, that's a great question in that we're- you know, like I said at the beginning, this is not new. Smart Factory and Industry 4.0 is not new. It's been, it's people have been trying to implement the Holy Grail of Smart Factory for a long time. And what we're seeing is a switch, a little bit of a switch or quite a bit of a switch to where the enterprises and the IT folks are having a much bigger say and they have a lot to offer to be able to help that complexity. So, instead of deploying a computer here and a Gateway there and a Server there, I mean, you go walk into any manufacturing plant and you can see Servers sitting underneath someone's desk or a PC in a closet somewhere running a critical production application. So, we're seeing the enterprise have a much bigger say at the table, much louder voice at the table to say, we've been doing this enterprise all the time. We know how to really consolidate, bring Hyper-Converged Applications, Hyper-Converged Infrastructure to really accelerate these kind of applications. Really accelerate the outcomes that are needed to really drive that Smart Factory and start to bring that same capabilities down into the Mac on the factory floor. 
That way, if you do it once and make it easier to implement, you can repeat that. You can scale that. You can manage it much more easily, and you can then bring that all together because you have the security in one centralized location. So, we're seeing with manufacturers that the first Use Case may be fairly difficult to implement, and we've got to go in and see exactly what their problems are. But when the infrastructure is done the correct way, when you think about how you're going to run it and how you're going to optimize the engineering, well, let's take what you've done in that one factory and then say, let's make that work across all the factories, including the factory that we're in, then across the globe. That makes it much, much easier. You really do the hard work once and then repeat. Almost like cookie cutter. >> Got it. Thank you. >> Aditi, what about the skillsets available to apply to these projects? You've got to have knowledge of digital, AI, Data, Integration. Is there a talent shortage to get all this stuff done? >> Yeah, I mean, definitely. A lot of different types of skillsets are needed beyond the traditional manufacturing skillset, right? Of course, the basic knowledge of manufacturing is important. But the digital skillsets like IoT, having a skillset in different Protocols for connecting the machines, right? That experience that comes with it. Data and Analytics, Security, Augmented and Virtual Reality Programming. Again, looking at Robotics and the Digital Twin. So, it's a lot more connectivity, software, and data-driven skillsets that are needed to bring a Smart Factory to life at scale. And, you know, lots of firms are recruiting these types of resources with these skill sets to accelerate their Smart Factory implementations, as well as consulting firms like DXC Technology and others. We recruit, we train our talent to provide these services.
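Todd's point above about doing the hard engineering once and then repeating it, almost like a cookie cutter, usually takes the shape of a shared site template with small per-factory overrides. The sketch below is one way to express that idea in plain Python; the site names, application list, and settings are invented for illustration and are not tied to any Dell or DXC offering.

```python
import copy
import json

# The "engineered once" baseline for an edge stack at any plant (illustrative values only).
BASE_SITE_TEMPLATE = {
    "edge_cluster": {"nodes": 3, "cpu_per_node": 32, "storage_tb": 20},
    "security": {"tls": True, "cert_rotation_days": 90, "ot_network_segmented": True},
    "apps": ["historian", "oee-dashboard", "predictive-maintenance"],
}

def render_site(name: str, overrides: dict) -> dict:
    """Apply per-factory overrides on top of the shared baseline."""
    site = copy.deepcopy(BASE_SITE_TEMPLATE)
    for section, values in overrides.items():
        if isinstance(site.get(section), dict) and isinstance(values, dict):
            site[section].update(values)   # merge into an existing section
        else:
            site[section] = values         # replace the section outright
    site["name"] = name
    return site

if __name__ == "__main__":
    fleet = {
        "plant-austin": {"edge_cluster": {"nodes": 5}},
        "plant-gdansk": {"apps": ["historian", "oee-dashboard", "computer-vision"]},
    }
    for name, overrides in fleet.items():
        print(json.dumps(render_site(name, overrides), indent=2))
```

Keeping the baseline in one place is what makes the rollout repeatable; each new factory becomes an overrides entry rather than a new engineering project.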
The power of our scalable, enterprise-grade, structured, industry-standard infrastructure, as well as our expertise in delivering packaged solutions, really accelerates with DXC's expertise and reputation as a global trusted advisor. We're able to really scale and repeat those solutions, which DXC is really, really good at, and Dell's infrastructure and our 30,000 people across the globe are really, really good at that scalable infrastructure, to be able to repeat. And then it really lessens the risk that our customers have and really accelerates those solutions. So again, it's not just one individual solution, it's all of the solutions that not just drive Use Cases but drive outcomes with those solutions. >> Yeah, you're right. The partnership goes way back. I mean, I first encountered it back in, I think it was 2010, May of 2010. We had you guys both on, I think you were talking about converged infrastructure, and I had a customer on, and it was actually a manufacturing customer. It was quite interesting. And back then it was, how do we kind of replicate what's coming in the Cloud? And you guys have obviously taken it into the digital world. Really want to thank you for your time today. Great conversation, and love to have you back. >> Thank you so much. It was a pleasure speaking with you. I agree. >> All right, keep it right there for more discussions that educate and inspire on "The Cube."
Humphreys & Ferron-Jones | Trusted security by design, Compute Engineered for your Hybrid World
(upbeat music) >> Welcome back, everyone, to our Cube special programming on "Securing Compute, Engineered for the Hybrid World." We got Cole Humphreys who's with HPE, global server security product manager, and Mike Ferron-Jones with Intel. He's the product manager for data security technology. Gentlemen, thank you for coming on this special presentation. >> All right, thanks for having us. >> So, securing compute, I mean, compute, everyone wants more compute. You can't have enough compute as far as we're concerned. You know, more bits are flying around the internet. Hardware's mattering more than ever. Performance markets hot right now for next-gen solutions. When you're talking about security, it's at the center of every single conversation. And Gen11 for the HPE has been big-time focus here. So let's get into the story. What's the market for Gen11, Cole, on the security piece? What's going on? How do you see this impacting the marketplace? >> Hey, you know, thanks. I think this is, again, just a moment in time where we're all working towards solving a problem that doesn't stop. You know, because we are looking at data protection. You know, in compute, you're looking out there, there's international impacts, there's federal impacts, there's state-level impacts, and even regulation to protect the data. So, you know, how do we do this stuff in an environment that keeps changing? >> And on the Intel side, you guys are a Tier 1 combination partner, Better Together. HPE has a deep bench on security, Intel, We know what your history is. You guys have a real root of trust with your code, down to the silicon level, continuing to be, and you're on the 4th Gen Xeon here. Mike, take us through the Intel's relationship with HPE. Super important. You guys have been working together for many, many years. Data security, chips, HPE, Gen11. Take us through the relationship. What's the update? >> Yeah, thanks and I mean, HPE and Intel have been partners in delivering technology and delivering security for decades. And when a customer invests in an HPE server, like at one of the new Gen11s, they're getting the benefit of the combined investment that these two great companies are putting into product security. On the Intel side, for example, we invest heavily in the way that we develop our products for security from the ground up, and also continue to support them once they're in the market. You know, launching a product isn't the end of our security investment. You know, our Intel Red Teams continue to hammer on Intel products looking for any kind of security vulnerability for a platform that's in the field. As well as we invest heavily in the external research community through our bug bounty programs to harness the entire creativity of the security community to find those vulnerabilities, because that allows us to patch them and make sure our customers are staying safe throughout that platform's deployed lifecycle. You know, in 2021, between Intel's internal red teams and our investments in external research, we found 93% of our own vulnerabilities. Only a small percentage were found by unaffiliated external entities. >> Cole, HPE has a great track record and long history serving customers around security, actually, with the solutions you guys had. With Gen11, it's more important than ever. Can you share your thoughts on the talent gap out there? People want to move faster, breaches are happening at a higher velocity. They need more protection now than ever before. 
Can you share your thoughts on why these breaches are happening, and what you guys are doing, and how you guys see this happening from a customer standpoint? What you guys fill in with Gen11 with solution? >> You bet, you know, because when you hear about the relentless pursuit of innovation from our partners, and we in our engineering organizations in India, and Taiwan, and the Americas all collaborating together years in advance, are about delivering solutions that help protect our customer's environments. But what you hear Mike talking about is it's also about keeping 'em safe. Because you look to the market, right? What you see in, at least from our data from 2021, we have that breaches are still happening, and lot of it has to do with the fact that there is just a lack of adequate security staff with the necessary skills to protect the customer's application and ultimately the workloads. And then that's how these breaches are happening. Because ultimately you need to see some sort of control and visibility of what's going on out there. And what we were talking about earlier is you see time. Time to seeing some incident happen, the blast radius can be tremendous in today's technical, advanced world. And so you have to identify it and then correct it quickly, and that's why this continued innovation and partnership is so important, to help work together to keep up. >> You guys have had a great track record with Intel-based platforms with HPE. Gen11's a really big part of the story. Where do you see that impacting customers? Can you explain the benefits of what's going on with Gen11? What's the key story? What's the most important thing we should be paying attention to here? >> I think there's probably three areas as we look into this generation. And again, this is a point in time, we will continue to evolve. But at this particular point it's about, you know, a fundamental approach to our security enablement, right? Partnering as a Tier 1 OEM with one of the best in the industry, right? We can deliver systems that help protect some of the most critical infrastructure on earth, right? I know of some things that are required to have a non-disclosure because it is some of the most important jobs that you would see out there. And working together with Intel to protect those specific compute workloads, that's a serious deal that protects not only state, and local, and federal interests, but, really, a global one. >> This is a really- >> And then there's another one- Oh sorry. >> No, go ahead. Finish your thought. >> And then there's another one that I would call our uncompromising focus. We work in the industry, we lead and partner with those in the, I would say, in the good side. And we want to focus on enablement through a specific capability set, let's call it our global operations, and that ability to protect our supply chain and deliver infrastructure that can be trusted and into an operating environment. You put all those together and you see very significant and meaningful solutions together. >> The operating benefits are significant. I just want to go back to something you just said before about the joint NDAs and kind of the relationship you kind of unpacked, that to me, you know, I heard you guys say from sand to server, I love that phrase, because, you know, silicone into the server. But this is a combination you guys have with HPE and Intel supply-chain security. I mean, it's not just like you're getting chips and sticking them into a machine. 
This is, like, there's an in-depth relationship on the supply chain that has a very intricate piece to it. Can you guys just double down on that and share that, how that works and why it's important? >> Sure, so why don't I go ahead and start on that one. So, you know, as you mentioned the, you know, the supply chain that ultimately results in an end user pulling, you know, a new Gen11 HPE server out of the box, you know, started, you know, way, way back in it. And we've been, you know, Intel, from our part are, you know, invest heavily in making sure that all of our entire supply chain to deliver all of the Intel components that are inside that HPE platform have been protected and monitored ever since, you know, their inception at one of any of our 14,000, you know, Intel vendors that we monitor as part of our supply-chain assurance program. I mean we, you know, Intel, you know, invests heavily in compliance with guidelines from places like NIST and ISO, as well as, you know, doing best practices under things like the Transported Asset Protection Alliance, TAPA. You know, we have been intensely invested in making sure that when a customer gets an Intel processor, or any other Intel silicone product, that it has not been tampered with or altered during its trip through the supply chain. HPE then is able to pick up that, those components that we deliver, and add onto that their own supply-chain assurance when it comes down to delivering, you know, the final product to the customer. >> Cole, do you want to- >> That's exactly right. Yeah, I feel like that integration point is a really good segue into why we're talking today, right? Because that then comes into a global operations network that is pulling together these servers and able to deploy 'em all over the world. And as part of the Gen11 launch, we have security services that allow 'em to be hardened from our factories to that next stage into that trusted partner ecosystem for system integration, or directly to customers, right? So that ability to have that chain of trust. And it's not only about attestation and knowing what, you know, came from whom, because, obviously, you want to trust and make sure you're get getting the parts from Intel to build your technical solutions. But it's also about some of the provisioning we're doing in our global operations where we're putting cryptographic identities and manifests of the server and its components and moving it through that supply chain. So you talked about this common challenge we have of assuring no tampering of that device through the supply chain, and that's why this partnering is so important. We deliver secure solutions, we move them, you're able to see and control that information to verify they've not been tampered with, and you move on to your next stage of this very complicated and necessary chain of trust to build, you know, what some people are calling zero-trust type ecosystems. >> Yeah, it's interesting. You know, a lot goes on under the covers. That's good though, right? You want to have greater security and platform integrity, if you can abstract the way the complexity, that's key. Now one of the things I like about this conversation is that you mentioned this idea of a hardware-root-of-trust set of technologies. Can you guys just quickly touch on that, because that's one of the major benefits we see from this combination of the partnership, is that it's not just one, each party doing something, it's the combination. 
But this notion of hardware-root-of-trust technologies, what is that? >> Yeah, well let me, why don't I go ahead and start on that, and then, you know, Cole can take it from there. Because we provide some of the foundational technologies that underlie a root of trust. Now the idea behind a root of trust, of course, is that you want your platform to, you know, from the moment that first electron hits it from the power supply, that it has a chain of trust that all of the software, firmware, BIOS is loading, to bring that platform up into an operational state is trusted. If you have a breach in one of those lower-level code bases, like in the BIOS or in the system firmware, that can be a huge problem. It can undermine every other software-based security protection that you may have implemented up the stack. So, you know, Intel and HPE work together to coordinate our trusted boot and root-of-trust technologies to make sure that when a customer, you know, boots that platform up, it boots up into a known good state so that it is ready for the customer's workload. So on the Intel side, we've got technologies like our trusted execution technology, or Intel Boot Guard, that then feed into the HPE iLO system to help, you know, create that chain of trust that's rooted in silicon to be able to deliver that known good state to the customer so it's ready for workloads. >> All right, Cole, I got to ask you, with Gen11 HPE platforms that has 4th Gen Intel Xeon, what are the customers really getting? >> So, you know, what a great setup. I'm smiling because it's, like, it has a good answer, because one, this, you know, to be clear, this isn't the first time we've worked on this root-of-trust problem. You know, we have a construct that we call the HPE Silicon Root of Trust. You know, there are, it's an industry standard construct, it's not a proprietary solution to HPE, but it does follow some differentiated steps that we like to say make a little difference in how it's best implemented. And where you see that is that tight, you know, Intel Trusted Execution exchange. The Intel Trusted Execution exchange is a very important step to assuring that route of trust in that HPE Silicon Root of Trust construct, right? So they're not different things, right? We just have an umbrella that we pull under our ProLiant, because there's ILO, our BIOS team, CPLDs, firmware, but I'll tell you this, Gen11, you know, while all that, keeping that moving forward would be good enough, we are not holding to that. We are moving forward. Our uncompromising focus, we want to drive more visibility into that Gen11 server, specifically into the PCIE lanes. And now you're going to be able to see, and measure, and make policies to have control and visibility of the PCI devices, like storage controllers, NICs, direct connect, NVME drives, et cetera. You know, if you follow the trends of where the industry would like to go, all the components in a server would be able to be seen and attested for full infrastructure integrity, right? So, but this is a meaningful step forward between not only the greatness we do together, but, I would say, a little uncompromising focus on this problem and doing a little bit more to make Gen11 Intel's server just a little better for the challenges of the future. >> Yeah, the Tier 1 partnership is really kind of highlighted there. Great, great point. I got to ask you, Mike, on the 4th Gen Xeon Scalable capabilities, what does it do for the customer with Gen11 now that they have these breaches? 
Does it eliminate stuff? What's in it for the customer? What are some of the new things coming out with the Xeon? You're at Gen4, Gen11 for HP, but you guys have new stuff. What does it do for the customer? Does it help eliminate breaches? Are there things that are inherent in the product that HP is jointly working with you on or you were contributing in to the relationship that we should know about? What's new? >> Yeah, well there's so much great new stuff in our new 4th Gen Xeon Scalable processor. This is the one that was codenamed Sapphire Rapids. I mean, you know, more cores, more performance, AI acceleration, crypto acceleration, it's all in there. But one of my favorite security features, and it is one that's called Intel Control-Flow Enforcement Technology, or Intel CET. And why I like CET is because I find the attack that it is designed to mitigate is just evil genius. This type of attack, which is called a return, a jump, or a call-oriented programming attack, is designed to not bring a whole bunch of new identifiable malware into the system, you know, which could be picked up by security software. What it is designed to do is to look for little bits of existing, little bits of existing code already on the server. So if you're running, say, a web server, it's looking for little bits of that web-server code that it can then execute in a particular order to achieve a malicious outcome, something like open a command prompt, or escalate its privileges. Now in order to get those little code bits to execute in an order, it has a control mechanism. And there are different, each of the different types of attacks uses a different control mechanism. But what CET does is it gets in there and it disrupts those control mechanisms, uses hardware to prevent those particular techniques from being able to dig in and take effect. So CET can, you know, disrupt it and make sure that software behaves safely and as the programmer intended, rather than picking off these little arbitrary bits in one of these return, or jump, or call-oriented programming attacks. Now it is a technology that is included in every single one of the new 4th Gen Xeon Scalable processors. And so it's going to be an inherent characteristic the customers can benefit from when they buy a new Gen11 HPE server. >> Cole, more goodness from Intel there impacting Gen11 on the HPE side. What's your reaction to that? >> I mean, I feel like this is exactly why you do business with the big Tier 1 partners, because you can put, you know, trust in from where it comes from, through the global operations, literally, having it hardened from the factory it's finished in, moving into your operating environment, and then now protecting against attacks in your web hosting services, right? I mean, this is great. I mean, you'll always have an attack on data, you know, as you're seeing in the data. But the more contained, the more information, and the more control and trust we can give to our customers, it's going to make their job a little easier in protecting whatever job they're trying to do. >> Yeah, and enterprise customers, as you know, they're always trying to keep up to date on the skills and battle the threats. Having that built in under the covers is a real good way to kind of help them free up their time, and also protect them is really killer. This is a big, big part of the Gen11 story here. Securing the data, securing compute, that's the topic here for this special cube conversation, engineering for a hybrid world. 
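Mike's description of return- and jump-oriented programming, where an attacker chains little bits of code that are already on the server, and of CET disrupting the control mechanism in hardware, can be made concrete with a toy model of the shadow-stack idea. To be clear, this is not how CET is programmed; CET is enabled by the processor, operating system, and compiler (for example through control-flow protection build options), and the Python below is only a simplified illustration of why a tampered return address gets caught.

```python
class ControlFlowViolation(Exception):
    """Raised when a return address no longer matches its protected shadow copy."""

class ShadowStackCPU:
    """Toy model: every CALL pushes the return address onto both the normal stack
    and a protected shadow stack; RET compares the two and traps on a mismatch."""

    def __init__(self):
        self.stack = []         # attacker-writable in a real memory-corruption scenario
        self.shadow_stack = []  # hardware-protected copy in the CET model

    def call(self, return_address: int) -> None:
        self.stack.append(return_address)
        self.shadow_stack.append(return_address)

    def ret(self) -> int:
        addr = self.stack.pop()
        expected = self.shadow_stack.pop()
        if addr != expected:
            raise ControlFlowViolation(f"return to {hex(addr)} != shadow copy {hex(expected)}")
        return addr

if __name__ == "__main__":
    cpu = ShadowStackCPU()
    cpu.call(0x401000)           # legitimate call
    cpu.stack[-1] = 0x402ABC     # simulate a ROP-style overwrite of the on-stack return address
    try:
        cpu.ret()
    except ControlFlowViolation as err:
        print("blocked:", err)
```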
Cole, I'll give you the final word. What should people pay attention to, Gen11 from HPE, bottom line, what's the story? >> You know, it's, you know, it's not the first time, it's not the last time, but it's our fundamental security approach to just helping customers through their digital transformation defend in an uncompromising focus to help protect our infrastructure in these technical solutions. >> Cole Humphreys is the global server security product manager at HPE. He's got his finger on the pulse and keeping everyone secure in the platform integrity there. Mike Ferron-Jones is the Intel product manager for data security technology. Gentlemen, thank you for this great conversation, getting into the weeds a little bit with Gen11, which is great. Love the hardware route-of-trust technologies, Better Together. Congratulations on Gen11 and your 4th Gen Xeon Scalable. Thanks for coming on. >> All right, thanks, John. >> Thank you very much, guys, appreciate it. Okay, you're watching "theCube's" special presentation, "Securing Compute, Engineered for the Hybrid World." I'm John Furrier, your host. Thanks for watching. (upbeat music)
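Cole's description earlier in the segment of cryptographic identities and manifests that travel with a server through the supply chain is worth seeing in concrete terms. The sketch below shows the receiving-side idea only: hash the components you actually observe in the chassis and compare them to a signed manifest. The manifest format and the shared HMAC key are stand-ins for illustration; real implementations such as platform certificates use X.509 and hardware-backed attestation rather than a shared secret.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"factory-demo-secret"  # placeholder; real manifests are signed with asymmetric keys

def component_digest(component: dict) -> str:
    """Stable digest over a component's identity fields."""
    canonical = json.dumps(component, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def sign_manifest(components: list) -> dict:
    digests = sorted(component_digest(c) for c in components)
    body = json.dumps(digests).encode()
    return {"digests": digests,
            "signature": hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()}

def verify_at_receiving_dock(observed: list, manifest: dict) -> bool:
    body = json.dumps(manifest["digests"]).encode()
    expected_sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(manifest["signature"], expected_sig):
        return False
    return sorted(component_digest(c) for c in observed) == manifest["digests"]

if __name__ == "__main__":
    shipped = [{"type": "cpu", "model": "xeon-4th-gen", "serial": "A123"},
               {"type": "nic", "model": "example-nic", "serial": "N456"}]
    manifest = sign_manifest(shipped)
    tampered = [dict(shipped[0], serial="A999"), shipped[1]]
    print("untouched server:", verify_at_receiving_dock(shipped, manifest))  # True
    print("swapped CPU:", verify_at_receiving_dock(tampered, manifest))      # False
```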
Meet the new HPE ProLiant Gen11 Servers
>> Hello, everyone. Welcome to theCUBE's coverage of Compute Engineered For Your Hybrid World, sponsored by HPE and Intel. I'm John Furrier, host of theCUBE. I'm pleased to be joined by Krista Satterthwaite, SVP and general manager for HPE Mainstream Compute, and Lisa Spelman, corporate vice president, and general manager of Intel Xeon Products, here to discuss the major announcement. Thanks for joining us today. Thanks for coming on theCUBE. >> Thanks for having us. >> Great to be here. >> Great to see you guys. And exciting announcement. Krista, Compute continues to evolve to meet the challenges of businesses. We're seeing more and more high performance, more Compute, I mean, it's getting more Compute every day. You guys officially announced this next generation of ProLiant Gen11s in November. Can you share and talk about what this means? >> Yeah, so first of all, thanks so much for having me. I'm really excited about this announcement. And yeah, in November we announced our HPE ProLiant NextGen, and it really was about one thing. It's about engineering Compute for customers' hybrid world. And we have three different design principles when we designed this generation. First is intuitive cloud operating experience, and that's with our HPE GreenLake for Compute Ops Management. And that's all about management that is simple, unified, and automated. So it's all about seeing everything from one council. So you have a customer that's using this, and they were so surprised at how much they could see, and they were excited because they had servers in multiple locations. This was a hotel, so they had servers everywhere, and they can now see all their different firmware levels. And with that type of visibility, they thought their planning was going to be much, much easier. And then when it comes to updates, they're much quicker and much easier, so it's an exciting thing, whether you have servers just in the data center, or you have them distributed, you could see and do more than you ever could before with HPE GreenLake for Compute Ops Management. So that's number one. Number two is trusted security by design. Now, when we launched our HPE ProLiant Gen10 servers years ago, we launched groundbreaking innovative security features, and we haven't stopped, we've continued to enhance that every since then. And this generation's no exception. So we have new innovations around security. Security is a huge focus area for us, and so we're excited about delivering those. And then lastly, performance for every workload. We have a huge increase in performance with HPE ProLiant Gen11, and we have customers that are clamoring for this additional performance right now. And what's great about this is that, it doesn't matter where the bottleneck is, whether it's CPU, memory or IO, we have advancements across the board that are going to make real differences in what customers are going to be able to get out of their workloads. And then we have customers that are trying to build headroom in. So even if they don't need a today, what they put in their environment today, they know needs to last and need to be built for the future. >> That's awesome. Thanks for the recap. And that's great news for folks looking to power those workloads, more and more optimizations needed. I got to ask though, how is what you guys are announcing today, meeting these customer needs for the future, and what are your customers looking for and what are HPE and Intel announcing today? 
>> Yeah, so customers are doing more than ever before with their servers. So they're really pushing things to the max. I'll give you an example. There's a retail customer that is waiting to get their hands on our ProLiant Gen11 servers, because they want to do video streaming in every one of their retail stores and what they're building, when they're building what they need, we started talking to 'em about what their needs were today, and they were like, "Forget about what my needs are today. We're buying for headroom. We don't want to touch these servers for a while." So they're maxing things out, because they know the needs are coming. And so what you'll see with this generation is that we've built all of that in so that customers can deploy with confidence and know they have the headroom for all the things they want to do. The applications that we see and what people are trying to do with their servers is light years different than the last big announcement we had, which was our ProLiant Gen10 servers. People are trying to do more than ever before and they're trying to do that at the Edge as well as as the data center. So I'll tell you a little bit about the servers we have. So in partnership with Intel, we're really excited to announce a new batch of servers. And these servers feature the 4th Gen Intel Xeon scalable processors, bringing a lot more performance and efficiency. And I'll talk about the servers, one, the first one is a HPE ProLiant DL320 Gen11. Now, I told you about that retail customer that's trying to do video streaming in their stores. This is the server they were looking at. This server is a new server, we didn't have a Gen10 or a Gen10+ version of the server. This is a new server and it's optimized for Edge use cases. It's a rack-based server and it's very, very flexible. So different types of storage, different types of GPU configurations, really designed to take care of many, many use cases at the Edge and doing more at the Edge than ever before. So I mentioned video streaming, but also VDI and analytics at the Edge. The next two servers are some of our most popular servers, our HPE ProLiant DL360 Gen11, and that's our density-optimized server for enterprise. And that is getting an upgrade across the board as well, big, big improvements in terms of performance, and expansion. And for those customers that need even more expansion when it comes to, let's say, storage or accelerators then the DL 380 Gen11 is a server that's new as well. And that's really for folks that need more expandability than the DL360, which is a one use server. And then lastly, our ML350, which is a tower server. These tower servers are typically used at remote sites, branch offices and this particular server holds a world record for energy efficiency for tower servers. So those are some of the servers we have today that we're announcing. I also want to talk a little bit about our Cray portfolio. So we're announcing two new servers with our HPE Cray portfolio. And what's great about this is that these servers make super computing more accessible to more enterprise customers. These servers are going to be smaller, they're going to come in at lower price points, and deliver tremendous energy efficiency. So these are the Cray XD servers, and there's more servers to come, but these are the ones that we're announcing with this first iteration. >> Great stuff. I can talk about servers all day long, I love server innovation. It's been following for many, many years, and you guys know. 
Lisa, we'll bring you in. Servers have been powered by Intel Xeon, we've been talking a lot about the scalable processors. This is your 4th Gen, they're in Gen11 and you're at 4th Gen. Krista mentioned this generation's about Security Edge, which is essentially becoming like a data center model now, the Edges are exploding. What are some of the design principles that went into the 4th Gen this time around the scalable processor? Can you share the Intel role here? >> Sure. I love what Krista said about headroom. If there's anything we've learned in these past few years, it's that you can plan for today, and you can even plan for tomorrow, but your tomorrow might look a lot different than what you thought it was going to. So to meet these business challenges, as we think about the underlying processor that powers all that amazing server lineup that Krista just went through, we are really looking at delivering that increased performance, the power efficient compute and then strong security. And of course, attention to the overall operating cost of the customer environment. Intel's focused on a very workload-first approach to solving our customers' real problems. So this is the applications that they're running every day to drive their digital transformation, and we really like to focus our innovation, and leadership for those highest value, and also the highest growth workloads. Some of those that we've uniquely focused on in 4th Gen Xeon, our artificial intelligence, high performance computing, network, storage, and as well as the deployments, like you were mentioning, ranging from the cloud all the way out to the Edge. And those are all satisfied by 4th Gen Xeon scalable. So our strategy for architecting is based off of all of that. And in addition to doing things like adding core count, improving the platform, updating the memory and the IO, all those standard things that you do, we've invested deeply in delivering the industry's CPU with the most built-in accelerators. And I'll just give an example, in artificial intelligence with built-in AMX acceleration, plus the framework optimizations, customers can see a 10X performance improvement gen over gen, that's on both training and inference. So it further cements Xeon as the world's foundation for inference, and it now delivers performance equivalent of a modern GPU, but all within your CPU. The flexibility that, that opens up for customers is tremendous and it's so many new ways to utilize their infrastructure. And like Krista said, I just want to say that, that best-in-class security, and security solutions are an absolute requirement. We believe that starts at the hardware level, and we continue to invest in our security features with that full ecosystem support so that our customers, like HPE, can deliver that full stacked solution to really deliver on that promise. >> I love that scalable processor messaging too around the silicon and all those advanced features, the accelerators. AI's certainly seeing a lot of that in demand now. Krista, similar question to you on your end. How do you guys look at these, your core design principles around the ProLiant Gen11, and how that helps solve the challenges for your customers that are living in this hybrid world today? >> Yeah, so we see how fast things are changing and we kept that in mind when we decided to design this generation. We talked all already about distributed environments. 
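Lisa's point about built-in AMX acceleration raises a practical question for operators: how do you confirm a given host actually exposes it before you schedule an AI workload there? The sketch below checks /proc/cpuinfo on Linux. The flag names used (amx_tile, amx_bf16, amx_int8) are the ones recent kernels are understood to report for 4th Gen Xeon, so treat them as an assumption to verify on your own distribution.

```python
from pathlib import Path

AMX_FLAGS = {"amx_tile", "amx_bf16", "amx_int8"}  # assumed flag names on recent Linux kernels

def cpu_flags(cpuinfo_path: str = "/proc/cpuinfo") -> set:
    """Return the flag set reported for the first CPU entry."""
    for line in Path(cpuinfo_path).read_text().splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def amx_support() -> dict:
    flags = cpu_flags()
    return {flag: flag in flags for flag in sorted(AMX_FLAGS)}

if __name__ == "__main__":
    support = amx_support()
    print(support)
    if all(support.values()):
        print("AMX reported: frameworks with AMX-optimized kernels can take advantage of it.")
    else:
        print("AMX not reported: fall back to the AVX-512/VNNI code paths.")
```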
We see the intensity of the requirements that are at the Edge, and that's part of what we're trying to address with the new platform that I mentioned. It's also part of what we're trying to address with our management, making sure that people can manage no matter where a server is and get a great experience. The other thing we're realizing when it comes to what's happening is customers are looking at how they operate. Many want to buy as a service and with HPE GreenLake, we see that becoming more and more popular. With HPE GreenLake, we can offer that to customers, which is really helpful, especially when they're trying to get new technology like this. Sometimes they don't have it in the budget. With something like HP GreenLake, there's no upfront costs so they can enjoy this technology without having to come up with a big capital outlay for it. So that's great. Another one is around, I liked what Lisa said about security starting at the hardware. And that's exactly, the foundation has to be secure, or you're starting at the wrong place. So that's also something that we feel like we've advanced this time around. This secure root of trust that we started in Gen10, we've extended that to additional partners, so we're excited about that as well. >> That's great, Krista. We're seeing and hearing a lot about customers challenges at the Edge. Lisa, I want to bring you back in on this one. What are the needs that you see at the Edge from an Intel perspective? How is Intel addressing the Edge? >> Yeah, thanks, John. You know, one of the best things about Xeon is that it can span workloads and environments all the way from the Edge back to the core data center all within the same software environment. Customers really love that portability. For the Edge, we have seen an explosion of use cases coming from all industries and I think Krista would say the same. Where we're focused on delivering is that performant-enough compute that can fit into a constrained environment, and those constraints can be physical space, they can be the thermal environment. The Network Edge has been a big focus for us. Not only adding features and integrating acceleration, but investing deeply in that software environment so that more and more critical applications can be ported to Xeon and HPE industry standard servers versus requiring expensive, proprietary systems that were quite frankly not designed for this explosion of use cases that we're seeing. Across a variety of Edge to cloud use cases, we have identified ways to provide step function improvements in both performance and that power efficiency. For example, in this generation, we're delivering an up to 2.9X average improvement in performance per watt versus not using accelerators, and up to 70 watt power savings per CPU opportunity with some unique power management features, and improve total cost of ownership, and just overall power- >> What's the closing thoughts? What should people take away from this announcement around scalable processors, 4th Gen Intel, and then Gen11 ProLiant? What's the walkaway? What's the main super thought here? >> So I can go first. I think the main thought is that, obviously, we have partnered with Intel for many, many years. We continue to partner this generation with years in the making. In fact, we've been working on this for years, so we're both very excited that it's finally here. 
But we're laser focused on making sure that customers get the most out of their workloads, the most out of their infrastructure, and that they can meet those challenges that people are throwing at 'em. I think IT is under more pressure than ever before and the demands are there. They're critical to the business success with digital transformation and our job is to make sure they have everything they need, and they could do and meet the business needs as they come at 'em. >> Lisa, your thoughts on this reflection point we're in right now? >> Well, I agree with everything that Krista said. It's just a really exciting time right now. There's a ton of challenges in front of us, but the opportunity to bring technology solutions to our customers' digital transformation is tremendous right now. I think I would also like our customers to take away that between the work that Intel and HPE have done together for generations, they have a community that they can trust. We are committed to delivering customer-led solutions that do solve these business transformation challenges that we know are in front of everyone, and we're pretty excited for this launch. >> Yeah, I'm super enthusiastic right now. I think you guys are on the right track. This title Compute Engineered for Hybrid World really kind of highlights the word, "Engineered." You're starting to see this distributed computing architecture take shape with the Edge. Cloud on-premise computing is everywhere. This is real relevant to your customers, and it's a great announcement. Thanks for taking the time and joining us today. >> Thank you. >> Yeah, thank you. >> This is the first episode of theCUBE's coverage of Compute Engineered For Your Hybrid World. Please continue to check out thecube.net, our site, for the future episodes where we'll discuss how to build high performance AI applications, transforming compute management experiences, and accelerating VDI at the Edge. Also, to learn more about the new HPE ProLiant servers with the 4th Gen Intel Xeon processors, you can go to hpe.com. And check out the URL below, click on it. I'm John Furrier at theCUBE. You're watching theCUBE, the leader in high tech, enterprise coverage. (bright music)
HPE Compute Engineered for your Hybrid World - Transform Your Compute Management Experience
>> Welcome everyone to "theCUBE's" coverage of "Compute engineered for your hybrid world," sponsored by HP and Intel. Today we're going to discuss how to transform your compute management experience with the new 4th Gen Intel Xeon scalable processors. Hello, I'm John Furrier, host of "theCUBE," and my guests today are Chinmay Ashok, director of cloud engineering at Intel, and Koichiro Nakajima, principal product manager, compute, at cloud services with HPE. Gentlemen, thanks for coming on this segment, "Transform your compute management experience." >> Thanks for having us. >> Great topic. A lot of people want that single pane of glass for system management, they want to manage everything. This is a really important topic as they get into distributed computing and cloud and hybrid. This is a major discussion point. What are some of the major trends you guys see in the system management space? >> Yeah, so system management is trying to help users manage their IT infrastructure effectively and efficiently. System management is evolving along with the IT infrastructure, which is trying to accommodate market trends. We have been observing continuous trends like digital transformation, edge computing, and exponential data growth that never stops. AI, machine learning, deep learning, cloud native applications, hybrid cloud, multi-cloud strategies. There's a lot going on. Also, the COVID-19 pandemic has changed the way we live and work. These are all things that have profound implications for the system designs and architectures that system management has to consider. Also, security has always been a very important topic, but it has become more important than ever before. Some of the research is saying that cybercrime will cost something like $10.5 trillion per year. We all do our part on the solution provider side and on the user side, but cybercrime is still growing about 15% year over year. So, with all of this in mind, system management really has to evolve in a way that helps users efficiently and effectively manage their more and more distributed IT infrastructure. >> Chinmay, what are your thoughts on the major trends in the system management space? >> Thanks, John. Yeah, to add to what Koichiro said, the view of the system that the service provider needs is, as he was saying, changing and evolving over the last few years, especially with the advent of the cloud and the different types of cloud usage models like platform as a service, on-premises, and of course infrastructure as a service. The traditional software as a service model implies that the service provider needs a different view of the system, and the context in which the CPU vendor or the platform vendor needs to provide that is changing. That includes in-band telemetry, being able to monitor what is going on in the system through traditional in-band methods, but also the advent of out-of-band methods to do this without end-user disruption, which is a key element of the enhancements that our customers are expecting from us as we deploy CPUs and platforms. >> That's great. You know what I love about this discussion is we have multiple generational enhancements, 4th Gen Xeon, Gen11 ProLiant, and iLO is going to come up with another generation increase on that one. We'll get into that in the next segment, but while we're here, what is iLO? Can you guys define what that is and why it's important? >> Yeah, great question.
Real quick, so HPE Integrated Lights-Out is the formal name of the product, and we tend to call it iLO for short. iLO is HPE's BMC. If you're familiar with this topic, it's a Baseboard Management Controller. If not, this is a small computer on the server motherboard, and it runs independently from the host CPU and the operating system. That's why it's named Lights-Out. Now, what can you do with iLO? iLO really helps a user manage, use, and monitor the server remotely and securely throughout its life, from deployment to retirement. So, you can really do things like, you know, turning server power on and off, installing an operating system, remote access, firmware updates, and when you decide to retire a server, you can completely wipe the data off that server so it's ready to be disposed of. iLO is really the best solution to manage a single server, but when you try to manage hundreds or thousands of servers in a larger-scale environment, then managing servers one by one through iLO is not practical. So, HPE has two options. One of them is HPE OneView. OneView is the best solution to manage a very complex on-prem IT infrastructure that involves thousands of servers as well as the other IT elements, like Fibre Channel storage through a storage area network and so on. Another option that we have is HPE GreenLake for Compute Ops Management. This is our latest, greatest product that we recently launched, and this is the best solution to manage a distributed IT environment with multiple edge points or multiple clouds. And I was recently involved in a customer conversation about Compute Ops Management with a global hotel chain with 9,000 locations worldwide. Each of the locations only has a couple of servers to manage, but combined it's, you know, 27,000 servers across the 9,000 locations. We didn't really have a great answer for that kind of environment before, but now HPE has GreenLake for Compute Ops Management to also deal with, you know, that kind of environment. >> Awesome. We're going to do a big dive on iLO in the next segment, but Chinmay, before we end this segment, what is PMT? >> Sure, so yeah, with the introduction of the 4th Gen Intel Xeon scalable processor, we of course introduced many new technologies like PCI Gen 5, DDR5, et cetera. And these are very key to general system provisioning, if you will. But with all of these new technologies come new sources of telemetry that the service provider now has to manage, right? So, PMT is a technology called Platform Monitoring Technology. That is a capability that we introduced with the Intel 4th Gen Xeon scalable processor that allows the service provider to monitor all of these sources of telemetry within the system, within the system on chip, the CPU SoC, in all of these contexts that we talked about, like hybrid cloud and cloud infrastructure as a service or platform as a service, in both the traditional in-band telemetry collection models and also out-of-band collection models, such as the ones that Koichiro was talking about through the BMC, et cetera. So, this is a key enhancement that we believe takes the Intel product line closer to what service providers require for managing their end-user experience. >> Awesome, well thanks so much for spending the time in this segment. We're going to take a quick break, we're going to come back, and we're going to discuss more of what's new with Gen11 and iLO 6.
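Chinmay's description of PMT as a new source of platform telemetry invites an obvious question: what does it look like from the operating system? On Linux, the upstream intel_pmt driver is understood to expose the telemetry regions under sysfs, and the sketch below simply enumerates what is present. The layout assumed here (/sys/class/intel_pmt/telem*/ entries with guid and size attributes and a binary telem blob) can vary by kernel version, and decoding the blob requires per-GUID metadata published separately, so treat this as a starting point rather than a reference implementation.

```python
from pathlib import Path

PMT_ROOT = Path("/sys/class/intel_pmt")  # assumed sysfs location for the upstream intel_pmt driver

def list_pmt_regions() -> list:
    """Enumerate PMT telemetry regions and their raw blob sizes, without decoding anything."""
    regions = []
    if not PMT_ROOT.exists():
        return regions
    for entry in sorted(PMT_ROOT.glob("telem*")):
        region = {"device": entry.name}
        for attr in ("guid", "size"):
            attr_path = entry / attr
            if attr_path.exists():
                region[attr] = attr_path.read_text().strip()
        blob = entry / "telem"
        if blob.exists():
            region["bytes_available"] = len(blob.read_bytes())  # may require root privileges
        regions.append(region)
    return regions

if __name__ == "__main__":
    found = list_pmt_regions()
    if not found:
        print("No intel_pmt devices exposed (no PMT hardware, older kernel, or a different path).")
    for region in found:
        print(region)
```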
You're watching "theCUBE," the leader in high tech enterprise coverage. We'll be right back. (light music) Welcome back. We're continuing "theCUBE's" coverage of "Compute engineered for your hybrid world." I'm John Furrier, joined by Chinmay Ashok, who's from Intel, and Koichiro Nakajima with HPE. We're going to dive deeper into transforming your compute management experience with 4th Gen Intel Xeon Scalable processors and HPE ProLiant Gen11. Okay, let's get into it. We want to talk about Gen11. What's new with Gen11? What's new with iLO 6? So, next-gen increases in performance capabilities. What's new with Gen11 and iLO 6? Let's go. >> Yeah, iLO 6 accommodates a lot of new features and the latest, greatest technology advancements like new generation CPUs, DDR5 memory, PCIe Gen 5, GPGPUs, SmartNICs. There's a lot of great features and functions. So, iLO 6 makes sure it supports all the use cases associated with those latest, greatest advancements. For instance, some of the higher thermal design point CPU SKUs require liquid cooling. We support all those kinds of things. And also iLO 6 accommodates the latest, greatest industry-standard system management specifications, for instance from DMTF: Redfish, RDE, and SPDM. And what do these mean for iLO 6 and Gen11? iLO 6 really offers the greatest manageability and monitoring user experiences as well as the greatest automation through the Redfish APIs. >> Chinmay, what's your thoughts on Gen11 and iLO 6? You're at Intel, you're enabling all this innovation. >> Yeah. >> What's the new features? >> Yeah, thanks John. So, to add to what Koichiro said, I think with the introduction of Gen11 and the 4th Gen Intel Xeon Scalable processor, we have all of these rich new feature sets, right? With DDR5, PCIe Gen 5, liquid cooling, et cetera. And then all of these new accelerators for various specific workloads that customers can use with this processor. So, as we were discussing previously, what this brings is all of these different sources of telemetry, right? So, our sources of data that the system provider or the service provider then needs to utilize to manage the compute experience for their end user. And so, what's new from that perspective is Intel realized that these new different sources of telemetry, and the new mechanisms by which the service provider has to extract this telemetry, required us to fundamentally think about how we provide the telemetry experience to the service provider. And that meant extending our existing best-in-class in-band telemetry capabilities that we have today, already built into in-market Intel processors, but now extending that with the introduction of PMT, the Platform Monitoring Technology, that allows us to expand on that in-band telemetry, but also include all of these new sources of telemetry data through all of these new accelerators, through the new features like PCIe Gen 5, DDR5, et cetera, but also bring in that out-of-band telemetry management experience. And so, I think that's a key innovation here, helping prepare for the world that the cloud is enabling. >> It's interesting, you know, Koichiro, you had mentioned on the previous segment COVID-19. We all know the impact of how that changed how IT had to be managed, you know, all of a sudden, remote work, right? So, as you have cloud go to hybrid, now we've got the edge coming, we're talking about a distributed computing environment, we got telemetry, you got management.
This is a huge shift and it's happening super fast. What do Gen11 and iLO 6 mean for architects as they start to look at going beyond hybrid and going to the edge? You're going to need all this telemetry. What's the impact? Can you guys just riff and share your thoughts on what this means for that kind of next-gen cloud that we see coming, which is essentially distributed computing? >> Yeah, that's a great topic to discuss. So, there's a couple of things. Really, to manage those remote environments and also distributed IT environments, the system management has to reach across remote locations, across internet connections and connectivity. So, the system management protocols, for instance traditionally IPMI or SNMP or those things, have got to be modernized into more RESTful APIs and modern integrations friendly to modern tool chains. So, we're investing in those, like the Redfish APIs. And also, again, security becomes of paramount importance, because those interfaces are exposed to bad people to snoop on and to try to do bad things like man-in-the-middle attacks, things like that. So we really focus on the security side in two respects on iLO 6 and Gen11. One thing is we continue our industry-unique silicon root of trust technology. That one is for the platform, making sure that only the authentic and legitimate image of the platform firmware can run on an HPE server. And when you're validating the firmware images, the root of trust resides in the silicon, so no one can change it. Even if bad people try to change the root of trust, it's bound in the chips so you cannot really change it. And that's why, even if bad people try to compromise, you know, install a compromised firmware image on HPE servers, you cannot do that. Another thing is we're making a lot of enhancements to make sure you can securely onboard an HPE server into your network or onto a service like GreenLake. To give you a couple of examples: for instance, IDevID, Initial Device ID. That one conforms to IEEE 802.1AR and it's immutable, so no one can change it. And by using the IDevID, you can really verify that you are not onboarding a rogue server or an unknown server, but the server that you want to onboard, right? It's absolutely important. Another thing is the platform certificate. The platform certificate really is a measurement of the configuration. So again, this is a great feature that makes sure that when you receive a server from the factory, no one during transportation touched the server and altered the configuration. >> Chinmay, what's your reaction to this new distributed next-gen cloud? You got data, security, edge, move the compute to the data, don't move the data around. These are big conversations. >> Yeah, great question, John. I think this is an important thing to consider for the end user and the service provider in all of these contexts, right? I think Koichiro mentioned some of the key elements that go into how we develop and design these new products. But for example, from a security perspective, we introduced the Trust Domain Extensions, the TDX feature, for confidential computing in Intel 4th Generation Xeon Scalable processors. And that enables the isolation of user workloads in these cloud environments, et cetera. But again, going back to the point Koichiro was making: if you go to the edge, you go to the cloud, and then have the edge connect to the cloud, you have independent networks for system management, independent networks for user data, et cetera. So, you need the ability to create that isolation. All of this telemetry data needs to be isolated from the user, but used by the service provider to provide the best experience. All of these are built on the foundations of technologies such as TDX, PMT, iLO 6, et cetera. >> Great stuff, gentlemen. Well, we have a lot more to discuss in our next segment. We're going to take a break here before wrapping up. We'll be right back with more. You're watching "theCUBE," the leader in high tech coverage. (light music) Okay, welcome back here on "theCUBE's" coverage of "Compute engineered for your hybrid world." I'm John Furrier, host of theCUBE. We're wrapping up our discussion here on transforming the compute management experience with 4th Gen Intel Xeon Scalable processors and obviously HPE ProLiant Gen11. Gentlemen, welcome back. Let's get into the takeaways for this discussion. Obviously, systems management has been around for a while, but transforming that experience on the management side is super important as the environment is just radically changing for the better. What are some of the key takeaways for the audience watching here that they should put into their kind of tickler file and/or put on their to-do list to keep an eye on? >> Yeah, so Gen11 and iLO 6 offer the latest, greatest technologies with new generation CPUs, DDR5, PCIe Gen 5, and so on and on. There's a lot in there, and also iLO 6 is the most mature version of iLO and it offers the best manageability and security. On top of iLO, HPE offers best-of-breed management options like HPE OneView and Compute Ops Management. Those really help users achieve a lot, regardless of the use case, like edge computing, or distributed IT, or a hybrid strategy and so on and on. And you can also have great system management so you can unleash the full potential of the latest, greatest technology. >> Chinmay, what's your thoughts on the key takeaways? Obviously as the world's changing, more next-gen chips are coming out, specialized workloads, performance. I mean, I've never met anyone that says they want to run on slower infrastructure. I mean, come on, performance matters. >> Yes, definitely. I think one of the key things I would say is yes, with Gen11 and Intel 4th Gen Xeon Scalable we're introducing all of these technologies, but I think one of the key things that has grown over the last few years is the view of the system provider, the abstraction that's needed, right? Like the end user today is migrating a lot of what they're traditionally used to from a physical compute perspective to the cloud. Everything goes to the cloud, and when that happens there's the experience that the end user sees, but everything underneath is abstracted away and then managed by the system provider, right? So we at Intel, and of course our partners at HPE, have spent a lot of time figuring out what are the best sets of features that provide that best system management experience, that allow for that abstraction to work seamlessly without the end user noticing. And I think from that perspective, the 4th Gen Intel Xeon Scalable processors are so far the best Intel product that we have introduced that is prepared for that type of abstraction. >> So, I'm going to put my customer hat on for a second. I'll ask you both. What's in it for me? I'm the customer. What's in it for me? What's the benefit to me? What does this all mean to me? What's my win? >> Yeah, I can start there. I think the key thing here is that when we create capabilities that allow you to build the best cloud, at the end of the day that efficiency, that performance, all of that translates to a better experience for the consumer, right? So, as the service provider is able to have all of these myriad capabilities to use and choose from and then manage the system experience, what that implies is that the end user sees a seamless experience as they go from one application to another, as they go about their daily lives. >> Koichiro, what's your thoughts on what's in it for me? You guys got a lot of engineering going on in Gen11; every gen increase is always a step function increase in value. What's in it for me? What do I care? What's in it for me? I'm the customer. >> Alright. Yeah, so I fully agree with Chinmay's point. You know, he lays out all the good points, right? Again, Gen11 and iLO 6 offer all the latest, greatest features, all the technology advancements are packed into the Gen11 platform, and iLO 6 unleashes the full potential of those benefits. And things are really dynamic in today's world, and IT systems also have to be agile, and system management has come really far, to the point where we could never have imagined in the past what system management can do. For instance, managing on-prem devices across multiple locations from a single point, like a single pane of glass on a cloud management system, management on the cloud: that's really what Compute Ops Management from HPE offers. It's all new and it really helps customers unleash the full potential of the gear and their investment and provides the best TCO and ROI, right? I'm very excited that all the things that all the teams have worked on for multiple years have finally come to life and to the public. And I can't really wait to see our customers start putting their hands on them and enjoying the benefit of the latest, greatest offerings. >> Yeah, 4th Gen Xeon, Gen11 ProLiant, I mean, all the things coming together, accelerators, more cores. You got data, you got compute, and you've got now this idea of security. I mean, you're hitting all the points, data and security big features here, right? Data being computed in a way with Gen 4 and Gen11. This is like the big theme, data security, kind of the big part of the core here in this announcement, in this relationship. >> Absolutely. I think the key thing these new generations of processors enable is new types of compute, which implies more types of data, and with more types of data, more types of compute. You have more types of system management, more differentiation that the service provider has to then deal with, the disaggregation that they have to deal with. So yes, absolutely, these are exciting times for end users, but also new frontiers for service providers to go tackle. And we believe that the features that we're introducing with this CPU and this platform will enable them to do so. >> Well, Chinmay, thank you so much for sharing your Intel perspective, and Koichiro with HPE. Congratulations on all that hard work and engineering coming together. Bearing fruit, as you said, Koichiro, this is an exciting time. And again, keep moving the needle.
This is an important inflection point in the industry, and now more than ever this compute is needed, and this kind of specialization is all awesome. So, congratulations, and thanks for participating in the "Transforming your compute management experience" segment. >> Thank you very much. >> Okay. I'm John Furrier with "theCUBE." You're watching the "Compute Engineered for your Hybrid World" series sponsored by HPE and Intel. Thanks for watching. (light music)
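As an editorial aside on the Redfish point raised in this segment (modernizing IPMI- and SNMP-era management into RESTful APIs), the sketch below shows roughly what such a query looks like against a BMC's Redfish service. The host address, credentials, and system index are placeholder assumptions, not values from this conversation, and a production script would verify TLS certificates rather than disabling verification.

```python
# Minimal sketch: query a server's model and power state over a Redfish REST
# endpoint of the kind exposed by a BMC such as HPE iLO. Host, credentials,
# and the system index are placeholder assumptions for illustration only.
import requests

BMC_HOST = "https://ilo.example.internal"   # hypothetical BMC address
AUTH = ("admin", "change-me")               # placeholder credentials

resp = requests.get(
    f"{BMC_HOST}/redfish/v1/Systems/1",     # common Redfish system resource path
    auth=AUTH,
    verify=False,                           # lab-only shortcut; verify certs in production
    timeout=10,
)
resp.raise_for_status()
system = resp.json()
print("Model:      ", system.get("Model"))
print("Power state:", system.get("PowerState"))
print("Health:     ", system.get("Status", {}).get("Health"))
```

The same pattern extends to power actions and firmware inventory, which is the kind of automation the Redfish APIs are meant to make tool-chain friendly.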
HPE Compute Engineered for your Hybrid World - Next Gen Enhanced Scalable processors
>> Welcome to "theCUBE's" coverage of "Compute Engineered for Your Hybrid World" sponsored by HPE and Intel. I'm John Furrier, host of "theCUBE." With the new 4th Gen Intel Xeon Scalable processors being announced, HPE is releasing four new HPE ProLiant Gen 11 servers, and here to talk about the features of those servers as well as the partnership between HPE and Intel, we have Darren Anthony, director of compute server product management with HPE, and Suzi Jewett, general manager of the Xeon products with Intel. Thanks for joining us folks. Appreciate you coming on. >> Thanks for having us. (Suzi's speech drowned out) >> This segment is about next-gen enhanced Scalable processors, obviously the Xeon 4th gen. This is really cool stuff. What's the most exciting element of the new Intel 4th gen Xeon processor? >> Yeah, John, thanks for asking. Of course, I'm very excited about the 4th gen Intel Xeon processor. I think the best thing that we'll be delivering is our new on-package accelerators, which allow us to service the majority of the server market, which is still buying in that mid core count range, and provide workload acceleration that matters for every one of the products that we sell. And that workload acceleration allows us to drive better efficiency and allows us to really dive into improved sustainability and workload optimizations for the data center. >> It's all the rage about the cores. Now we've got the acceleration, continued innovation with Xeon. Congratulations. Darren, what does the new Intel 4th Gen Xeon processor mean for HPE from the ProLiant perspective? You're on Gen 11 servers. What's in it? What's it mean for you guys and for your customers? >> Well, John, first we've got to talk about the great partnership. HPE and Intel have been partners delivering innovation for our server products for over 30 years, and we're continuing that partnership with HPE ProLiant Gen 11 servers to deliver compelling business outcomes for our customers. Customers are on a digital transformation journey, and they need the right compute to power applications, accelerate analytics, and turn data into value. HPE ProLiant Compute is engineered for your hybrid world and delivers optimized performance for your workloads. With HPE ProLiant Gen 11 servers and Intel 4th gen Xeon processors, you have the performance to accelerate workloads from the data center to the edge. With Gen 11, we have more. More performance to meet new workload demands, with PCIe Gen 5, which delivers increased bandwidth with room for more data and graphics accelerators for workloads like VDI and new demands at the edge. DDR5 memory brings greater bandwidth and performance increases for low latency and memory solutions for database and analytics workloads, and higher clock speed CPU chipset combinations for processor-intensive AI and machine learning applications. >> Got to love the low latency. Got to love the more performance. Got to love the engineered for the hybrid world. You mentioned that. Can you elaborate more on engineered for the hybrid world? What does that mean? Can you elaborate? >> Well, HPE ProLiant Compute is based on three pillars. First, an intuitive cloud operating experience with HPE GreenLake Compute Ops Management. Second, trusted security by design with a zero trust approach from silicon to cloud.
And third, optimized performance for your workloads, whether you deploy as traditional infrastructure or a pay-as-you-go model with HPE GreenLake, on-premise, at the edge, in a colo, and in the public cloud. >> Well, thanks Suzi and Darren, we'll be right back. We're going to take a quick break. We're going to come back and do a deep dive and get into the ProLiant Gen 11 servers. We're going to dig into it. You're watching "theCUBE," the leader in high tech enterprise coverage. We'll be right back. (upbeat music) >> Hello everyone. Welcome back, continuing coverage of "theCUBE's" "Compute Engineered for Your Hybrid World" with HPE and Intel. I'm John Furrier, host of "theCUBE," joined back by Darren Anthony from HPE and Suzi Jewett from Intel as we continue our conversation on the 4th gen Xeon Scalable processor and HPE Gen 11 servers. Suzi, we'll start with you first. Can you give us some use cases around the new 4th gen Intel Xeon Scalable processors? >> Yeah, I'd love to. What we're really seeing, with an ever-changing market and adapting to that, is we're leading with that workload-focused approach. Some examples that we see are with vRAN. For vRAN, we estimate the 2021 market size was about 150 million, and we expect a CAGR of almost 30% all the way through 2030. So we're really focused on that. On deployed edge use cases, growing about 10% to over 50% in 2026. And HPC use cases, of course, continue to grow at a steady CAGR of about 7%. Then last but not least is cloud. So we're targeting a growth rate of almost 20% over a five year CAGR. And the 4th Gen Xeon is targeted at all of those workloads, both through our architectural improvements that deliver node-level performance as well as our operational improvements that deliver data center performance, and wrapping that all around with the accelerators that I talked about earlier that provide workload-specific improvements that get us to where our customers need to operationalize in their data center. >> I love the focus on solutions, on seeing compute used that way, and the processors. Great stuff. Darren, how do you see the new ProLiant Gen 11 servers being used on your side? I mean obviously, you've got the customers deploying the servers. What are you seeing on those workloads? Those targeted workloads? (John chuckling) >> Well, you know, very much in line with what Suzi was talking about. The generational improvements that we're seeing in performance for Gen 11 are outstanding for many different use cases. Obviously VDI. What we're seeing a lot is around analytics. With moving to the edge, there's a lot more data. Customers need to convert that data into something tangible, something that's actionable. And so we're really seeing strong use cases around analytics in order to mine that data and to make better, faster decisions for the customers. >> You know what I love about this market is people really want to hear about performance. They love speed, they love the power, and low power, by the way, on the other side. So this has really been a big part of the focus now this year. We're seeing a lot more discussion. Suzi, can you tell us more about the key performance improvements on the processors? And Darren, if you don't mind, if you can follow up on the benefits of the new servers relative to the performance. Suzi?
>> Sure, so, you know, at a standard expected rate we're looking at 60% gen over gen from our previous 3rd gen Xeon, but more importantly, as we've been mentioning, is the performance improvement we get with the accelerators. As an example, an average accelerator proof point that we have is a 2.9 times improvement in performance per watt for accelerated workloads versus non-accelerated workloads. Additionally, we're seeing really great performance improvement in low jitter, so almost 20 to 50 times improvement versus the previous gen in jitter on particular workloads, which is really important, you know, to our cloud service providers. >> Darren, what's your follow up on this? This obviously translates into the Gen 11 servers. >> Well, you know, with this generation, huge improvements across the board. And what we're seeing is that not only are customers prepared for what they need now, you know, workloads are evolving and transitioning. Customers need more. They're doing more. They're doing more analytics. And so not only do you have the performance you need now, but it's actually built for the future. We know that customers are looking to take in that data and do something and work with the data wherever it resides within their infrastructure. We also see customers that are beginning to move servers out of a centralized data center, more to the edge, closer to where the data resides. And so this new generation is really tremendous for that. Seeing a lot of benefits for the customers from that perspective. >> Okay, Suzi, Darren, I want to get your thoughts on one of the hottest trends happening right now. Obviously machine learning and AI has always been hot, but recently more and more focus has been on AI. As you start to see this kind of next-gen AI coming on, and the younger generation of developers, you know, they're all into this. This is really one of the hottest trends, AI. We've seen the momentum and acceleration kind of going next level. Can you guys comment on how Xeon here and Gen 11 are tying into that? What's that mean for AI? >> So, exactly. With the 4th gen Intel Xeon, one of our key, you know, on-package accelerators in every core is our AMX. It delivers up to 10 times improvement on inference and training versus previous gens, and, you know, blows the competition out of the water. So we are really excited for our AI performance leading with Xeon. >> And- >> And John, what we're seeing is that this next generation, you know, you're absolutely right. Workloads are a lot more focused, taking more advantage of AI and machine learning capabilities. And with this generation, together with the Intel Xeon 4th gen, what we're seeing is the opportunity, with that increase in IO bandwidth, that now those applications and those use cases and those workloads can take advantage of this capability. We haven't had that before, but now more than ever, we've actually, you know, opened the throttle with the performance and with the capabilities to support those workloads. >> That's great stuff. And you know, the AI stuff also does a lot of differentiated heavy lifting, and it needs processing power. It needs the servers. This is just, (John chuckling) it creates more and more value. This is right in line. Congratulations. Super excited by that call out. Really appreciate it. Thanks Suzi and Darren. Really appreciate it. A lot more to discuss with you guys as we go a little bit deeper.
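For readers who want to see what "leading with AMX" can look like in practice, here is a minimal sketch that runs a stand-in model in bfloat16 through Intel Extension for PyTorch, which is one publicly documented route to the AMX instructions on 4th Gen Xeon. The model choice, package availability, and any speedup are assumptions about a typical environment, not figures from this interview.

```python
# Sketch: bfloat16 inference so AMX can be exercised on 4th Gen Xeon CPUs,
# using Intel Extension for PyTorch (ipex). ResNet-50 is only a stand-in
# workload; install torch, torchvision, and intel-extension-for-pytorch first.
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(weights=None).eval()         # placeholder model
model = ipex.optimize(model, dtype=torch.bfloat16)   # apply Intel CPU optimizations

x = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out = model(x)
print("output shape:", tuple(out.shape))
```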
We're going to talk about security and wrap things up after this short break. I'm John Furrier, "theCUBE," the leader in enterprise tech coverage. (upbeat music) >> Welcome back to "theCUBE's" coverage of "Compute Engineered for Your Hybrid World." I'm John Furrier, host of "theCUBE," joined by Darren Anthony from HPE and Suzi Jewett from Intel as we turn our discussion to security. A lot of great features with the new 4th gen Xeon Scalable processors and the ProLiant Gen 11. Let's get into it. Suzi, what are some of the cool features of the 4th gen Intel Xeon Scalable processors? >> Sure, John, I'd love to talk about it. With 4th gen, Intel offers the most comprehensive confidential computing portfolio to really enhance data security and address regulatory compliance and sovereignty concerns. A couple of examples of those features and technologies that we've included are a larger baseline enclave with the SGX technology, which is our application isolation technology, and our Intel CET, which substantially reduces the risk of whole classes of software-based attacks. That, wrapped around at a platform level, really allows us, you know, to secure workload acceleration software and ensure platform integrity. >> Darren, this is a great enablement for HPE. Can you tell us about the security with the new HPE ProLiant Gen 11 servers? >> Absolutely, John. So HPE ProLiant is engineered with a fundamental security approach to defend against increasingly complex threats and an uncompromising focus on state-of-the-art security innovations that are built right into our DNA, from silicon to software, from the factory to the cloud. It's our goal to protect the customer's infrastructure, workloads, and data from threats to hardware and risk from third-party software and devices. So Gen 11 is just a continuation of the great technological innovations that we've had around providing a zero trust architecture. We're extending our Silicon Root of Trust, and it's just a motion forward for innovating on that Silicon Root of Trust that we've had. So with Silicon Root of Trust, we protect millions of lines of firmware code from malware and ransomware with a digital footprint that's unique to the server. With this Silicon Root of Trust, we're securing over 4 million HPE servers around the world. And beyond that silicon, extending this to our partner ecosystem, the authentication of platform components such as network interface cards and storage controllers gives us protection against additional entry points of security threats that can compromise the entire server infrastructure. With this latest version, we're also doing authentication integrity with those components using the Security Protocol and Data Model protocol, or SPDM. But we know that trusted and protected infrastructure begins with a secure supply chain, a layer of protection that starts at the manufacturing floor. HPE provides optimized protection for ProLiant servers from trusted suppliers to the factories and in transit to the customer. >> Any final messages, Darren, you'd like to share with your audience on the hybrid world, engineering for the hybrid world, security overall, the new Gen 11 servers with the 4th generation Xeon Scalable processors? >> Well, it's really about choice. Having the right choice for your compute. And we know HPE ProLiant Gen 11 servers, together with the new Xeon processors, are the right choice.
Delivering the capabilities, the performance, and the efficiency that customers need to run their most complex workloads and their most performance-hungry workloads. We're really excited about this next generation of platforms. >> ProLiant Gen 11. Suzi, great customer for Intel. You've got the 4th generation Xeon Scalable processors. We've been tracking multiple generations for both of you guys for many, many years now, the past decade. A lot of growth, a lot of innovation. I'll give you the last word on the series here on this segment. Can you share the collaboration between Intel and HPE? What does it mean, and what does that mean for customers? Can you give your thoughts and share your views on the relationship with HPE? >> Yeah, we value, obviously, HPE as one of our key customers. We partner with them from the beginning of when we are defining the product all the way through the development and validation. HPE has been a great partner in making sure that we deliver collaboratively to the needs of their customers and our customers all together, to make sure that we get the best product in the market that meets our customer needs, allowing for the flexibility, the operational efficiency, and the security that our markets demand. >> Darren, Suzi, thank you so much. You know, "Compute Engineered for Your Hybrid World" is really important. Compute is... (John stuttering) We need more compute. (John chuckling) Give us more power, and less power on the sustainability side. So a lot of great advances. Thank you so much for spending the time and giving us an overview on the innovation around the Xeon and the ProLiant Gen 11. Appreciate your time. Appreciate it. >> You're welcome. Thanks for having us. >> You're watching "theCUBE's" coverage of "Compute Engineered for Your Hybrid World" sponsored by HPE and Intel. I'm John Furrier with "theCUBE." Thanks for watching. (upbeat music)
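One low-level way to sanity-check that the accelerator and security features discussed in this segment are actually exposed on a given server is to look at the CPU feature flags the operating system reports. The flag names below are the ones the Linux kernel typically prints in /proc/cpuinfo for AMX, AVX-512, and SGX; which ones appear depends on kernel version, firmware settings, and the specific SKU, so treat this as a quick, assumption-laden check rather than an official validation step.

```python
# Quick check for accelerator- and security-related CPU feature flags as
# reported by Linux in /proc/cpuinfo. Flag availability varies with kernel
# version, firmware settings, and processor SKU.
FLAGS_OF_INTEREST = ["amx_tile", "amx_bf16", "amx_int8", "avx512f", "sgx"]

def read_cpu_flags(path: str = "/proc/cpuinfo") -> set:
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = read_cpu_flags()
for flag in FLAGS_OF_INTEREST:
    print(f"{flag:10s} {'present' if flag in flags else 'absent'}")
```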
HPE Compute Engineered for your Hybrid World - Containers to Deploy Higher Performance AI Applications
>> Hello, everyone. Welcome to theCUBE's coverage of "Compute Engineered for your Hybrid World," sponsored by HPE and Intel. Today we're going to discuss the new 4th Gen Intel Xeon Scalable processors' impact on containers and AI. I'm John Furrier, your host of theCUBE, and I'm joined by three experts to guide us along. We have Jordan Plum, Senior Director of AI and Products for Intel, Bradley Sweeney, Big Data and AI Product Manager, Mainstream Compute Workloads at HPE, and Gary Wang, Containers Product Manager, Mainstream Compute Workloads at HPE. Welcome to the program gentlemen. Thanks for coming on. >> Thanks John. >> Thank you for having us. >> This segment is going to be talking about containers to deploy high performance AI applications. This is a really important area right now. We're seeing a lot more AI deployed, kind of next-gen AI coming. How is HPE supporting and testing and delivering containers for AI? >> Yeah, so what we're doing from HPE's perspective is we're taking these container platforms and combining them with the next generation Intel servers to fully validate the deployment of the containers. So what we're doing is we're publishing the reference architectures, we're creating these automation scripts, and we're also creating a monitoring and security strategy for these container platforms, so that customers can easily deploy these Kubernetes clusters and easily secure their Kubernetes environments. >> Gary, give us a quick overview of the new ProLiant DL360 and DL380 Gen 11 servers. >> Yeah, for container platforms, what we're seeing mostly is the DL360 and DL380 matching really well for container use cases, especially for AI. The DL360, with the expanded DDR5 memory and the new PCIe Gen 5 slots, really helps the speed to deploy these container environments and also to grow the data that's required to store within these container environments. So for example, with the DL380, if you want to deploy a data fabric, whether it's the Ezmeral Data Fabric or a different vendor's data fabric software, you can do so with the DL360 and DL380 with the new Intel Xeon processors. >> How does HPE help customers with Kubernetes deployments? >> Yeah, like I mentioned earlier, we do a full validation to ensure the container deployment is easy and it's fast. So we create these automation scripts and then we publish them on GitHub for customers to use and to reference. So they can take that and then they can adjust as they need to. But following the deployment guide that we provide will make the Kubernetes deployment much easier, much faster. We also have demo videos that are published, and then the reference architecture documents that are published to guide the customer step by step through the process. >> Great stuff. Thanks everyone. We're going to take a quick break here and come back. We're going to do a deep dive on the 4th gen Intel Xeon Scalable processor and the impact on AI and containers. You're watching theCUBE, the leader in tech coverage. We'll be right back. (intense music) Hey, welcome back to theCUBE's continuing coverage of the "Compute Engineered for your Hybrid World" series. I'm John Furrier with theCUBE, joined by Jordan Plum with Intel, Bradley Sweeney with HPE, and Gary Wang from HPE. We're going to do a drill down and do a deeper dive into the AI containers with the 4th gen Intel Xeon Scalable processors. We appreciate your time coming in. Jordan, great to see you.
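The validation scripts Gary mentions live on GitHub and are not reproduced here. As a hedged illustration of the kind of post-deployment check such automation might run, the sketch below uses the official Kubernetes Python client to confirm that cluster nodes report Ready; the kubeconfig location and cluster details are assumptions, and this is not taken from HPE's published scripts.

```python
# Illustrative post-deployment check a pipeline might run after standing up a
# Kubernetes cluster on ProLiant nodes: list nodes and their Ready condition.
# Assumes the official kubernetes Python client and a kubeconfig already in place.
from kubernetes import client, config

config.load_kube_config()          # use load_incluster_config() when run in-cluster
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    kubelet = node.status.node_info.kubelet_version
    print(f"{node.metadata.name}: Ready={ready}, kubelet={kubelet}")
```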
I've got to ask you right out of the gate, what is the view right now in terms of Intel's approach to containers for AI? It's hot right now. AI is booming. You're seeing kind of next-gen use cases. What's your approach to containers relative to AI? >> Thanks John, and thanks for the question. With the 4th generation Xeon Scalable processor launch we have tested and validated this platform with over 400 deep learning and machine learning models and workloads. These models and workloads are publicly available in the framework repositories and they can be downloaded by anybody. Yet customers are not only looking for model validation, they're looking for model performance, and performance is usually a combination of a given throughput at a target latency. And to do that in the data center, all the way to the factory floor, this is not always delivered from these generic proxy models that are publicly available in the industry. >> You know, performance is critical. We're seeing more and more developers saying, "Hey, I want to go faster on a better platform, faster all the time." No one wants to run slower stuff, that's for sure. Can you talk more about the different container approaches Intel is pursuing? >> Sure. First, our approach is to meet the customers where they are and help them build and deploy AI everywhere. Some customers just want to focus on deployment; they have more mature use cases, and they just want to download a model that works, that's high performing, and run. Others are really focused more on development and innovation. They want to build and train models from scratch or at least highly customize them. Therefore we have several container approaches to accelerate the customer's time to solution and help them meet their business SLA along their AI journey. >> So developers can just download these containers and go? >> Yeah, so let me talk about the different kinds of containers we have. We start off with pre-trained containers. We have about 55 or more of these containers where the model is actually pre-trained and highly performant; some are optimized for low latency, others are optimized for throughput, and the customers can just download these from Intel's website or from HPE and go into production right away. >> That's great. A lot of choice. People can jump right in. That's awesome. Good choice for developers. They want more, faster velocity. We know that. What else does Intel provide? Can you share some thoughts there? What else do you guys provide developers? >> Yeah, so we talked about how some are just focused on deployment and maybe they have more mature use cases. Other customers really want to do some more customization or optimization. So we have another class of containers called development containers, and this includes not just the model itself but the model integrated with the framework and some other capabilities and techniques like model serving. So now customers can download not only the model but an entire AI stack. They can do some optimizations, but they can also be sure that Intel has optimized that specific stack on top of the HPE servers. >> So it sounds simple to just get started using the DL model and containers. Is that it? What else are customers looking for? Can you take it a little bit deeper? >> Yeah, not quite. Well, while the customer's ability to reproduce performance on their site that HPE and Intel have measured in our own labs is fantastic.
That's not actually all the customer is trying to do. They're actually building very complex end-to-end AI pipelines, okay? And a lot of data scientists are really good at building models, really good at building algorithms, but they're less experienced in building end-to-end pipelines, especially because the number of end-to-end use cases is kind of infinite. So we are building end-to-end pipeline containers for use cases like media analytics, sentiment analysis, and anomaly detection. Therefore a customer can download these end-to-end containers, right? They can either use them as a reference, just to see how we built them, and maybe they have some changes in their own data center where they'd like to use different tools, but they can just see, "Okay, this is what's possible with an end-to-end container on top of an HPE server." And in other cases, if the overlap in the use case is pretty close, they can just take our containers and go directly into production. So all three types of containers that I discussed provide developers an easy starting point to get them up and running quickly and make them productive. And that's a really important point. You talked a lot about performance, John, but really when we talk to data scientists, what they really want is to be productive, right? They're under pressure to change the business, to transform the business, and containers are a great way to get started fast. >> People take productivity, you know, seriously now, with developer productivity being the hottest trend, and obviously they want performance. Totally nailed it. Where can customers get these containers? >> Right. Great, thank you John. Our pre-trained model containers, our development containers, and our end-to-end containers are available at intel.com in the developer catalog. But we also post these on many third-party marketplaces that other people like to pull containers from. And they're frequently updated. >> Love the developer productivity angle. Great stuff. We've still got more to discuss with Jordan, Bradley, and Gary. We're going to take a short break here. You're watching theCUBE, the leader in high tech coverage. We'll be right back. (intense music) Welcome back to theCUBE's coverage of "Compute Engineered for your Hybrid World." I'm John Furrier with theCUBE, and we'll be discussing and wrapping up our discussion on containers to deploy high performance AI. This is a great segment on really a lot of demand for AI and the applications involved. And we've got the 4th gen Intel Xeon Scalable processors with HPE Gen 11 servers. Bradley, what is the top AI use case that Gen 11 HPE ProLiant servers are optimized for? >> Yeah, thanks John. I would have to say intelligent video analytics. It's a use case that's applied across industries and verticals. For example, with a smart hospital solution that we conducted with Nvidia and Artisight, in a previous customer success we've seen 5% more hospital procedures and a 16 times return on investment using operating room coordination. With that IVA, so with the Gen 11 DL380 that we provide using the Intel 4th gen Xeon processors, we can really support workloads at scale. Whether that is a smart hospital solution, whether that's manufacturing at the edge or security camera integration, we can do it all with Intel.
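The exact image names in Intel's developer catalog aren't spelled out in this conversation, so the sketch below uses intel/intel-optimized-tensorflow from Docker Hub as a publicly available stand-in and the Docker SDK for Python to pull and smoke-test it. Substitute whichever pre-trained or end-to-end container applies to your use case.

```python
# Sketch: pull an Intel-optimized framework container and run a quick import
# check with the Docker SDK for Python. The image name is a public stand-in,
# not necessarily one of the specific catalog containers described above.
import docker

client = docker.from_env()
image = "intel/intel-optimized-tensorflow:latest"

print(f"Pulling {image} ...")
client.images.pull(image)

output = client.containers.run(
    image,
    command=["python", "-c", "import tensorflow as tf; print(tf.__version__)"],
    remove=True,          # clean up the one-off container after it exits
)
print(output.decode().strip())
```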
>> You know what's really great about AI right now is you're starting to see people figure out kind of where the value is: it does a lot of the heavy lifting on setting things up to make humans more productive. This has clearly now gone next level. You're seeing it all in the media now and all these new tools coming out. How does HPE make it easier for customers to manage their AI workloads? I imagine there's going to be a surge in demand. How are you guys making it easier to manage their AI workloads? >> Well, I would say the biggest way we do this is through GreenLake, which is our IT as a service model. So customers deploying AI workloads can get fully-managed services to optimize not only their operations but also their spending and the cost that they're putting towards it. In addition to that, we have our Gen 11 ProLiant servers equipped with iLO 6 technology. What this does is allow customers to securely manage their complete server environment from anywhere in the world, remotely. >> Any last thoughts or message on the overall 4th gen Intel Xeon based ProLiant Gen 11 servers? How will they improve workload performance? >> You know, with this generation, obviously the performance is only getting ramped up as the needs and requirements for customers grow. We partner with Intel to support that. >> Jordan, give me the last word on the containers' effect on AI applications. Your thoughts as we close out. >> Yeah, great. I think it's important to remember that containers themselves don't deliver performance, right? The AI stack is a very complex set of software that's compiled together, and what we're doing together is making it easier for customers to get access to that software, to make sure it all works well together, and to make sure it can be easily installed and run on a cloud native infrastructure that's hosted by HPE ProLiant servers. Hence the title of this talk: How to use Containers to Deploy High Performance AI Applications. Thank you. >> Gentlemen, thank you for your time on the "Compute Engineered for your Hybrid World" series, sponsored by HPE and Intel. Again, I love this segment for AI applications: containers to deploy higher performance. This is a great topic. Thanks for your time. >> Thank you. >> Thanks John. >> Okay, I'm John. We'll be back with more coverage. See you soon. (soft music)
HPE Compute Security - Kevin Depew, HPE & David Chang, AMD
>> Hey everyone, welcome to this event, HPE Compute Security. I'm your host, Lisa Martin. Kevin Depew joins me next, Senior Director, Future Server Architecture at HPE. Kevin, it's great to have you back on the program. >> Thanks, Lisa. I'm glad to be here. >> One of the topics that we're gonna unpack in this segment is all about cybersecurity. And if we think of how dramatically the landscape has changed in the last couple of years, I was looking at some numbers that HPE had provided. Cybercrime will reach $10.5 trillion by 2025. It's a couple years away. The average total cost of a data breach is now over $4 million, with 15% year over year crime growth predicted over the next five years. It's no longer if we get hit, it's when, it's how often, what's the severity. Talk to me about the current situation with the cybersecurity landscape that you're seeing. >> Yeah, I mean the numbers you're talking about are just staggering, and that's exactly what we're seeing and that's exactly what we're hearing from our customers. It's just absolutely key. Customers have too much to lose. The dollar cost is just, like I said, staggering. And here at HPE we know we have a huge part to play, but we also know that we need partnerships across the industry to solve these problems. So we have partnered with our various partners to deliver these Gen 11 products, whether we're talking about partners like AMD or partners like our NIC vendors and storage card vendors. We know we can't solve the problem alone. And we know the issue is huge, and like you said, the numbers are staggering. So we're really partnering with all the right players to ensure we have a secure solution, so we can stay ahead of the bad guys to try to limit the attacks on our customers. >> Right. Limit the damage. What are some of the things that you've seen particularly change in the last 18 months or so? Anything that you can share with us that's eye-opening, more eye-opening than some of the stats we already shared? >> Well, there's been a massive number of attacks just in the last 12 months, but I wouldn't really say it's so much changed, because the amount of attacks has been increasing dramatically for many, many, many years. It's just a very lucrative area for the bad guys, whether it's ransomware or stealing personal data, whatever it is. There's unfortunately a lot of money to be made from it, and a lot of money to be lost by the good guys, the good guys being our customers. So it's not so much that it's changed, it's just that it's accelerating even faster, because it's becoming even more lucrative. So we have to stay ahead of these bad guys. One of the statistics in Microsoft operating environments: the number of attacks in the last year is up 50% year over year. That's a huge acceleration, and we've gotta stay ahead of that. We have to make sure our customers don't get impacted to the level that these staggering numbers of attacks would suggest. The bad guys are out there. We've gotta protect our customers from the bad guys. >> Absolutely. The acceleration that you talked about is kind of frightening. It's very eye-opening. We do know that security, you know, we've talked about it for so long as a C-suite priority, a board level priority.
We know that, as some of the data that HPE also sent over shows, organizations are listing cyber risks as a top five concern in their organization. IT budget spend is going up where security is concerned. And so security is on everyone's mind. In fact, theCUBE did, I guess in the middle part of last year, a series on this, really focusing on cybersecurity as a board issue, and it went into how companies are structuring security teams, changing their assumptions about the right security model, offense versus defense. But security's gone beyond the board; it's top of mind and it's an integral part of every conversation. So my question for you is, when you're talking to customers, what are some of the key challenges that they're saying, "Kevin, these are some of the things the landscape is accelerating, we know it's a matter of time"? What are some of those challenges and key pain points that they're coming to you to help solve? >> Yeah, at the highest level it's simply that security is incredibly important to them. We talked about the numbers. There's so much money to be lost that what they come to us and say is, security's important for us; what can you do to protect us? What can you do to prevent us from being one of those statistics? So at a high level, that's kind of what we're seeing. With a little more detail, we know that there are customers doing digital transformations. We know that there are customers going hybrid cloud. They've got a lot of initiatives of their own. They've gotta spend a lot of time and a lot of bandwidth tackling things that are important to their business. They just don't have the bandwidth to worry about yet another thing, which is security. So we are doing everything we can, and partnering with everyone we can, to help solve those problems for customers. >> Because we're hearing, hey, this is huge, this is too big of a risk. How do you protect us? And by the way, we only have limited bandwidth, so what can we do? What we can do is assure them that the platform is secure, that we are creating a foundation for a very secure platform and that we've worked with our partners to secure all the pieces. So yes, they still have to worry about security, but there are pieces that we've taken care of that they don't have to worry about, and there are capabilities that we've provided that they can use, and we've made that easy so they can build secure solutions on top of it. >> What are some of the things when you're in customer conversations, Kevin, that you talk about with customers in terms of what makes HPE's approach to security really unique? >> Well, I think a big thing is security is part of our DNA. It's part of everything we do, whether we're designing our own ASICs for our BMC, the iLO ASIC, iLO 6, used on Gen 11, or whether it's our firmware stack, the iLO firmware, our system UEFI firmware. In all those pieces, in everything we do, we're thinking about security. When we're building products in our factory, we're thinking about security. When we're designing our supply chain, we're thinking about security. When we make requirements on our suppliers, we're driving security to be a key part of those components. So security is in our DNA, security's top of mind, security is something we think about in everything we do. We have to think like the bad guys: what could the bad guy take advantage of? What could the bad guy exploit? So we try to think like them so that we can protect our customers.
>>And so security is something that that really is pervasive across all of our development organizations, our supply chain organizations, our factories, and our partners. So that's what we think is unique about HPE is because security is so important and there's a whole lot of pieces of our reliance servers that we do ourselves that many others don't do themselves. And since we do it ourselves, we can make sure that security's in the design from the start, that those pieces work together in a secure manner. So we think that gives us a, an advantage from a security standpoint. >>Security is very much intention based at HPE e I was reading in some notes, and you just did a great job of talking about this, that fundamental security approach, security is fundamental to defend against threats that are increasingly complex through what you also call an uncompromising focus to state-of-the-art security and in in innovations built into your D N A. And then organizations can protect their infrastructure, their workloads, their data from the bad guys. Talk to us briefly in our final few minutes here, Kevin, about fundamental uncompromising protected the value in it for me as an HPE customer. >>Yeah, when we talk about fundamental, we're talking about the those fundamental technologies that are part of our platform. Things like we've integrated TPMS and sorted them down in our platforms. We now have platform certificates as a standard part of the platform. We have I dev id and probably most importantly, our platforms continue to support what we really believe was a groundbreaking technology, Silicon Root of trust and what that's able to do. We have millions of lines of firmware code in our platforms and with Silicon Root of trust, we can authenticate all of those lines of firmware. Whether we're talking about the the ILO six firmware, our U E I firmware, our C P L D in the system, there's other pieces of firmware. We authenticate all those to make sure that not a single line of code, not a single bit has been changed by a bad guy, even if the bad guy has physical access to the platform. >>So that silicon route of trust technology is making sure that when that system boots off and that hands off to the operating system and then eventually the customer's application stack that it's starting with a solid foundation, that it's starting with a system that hasn't been compromised. And then we build other things into that silicon root of trust, such as the ability to do the scans and the authentications at runtime, the ability to automatically recover if we detect something has been compromised, we can automatically update that compromised piece of firmware to a good piece before we've run it because we never want to run firmware that's been compromised. So that's all part of that Silicon Root of Trust solution and that's a fundamental piece of the platform. And then when we talk about uncompromising, what we're really talking about there is how we don't compromise security. >>And one of the ways we do that is through an extension of our Silicon Root of trust with a capability called S Spdm. And this is a technology that we saw the need for, we saw the need to authenticate our option cards and the firmware in those option cards. Silicon Root Prota, Silicon Root Trust protects against many attacks, but one piece it didn't do is verify the actual option card firmware and the option cards. 
So we knew that to solve that problem we would have to partner with others in the industry: our NIC vendors, our storage controller vendors, our GPU vendors. So we worked with industry standards bodies and those partners to design a capability that allows us to authenticate all of those devices, and we worked with those vendors to get the support both on their side and on our platform side, so that now Silicon Root of Trust has been extended to where we protect and trust those option cards as well. >>So that's uncompromising. And when we talk about protect, what we're talking about there is our capabilities around protecting against, for example, supply chain attacks. We have our trusted supply chain solution, which allows us to guarantee that what our server is when it leaves our factory will be what it is when it arrives at the customer. And if a bad guy does anything in that transit from our factory to the customer, they'll be able to detect it. So we enable certain capabilities by default, like a capability called server configuration lock, which can ensure that nothing in the server changed, whether it's firmware, hardware, configurations, swapping out processors, whatever it is; we'll detect if a bad guy did any of that, and the customer will know it before they deploy the system. That gets enabled by default. >>We have an intrusion detection technology option that, when you buy the trusted supply chain, is included by default. That lets you know if anybody opened that system up, even if the system's not plugged in: did somebody take the hood off and potentially do something malicious to it? We also enable a capability called UEFI Secure Boot, which can authenticate some of the drivers that are located on the option card itself. And iLO high security mode gets enabled by default as well. So all these things are enabled in the platform to ensure that if it's attacked going from our factory to the customer, it will be detected and the customer won't deploy a system that's been maliciously attacked. That's how we protect the customer through those capabilities. >>Outstanding. You mentioned partners. My last question for you, with about a minute left, Kevin, is to bring AMD into the conversation. Where do they fit in this? >>AMD is an absolutely crucial partner. No one company, even HPE, can do it all themselves. There are a lot of partnerships and a lot of synergies working with AMD. We've been working with AMD for almost 20 years, since we delivered our first AMD-based ProLiant back in 2004, the HP ProLiant DL585. So we've been working with them a long time. We work with them years ahead of when a processor is announced, and we benefit each other. We look at their designs and help them make their designs better; they let us know about their technology so we can take advantage of it in our designs. They have a lot of security capabilities, like their memory encryption technologies, their AMD Secure Processor, and their Secure Encrypted Virtualization, which is an absolutely unique and breakthrough technology to protect virtual machines and hypervisor environments from malicious hypervisors. So they have some really great capabilities built into their processor, and we take advantage of those capabilities and ensure they are used in our solutions and in securing the platform.
A really great partnership, and great synergies there. Kevin, thank you so much for joining me on the program, talking about compute security and what HPE is doing to ensure that security is fundamental, that it is uncompromising, and that your customers are protected end to end. We appreciate your insights and your time. >>Thank you very much, Lisa. >>We've just had a great conversation with Kevin Depew. Now I get to talk with David Chang, data center solutions marketing lead at AMD. David, welcome to the program. >>Thank you. And thank you for having me. >>So one of the hot topics of conversation that we can't avoid is security. Talk to me about some of the things AMD is seeing from the customer's perspective, and why security is so important for businesses across industries. >>Yeah, sure. Security is top of mind for almost every customer I'm talking to right now. There are several key market drivers and trends out there today that really call for a better, more innovative solution for security. The high cost of data breaches, for example, will cost enterprises in data center downtime, and that is time you're not making money, potentially even leading to the loss of customer confidence in your company's offerings. So there are real costs our customers face every day by not being prepared and not having proper security measures set up in the data center. In fact, according to one report, over 400 high-tech threats are being introduced every minute. So every day numerous new threats are popping up, and the bad guys are just getting more and more sophisticated. You have to take measures today and protect yourself end to end with solutions like what AMD and HPE have to offer. >>Yeah, you talked about some of the costs there; they're exorbitant. I've seen recent figures that the average cost of a data breach or ransomware attack is over $4 million, and then there's the cost to brand reputation you brought up. That's a great point, because nobody wants to be the next headline, and security, I'm sure in your experience, is a board-level conversation. It's absolutely table stakes for every organization. Let's talk a little bit about some of the specific things AMD and HPE are doing. I know you have a really solid focus on building security features into the EPYC processors. Talk to me a little bit about that focus and some of the great things you're doing there. >>Yeah, so we've partnered with HPE for a long time now; I think it's almost 20 years that we've been in business together. We work together to design in security features even before the silicon is born. So we have a great relationship with all our partners, including HPE, and HPE has a really great end-to-end security story that AMD fits into very well. If you think about how security started in the data center, you had strategies around encryption of the data in flight, network security, VPNs, and even security on the hard drives for data that's at rest.
>>Encryption has been part of that strategy for a long time, but for ages nobody really thought about the actual data in use: the information being passed from the CPU to the memory, and even, in virtualized environments, to the virtual machines that everybody uses now. So for a long time nobody really thought about that third leg of encryption. And AMD comes in and says, hey, as the bad guys are getting more sophisticated, you have to start worrying about that. For example, people tend to think of memory as non-persistent: after a certain time, the data in the memory just goes away, right? >>But that's not true anymore, because a lot of memory modules can still retain data up to 90 minutes after power loss. And with something as simple as compressed air or liquid nitrogen, you can actually freeze memory DIMMs long enough to extract the data from that memory module for up to two or three hours, which is more than enough time to read valuable data and even encryption keys off of that module. So our world is getting more complex, and with more data out there and an insatiable need for compute and storage, data management becomes all the more important to keep everything running and secure, and creating security against those threats becomes more and more important. And again, especially in virtualized environments like hyperconverged infrastructure or virtual desktop environments, it's really hard to keep up with all those different attacks, all those different attack surfaces. >>It sounds like what AMD has been able to do is identify yet another vulnerability, another attack surface in memory, and plug that hole for organizations that weren't able to do that before. >>Yeah. We started out with the belief that security needed to be scalable and able to adapt to changing environments. So we came up with the design philosophy that we're going to continue building on those security features generation over generation and stay ahead of those evolving attacks. A great example is in the third-gen EPYC CPU family, where we created a feature called SEV-SNP, which stands for Secure Nested Paging. It's really all about a newer class of hypervisor-based attacks, where bad actors write bad data into memory to corrupt the data in memory. SEV-SNP was put in place to help secure against that before it became a problem, and you've heard in the news just recently that it's becoming more and more of an issue. The great news is that we had that feature built in before it became a big problem.
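To make the data-in-use point above concrete, here is a minimal, illustrative sketch of why encrypted memory defeats the cold-boot style attacks David describes: ciphertext sitting in DRAM is useless without the key. This is only a host-side analogy written in Python, assuming the third-party cryptography package is installed; real AMD memory encryption (SME/SEV) happens transparently in hardware, with keys held by the memory controller and the AMD Secure Processor, never by application code.

```python
# Conceptual sketch: data encrypted before it lands in memory is unreadable
# to anyone who dumps that memory without the key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)          # 256-bit AES key, as discussed above
aead = AESGCM(key)

page = b"customer record: card=4111-1111-1111-1111"  # plaintext a VM might hold
nonce = os.urandom(12)
stored_in_dram = aead.encrypt(nonce, page, None)      # what a cold-boot dump would see

print(stored_in_dram.hex()[:32], "...")               # unintelligible without the key
print(aead.decrypt(nonce, stored_in_dram, None))      # only the key holder recovers it
```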
>>And now you're on the fourth gen of those EPYC processors. Talk to me a little bit about some of the innovations that are in fourth gen. >>Yeah, in fourth gen we added on top of that. The base of what we call Infinity Guard is all around the secure boot, the secure root of trust that we work with HPE on, the strong memory encryption, and SEV, the Secure Encrypted Virtualization. And remember those SNP capabilities I talked about earlier: in the fourth gen we've added two times the number of SEV-SNP guests, for an even higher number of confidential VMs to support even more customers than before. We've also added more guest protection from simultaneous multithreading, or SMT, side-channel attacks. And while it's not officially part of Infinity Guard, we've added more APIC acceleration, which greatly benefits the security of those confidential VMs with larger numbers of vCPUs; that basically means you can build larger VMs and still be secure. And lastly, we added even stronger AES encryption: we went from 128-bit to 256-bit, which is military-grade encryption, on top of that. That's really the de facto cryptography used for most applications by customers like the US federal government, and it's an essential element for memory security in HPC applications. I always say, if it's good enough for the US government, it's good enough for you. >>Exactly. Well, it's got to be. Talk a little bit about how AMD is doing this together with HPE, a little bit about the partnership, as we round out our conversation. >>Sure, absolutely. Security is only as strong as the layer below it, and that's why modern security must be built in rather than bolted on or added after the fact. So HPE and AMD developed this layered approach for protecting critical data together. Through our leadership in security features and innovations, we deliver a set of hardware-based features that help decrease potential attack surfaces, with a holistic approach that safeguards critical information across the entire system lifecycle. We provide the confidence of built-in silicon authentication on the world's most secure industry-standard servers, and a 360-degree approach that brings high availability to critical workloads while helping to defend against internal and external threats. So things like HPE's Silicon Root of Trust with the trusted supply chain, which AMD is obviously part of, combined with AMD's Infinity Guard technology, really help provide that end-to-end data protection in today's business. >>And that is so critical for businesses in every industry. As you mentioned, the attackers are getting more and more sophisticated and the vulnerabilities are increasing. The ability to have a partnership like HPE and AMD deliver that end-to-end data protection is table stakes for businesses.
David, thank you so much for joining me on the program and walking us through what AMD is doing with the fourth-gen EPYC processors and how you're working together with HPE to enable security to be successfully accomplished by businesses across industries. We appreciate your insights. >>Well, thank you again for having me, and we appreciate the partnership with HPE. >>We want to thank you for watching our special program, HPE Compute Security. I do have a call to action for you: go ahead and visit hpe.com/security/compute. Thanks for watching.
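Kevin's description of SPDM earlier in this segment boils down to a challenge/response: the host sends a fresh nonce, the option card signs it with a key it holds, and the host verifies the signature against a public key it already trusts before handing off to the OS. The sketch below, assuming the third-party cryptography package, is only a simplified stand-in for that flow; the actual DMTF SPDM protocol adds certificate chains, measurement collection, and negotiated sessions, and none of the function names here come from HPE's implementation.

```python
# Simplified challenge/response in the spirit of SPDM device authentication.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical device key pair; on real hardware the private key never leaves the card.
card_key = Ed25519PrivateKey.generate()
trusted_pubkey = card_key.public_key()          # provisioned to the host out of band

def host_challenges_card(sign_fn) -> bool:
    nonce = os.urandom(32)                      # fresh challenge, prevents replay
    signature = sign_fn(nonce)                  # the card responds with a signature
    try:
        trusted_pubkey.verify(signature, nonce)
        return True                             # card is what it claims to be
    except InvalidSignature:
        return False                            # do not hand off to the OS

print(host_challenges_card(card_key.sign))                 # genuine card -> True
print(host_challenges_card(lambda n: os.urandom(64)))      # rogue card -> False
```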
Kevin Depew | HPE ProLiant Gen11 – Trusted Security by Design
>>Hey everyone, welcome to theCUBE. Lisa Martin here with Kevin Depew, Senior Director of Future Server Architecture at HPE. Kevin, it's great to have you on the program. You're going to be breaking down everything that's exciting and compelling about Gen11. How are you today? >>Thanks, Lisa, I'm doing great. >>Good. So let's talk about ProLiant Gen11, the next generation of compute. I read some great stats on hpe.com: Gen11 added 28 new world records while delivering up to 99% higher performance and 43% more energy efficiency than the previous version. That's amazing. Talk to me about Gen11. What makes this update so compelling? >>Well, you talked about some of the stats regarding the performance and the power efficiency, and those are excellent. We partnered with AMD; we've got excellent performance on these platforms and excellent power efficiency. But the advantages of this platform go beyond that. Today we're going to talk a lot about cybersecurity, and we've got a lot of security capabilities in these platforms. We've built on top of the security capabilities we've had generation over generation, and we've got some new, exciting capabilities we'll be talking about. So whether it's the performance, the power efficiency, or the security, all those capabilities are in this platform. Security is part of our DNA. We put it into the design from the very beginning, and we've partnered with AMD to deliver what we think is a very compelling story. >>The security piece is absolutely critical. We could have an entire separate conversation on the cybersecurity landscape and the changes there. But one of the things I also noticed in the material on Gen11 is that HPE says it's fundamental. What do you mean by that, and what's new that makes it so fundamental? >>By saying it's fundamental, we mean security is a fundamental part of the platform. You need systems that are reliable, systems that have excellent performance, systems that have very good power efficiency; those things you talked about before are all very important to a good server, but security is a part that's absolutely critical as well. So security is one of the fundamental capabilities of the platform. As I mentioned, we built on top of capabilities like our Silicon Root of Trust, which ensures that the firmware stack on these platforms is not compromised. Those are continuing in this platform and have been expanded on. We have our trusted supply chain, and we've expanded on that as well. We have a lot of security capabilities, our platform certificates, our iDevIDs. There are just a lot of security capabilities that are absolutely fundamental to these being a good solution, because as we said, security is fundamental. It's an absolutely critical part of these platforms. >>Absolutely, for companies in every industry. I want to talk a little bit about one of the other things HPE describes Gen11 as being: uncompromising. I wanted to understand what that means and what the value-add is for customers. >>Well, uncompromising means we can't compromise on security. Security, as I said before, is fundamental. It can't be compromised. You have to have security be strong on these platforms. So one of the capabilities we're specifically talking about when we talk about uncompromising is a capability called SPDM.
We've extended our Silicon Root of Trust, one of the key technologies we've had since our Gen10 platforms, through something called SPDM. We saw a problem in the industry with the ability to authenticate option cards and other devices in the system. Silicon Root of Trust verified many pieces of firmware in the platform, but one piece it wasn't verifying was the option cards. We knew we needed to solve this problem, and we knew we couldn't do it a hundred percent on our own, because whether it's a storage option card, a NIC, or other devices in the future, we needed to work with our partners to make sure we could verify that those devices were what they were meant to be. >>That they weren't maliciously compromised, and that we could authenticate them. So we worked with industry standards bodies to create the SPDM specification, and what that allows us to do is authenticate the option cards in the systems. That's one of the new capabilities we've added in these platforms. So we've gone beyond securing all the things Silicon Root of Trust secured in the past and extended that to the option cards and their firmware as well. When we boot up one of these platforms and hand off to the OS and the customer's software solution, they can rest assured that the platform is not compromised, that a bad guy has not gone in and changed things, and that includes a bad guy with physical access to the platform. That's why we have uncompromised security in these platforms. >>Outstanding. That sounds like great work that's been done there, and giving customers that peace of mind where security is concerned is table stakes for everybody across the organization. Kevin, you mentioned partners. I know HPE is extending protection to the partner ecosystem. I wanted to get a little bit more info on that from you. >>Yeah, we've worked with our option card vendors and numerous partners across the industry to support SPDM. We were the ones who went to the industry standards bodies and said, we need to solve this problem, and we had agreement from everybody; everybody agreed this is a problem that had to be solved. But to solve it, you've got to have a partnership; we couldn't just do it on our own. There are a lot of things that we at HPE can solve on our own, but this is not one of them. To get a method where we could authenticate and trust the option cards in the system, we needed to work with our option card vendors. So that's something we did, and we also used some capabilities we developed with some of our processor vendor partners as well. So working with partners across the industry, we were able to deliver SPDM. >>So we know that an option card, whether it's a storage card, a NIC card, or GPUs in the future (those may not be there from day one), is what it's intended to be. Because you could do an attack where you compromise the option card and its firmware, and option cards have the ability to read and write memory using something called DMA. If those cards are running firmware created by a bad guy, they can carry out very costly attacks; there are a lot of statistics showing just how costly cybersecurity attacks are. If option cards have been compromised, you can do some really bad things.
So this is how we can trust those option cards. And we had to partner with those partners in the industry to both define the spec and implement to that specification on both sides, so that we could deliver the solution we're delivering. >>HPE has such a strong partner ecosystem, and you did a great job of articulating the value in this for customers. From a security perspective, I know you're also doing a lot of collaboration and work with AMD. Talk to me a little bit about that and the value in it for your joint customers. >>Yeah, absolutely. AMD is a longstanding partner. We actually started working with AMD about 20 years ago, when we delivered our first AMD Opteron-based platform, the HP ProLiant DL585. So we've got a long engineering relationship with AMD, and we've been making products with AMD since they introduced their EPYC generation processor in 2017. That's when AMD really upped their security game: they created capabilities with their AMD Secure Processor, their secure encrypted virtualization, and their memory encryption technologies. And we work with AMD long before platforms actually release. They come to us with their ideas and designs, we collaborate with them on things we think are valuable, and when we see areas where they can do things better, we provide feedback. So we really have a partnership to make these processors better, and it's not something where we just work with them for a short amount of time and deliver a product. >>We're working with them for years before those products come out. That partnership allows both parties to create better platforms, because we understand what they're capable of and they understand what our needs are as a server provider. So we help them make their processors better and they help us make our products better, and that extends to all areas, whether it's performance or power efficiency, but very importantly to what we're talking about here, security. They have an excellent security story with all of their technologies. Again, memory encryption: they've got some exceptional technologies there. And their secure encrypted virtualization to secure virtualized environments; those are all things they excel at, and we take advantage of them in our designs. We make sure they work with our servers as part of a solution. >>Sounds like a very deeply, technically integrated and longstanding relationship that's really symbiotic for both sides. I wanted to get some information from you on the HPE server security optimized service. Talk to me about what that is and how it helps HPE's customers get around some of those supply chain challenges that are persistent. >>With our previous generation of products, we announced something called the HPE trusted supply chain, but that was focused on the US market. With the solution for Gen11, we've expanded that to other markets: it's available from factories other than the ones in the US, and it's available for shipping products to other geographies. So what that really is, is taking the HPE trusted supply chain and expanding it to additional geographies throughout the world, which provides a big benefit for our non-US-based customers. What it means is that we're trying to make sure the server we ship out of our factories is indeed exactly what that customer is getting, to prevent any possibility of attack in the supply chain going from our factories to the customer.
And if there is an attack, we can detect it and the customer knows about it. >>So they won't deploy a system that's been compromised, because there have been high-profile cases of supply chain attacks, and we don't want that for customers buying our ProLiant products. So we do things like enable UEFI Secure Boot, which is the ability to authenticate what's called a UEFI option ROM driver on option cards; that's enabled by default, and normally it isn't. We enable the high security mode in our iLO product. We include our intrusion detection technology option, which is an optional feature but comes standard when you buy one of the boxes with this trusted supply chain capability. So there are a lot of capabilities that get enabled at the factory. We also enable server configuration lock, which allows a customer to detect whether a bad guy modified anything in the platform while it was in transit from our factory to them. What that allows a customer to do is receive that platform and know that it is indeed what it is intended to be and that it hasn't been attacked, and we've now expanded that to many geographies throughout the world. >>Excellent. So much more coverage across the world, which is so incredibly important as cyber attacks continue to rise year over year, ransomware becomes a household word, and the ransoms get even more expensive, especially considering the cybersecurity skills gap. I'm wondering, how does everything you've described with Gen11 and the HPE partner ecosystem, with AMD for example, help customers get around that security skills gap? >>Well, the key thing there is that we care about our customers' security. As I mentioned, security is in our DNA. We consider security in everything we do: every firmware update we make, every hardware design, whatever we're doing, we're always considering what a bad guy could do, what a bad guy could take advantage of, and attempting to prevent it. And AMD does the same thing. You can look at all the technologies in their AMD processors; they're making sure their processor is secure, and we're making sure our platform is secure, so the customer doesn't have to worry about it. The customer can trust us and they can trust AMD, so they know that's not the area where they have to expend their bandwidth. They can spend their bandwidth on securing other parts of the solution, knowing that the platform and the CPU are secure. >>And beyond that, we create features and capabilities they can take advantage of. In the case of AMD, a lot of their capabilities are things the software stack and the OS can take advantage of. We have capabilities on our side that their software can take advantage of as well, whether it's server configuration lock or other features. We try to create features that are easy for them to use to make their environments more secure. So they can trust the platform and trust the processor, and they don't have to worry about that; and then we have features and capabilities that let them solve some of their problems more easily. We're trying to help them with that skills gap by making certain things easier and making certain things that they don't even have to worry about. >>Right.
It sounds like you're allowing them to be much more strategic about the security skills they do have. My last question for you, Kevin: is Gen11 available now? Where can folks go to get their hands on it? >>Gen11 was announced earlier this month, and the products will actually be shipping before the end of this year, before the end of 2022. You can go to our website and find out all about our compute security; all of that information is available on our website. >>Awesome. Kevin, it's been a pleasure talking to you, unpacking Gen11, the value in it, why security is fundamental, the uncompromising nature with which HPE and partners have updated the platform, and the rest-of-world coverage you're enabling. We appreciate your insights and your time, Kevin. >>Thank you very much, Lisa. Appreciate it. >>And we want to let you and the audience know: check out hpe.com/info/compute for more info on Gen11. Thanks for watching.
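The trusted supply chain and server configuration lock capabilities Kevin describes come down to fingerprinting what left the factory and refusing to deploy anything that changed in transit. Below is a minimal sketch of that general idea in Python; the inventory fields, values, and helper names are illustrative assumptions only, not HPE's actual implementation, which records and verifies this data in platform hardware rather than in a script.

```python
# Sketch: record a digest of the shipped configuration, re-check it on delivery.
import hashlib
import json

def config_digest(inventory: dict) -> str:
    # Canonical JSON so the same inventory always hashes to the same value.
    canonical = json.dumps(inventory, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

factory_inventory = {
    "cpu": "AMD EPYC (example SKU)", "dimms": 12, "nic_fw": "2.31",
    "uefi_fw": "1.10", "secure_boot": True,
}
factory_lock = config_digest(factory_inventory)          # recorded before shipment

delivered = dict(factory_inventory, nic_fw="2.31-evil")  # tampered in transit
if config_digest(delivered) != factory_lock:
    print("Configuration changed in transit -- do not deploy")
```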
Dhabaleswar “DK” Panda, Ohio State University | SuperComputing 22
>>Welcome back to theCUBE's coverage of Supercomputing Conference 2022, otherwise known as SC22, here in Dallas, Texas. This is day three of our coverage, the final day of coverage here on the exhibition floor. I'm Dave Nicholson, and I'm here with my co-host, tech journalist extraordinaire, Paul Gillin. How's it going, Paul? >>Hi, Dave. It's going good. >>And we have a wonderful guest with us this morning, Dr. Panda from the Ohio State University. Welcome, Dr. Panda, to the Cube. >>Thanks a lot. >>Paul, I know you're chomping at the bit. >>You have incredible credentials, over 500 papers published, and the impact you've had on HPC is truly remarkable. But I wanted to talk to you specifically about a project you've been working on for over 20 years now called MVAPICH, a high-performance computing platform that's used by more than 3,200 organizations across 90 countries. You've shepherded this from its infancy. What is the vision for what MVAPICH will be, and how is it a proof of concept that others can learn from? >>Yeah, Paul, that's a great question to start with. I started with this conference in 2001; that was the first time I came. It's very coincidental: if you remember, the InfiniBand networking technology was introduced in October of 2000. In my group we were working on MPI for Myrinet and Quadrics, the old technologies of that time. When InfiniBand arrived, we were the very first ones in the world to really jump in; nobody knew how to use InfiniBand in an HPC system. That's how the MVAPICH project was born. In fact, at Supercomputing 2002, on the exhibition floor in Baltimore, we had the first demonstration of the open source MVAPICH actually running on an eight-node InfiniBand cluster, and that was a big challenge. But over the years we have continuously worked with all the InfiniBand vendors and the MPI Forum. >>We are a member of the MPI Forum, and we work with all the other network interconnects as well. So we have steadily evolved this project over the last 21 years. I'm very proud of my team members working nonstop, continuously bringing not only performance but scalability. If you look now, InfiniBand is being deployed in 8,000- and 10,000-node clusters, and many of these clusters actually use our software stack, MVAPICH. Our focus is that we first do research, because we are in academia; we come up with good designs, we publish, and in six to nine months we bring it to the open source version, and people can just download it and use it. That's how it has come to be used by more than 3,000 organizations in 90 countries. But the interesting thing, to the second part of your question, is that the field is moving into not just HPC but AI and big data, and we have that support. This is where we look at the vision for the next 20 years: we want to design this MPI library so that not only HPC but all other workloads can take advantage of it. >>We have seen libraries become critical development platforms supporting AI, TensorFlow and PyTorch, and the emergence of a sort of default set of frameworks driving the community. How important are these frameworks to making progress in the HPC world? >>Yeah, those are great. PyTorch and TensorFlow are now the bread and butter of deep learning and machine learning.
>>Right. But the challenge is that people use these frameworks while models are continuously becoming larger, and you need very fast turnaround time. So how do you train faster? How do you do inferencing faster? This is where HPC comes in, and what we have done is link PyTorch to our MVAPICH stack, because the MPI library now runs on million-core systems. So PyTorch and TensorFlow can also be scaled to those large numbers of cores and GPUs. We have done that kind of tight coupling, and that helps researchers really take advantage of HPC. >>So if a high school student is thinking about interesting computer science and looking for a university, the Ohio State University is world renowned, widely known. But talk about what that looks like on a day-to-day basis in terms of the opportunity for undergrad and graduate students to participate in the kind of work that you do. What does that look like, and is that a good pitch for people to consider the university? >>Yes. From a university perspective, by the way, the Ohio State University is one of the largest single campuses in the US, one of the top three or four, with 65,000 students, so it's one of the very largest campuses. And especially within computer science, where I am located, high-performance computing is a very big focus. We are one of the top schools all over the world for high-performance computing, and we also have a lot of strength in AI. So we always encourage new students who want to work on state-of-the-art solutions to get exposed to the concepts, the principles, and also the practice; we can really give them that kind of experience. And many of my past students and staff are all in top companies now and have become big managers. >>How long did you say you've been at it? >>31 years. >>31 years. So you've had people who weren't alive when you were already doing this stuff? That's correct: they were born, they grew up, they went to university and graduate school, and now they're... >>Now they're in many top companies, national labs, and universities all over the world. So they have been trained very well. >>You've touched a lot of lives, sir. >>Yes, thank you. >>We've seen a real burgeoning of AI-specific hardware emerge over the last five years or so, and architectures going beyond just CPUs and GPUs to ASICs, FPGAs, and accelerators. Does this excite you? Are there innovations you're seeing in this area that you think have great promise? >>Yeah, there is a lot of promise. Every so often in supercomputing technology you see a big barrier jump: a new, disruptive technology comes along and you move to the next level. That's what we are seeing now. A lot of these AI chips and AI systems are coming up, and they take you to the next level. But the bigger challenge is whether it is cost-effective and whether it can be sustained longer. This is where commodity technology comes in, because commodity technology carries you much further. So we may see all these new chips, like Gaudi, coming up; can they really bring down the cost?
If that cost can be reduced, you will see a much bigger push for AI solutions that are cost-effective. >>What about on the interconnect side of things? Your start coincided with the initial standards for InfiniBand; Intel was really big in that architecture originally. Do you see interconnects like RDMA over Converged Ethernet playing a part in that sort of democratization or commoditization? What are your thoughts there? >>No, this is a great thing. We saw InfiniBand coming, and of course InfiniBand is available as a commodity. But over the years people have been trying to see how those RDMA mechanisms can be used for Ethernet, and then RoCE was born, and RoCE is also being deployed. Besides these, you now have Slingshot, the Cray Slingshot, which is also an Ethernet-based system, and a lot of those RDMA principles are being used under the hood. So in any modern network you see, whether it is InfiniBand, RoCE, Slingshot, or any of these other networks, they are using all the very latest principles. And of course everybody wants to make it commodity, and this is what you see on the show floor: everybody is trying to compete against each other to give you the best performance at the lowest cost, and we'll see who wins over the years. >>Sort of a macroeconomic question: Japan, the US, and China have been leapfrogging each other for a number of years in terms of the fastest supercomputer performance. How important do you think it is for the US to maintain leadership in this area? >>It's a big thing, very significant. I think for the last five to seven years we lost that lead, but now with Frontier being number one, starting from the June ranking, I think we are getting that leadership back. And I think it is very critical, not only for fundamental research but for national security, to really keep the US at the leading edge. So I hope the US will continue to lead the trend for the next few years, until another new system comes out. >>And one of the gating factors is a shortage of people with data science skills. Obviously you're doing what you can at the university level. What do you think can change at the secondary school level to prepare students better for data science careers? >>Yeah, that is also very important. We always talk about a pipeline: we expect a lot at the PhD level, but we want students to get exposed to many of these concepts from the high school level. And things are actually changing. These days I see a lot of high school students who know Python, who know how to program in C and object-oriented languages, and they're even being exposed to AI at that level. So I think that is a very healthy sign. In fact, from the Ohio State side we are always engaged with K-12 through many different programs, gradually trying to take them to the next level. And I think we need to accelerate that in a very significant manner, because we need that kind of workforce. It is not just about building the number one system, but how do we really utilize it? How do we utilize that science? How do we propagate it to the community? We need all these trained personnel.
So in fact, in my group we are also involved in a lot of cyber-training activities for HPC professionals. In fact, today there is a BoF session, I think from 12:15 to 1:15, where we'll be talking more about that. >>About education. >>Yeah, cyber-training: how do we do it for professionals? We received funding together with my co-PI, Dr. Karen Tomko from the Ohio Supercomputer Center; we have a grant from the National Science Foundation to really educate HPC professionals about cyberinfrastructure and AI. Even though they work on some of these things, they don't have the complete knowledge, they don't get the time to learn, and the field is moving so fast. The first time we advertised, we got 120 applications in 24 hours; we couldn't even take all of them, so we are trying to offer it in multiple phases. There is a big need for those kinds of training sessions. I also offer a lot of tutorials at various conferences: we had a high-performance networking tutorial, and here we have a high-performance deep learning tutorial and a high-performance big data tutorial. I've been offering tutorials at this conference since 2001. >>So in the last 31 years at the Ohio State University, as my friends remind me it is properly called, you've seen the world get a lot smaller. Because 31 years ago Ohio, roughly in the middle of North America and the United States, was not as connected as it is now to everywhere else in the globe. It kind of boggles the mind when you think of that progression over 31 years. And globally, we're in the thick of the celebratory season, when many groups of people exchange gifts for a variety of reasons. If I were to offer you a holiday gift that is the result of what AI can deliver to the world, what would that be? It's like the genie, but you only get one wish. >>I know, I know. >>So what would the first one be? >>Yeah, it's very hard to answer in one way, but let me bring in a little different context and I can answer this. I talked about the MVAPICH project, but recently, last year, we were actually awarded an NSF AI Institute award. It's a $20 million award; I am the overall PI, but there are 14 universities involved. >>And what is that institute? >>It's called ICICLE. You can just go to icicle.ai. And that aligns with exactly what you are asking: how to bring AI to the masses, democratizing AI. That's the overall goal of this institute. We have three verticals we are working on. One is digital agriculture, so that will be my first wish: how do you take HPC and AI to agriculture? The world just crossed 8 billion people, and we need continuous food and food security. How do we grow food at the lowest cost and with the highest yield? >>Water consumption. >>Water consumption: can we minimize the water consumption, or the fertilization? Don't do it blindly; the technologies are out there. Let's say there is a wheat field: a traditional farmer sees that there is some disease and just goes and sprays pesticides. That is not good for the environment.
Now I can fly a drone, get images of the field in real time, check them against the models, and it will tell me, okay, this part of the field has disease one, this part of the field has disease two, and I can indicate to the tractor or the sprayer: spray only pesticide one here and pesticide two there. That has a big impact. This is what we are developing in that NSF AI institute, ICICLE. We have also chosen two additional verticals. One is animal ecology, because that is very much related to wildlife conservation and climate change: how do you understand how the animals move, can we learn from them, and then see how human beings need to act in the future? And the third one is food insecurity and logistics, smart food distribution. So these are our three broad goals in that institute: how do we develop cyberinfrastructure from below, combining HPC, AI, and security? We have a large team; as I said, there are 40 PIs and 60 students. We are a hundred-member team working together. So that will be my wish: how do we really democratize AI? >>Fantastic. I think that's a great place to wrap the conversation here on day three at Supercomputing Conference 2022 on theCUBE. It was an honor. Dr. Panda, working tirelessly at the Ohio State University with his team for 31 years, toiling in the field of computer science, and the end result: improving the lives of everyone on Earth. That's not a stretch. If you're in high school thinking about a career in computer science, keep that in mind. It isn't just about the bits and the bobs and the speeds and the feeds; it's about serving humanity. Maybe a little too profound a statement? I would argue not even close. I'm Dave Nicholson with theCUBE, with my cohost Paul Gillin. Thank you again, Dr. Panda. Stay tuned for more coverage from theCUBE at Supercomputing 2022, coming up shortly. >>Thanks a lot.
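As a concrete illustration of the MPI-level coupling Dr. Panda describes, the sketch below shows the core pattern behind scaling PyTorch- or TensorFlow-style training across nodes: each rank computes local gradients, and an MPI allreduce averages them so every rank applies the same update. It is a minimal, generic example assuming the mpi4py and NumPy packages on top of any MPI library (such as MVAPICH); it is not code from the MVAPICH project itself.

```python
# Run with something like: mpirun -np 4 python train_step.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Stand-in for the gradients a backward pass would produce on this rank.
local_grad = np.random.rand(1024)

# Sum gradients across all ranks, then divide to get the average.
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= size

if rank == 0:
    print(f"averaged gradients across {size} ranks")
```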
Fred Wurden and Narayan Bharadwaj Accelerating Business Transformation with VMware Cloud on AWS
(upbeat music) >> Hello everyone, welcome to this CUBE Showcase, accelerating business transformation with VMware Cloud on AWS. It's a solution innovation conversation with two great guests, Fred Wurden, VP of Commercial Services at AWS and Narayan Bharadwaj, who's the VP and General Manager of Cloud Solutions at VMware. Gentlemen, thanks for joining me on the showcase. >> Great to be here. >> Great. Thanks for having us on. It's a great topic. >> We've been covering this VMware cloud on AWS since the launch going back and it's been amazing to watch the evolution from people saying, Oh, it's the worst thing I've ever seen. What's this mean? And the press were not really on board with the vision, but as it played out as you guys had announced together, it did work out great for VMware. It did work out great for AWS and it continues two years later and I want to just get an update from you guys on where you guys see this has been going. I'll see multiple years. Where is the evolution of the solution as we are right now coming off VMware explorer just recently and going in to re:Invent, which is only a couple weeks away Feels like tomorrow. But as we prepare, a lot going on. Where are we with the evolution of the solution? >> I mean, first thing I want to say is October 2016 was a seminal moment in the history of IT. When Pat Gelsinger and Andy Jassy came together to announce this. And I think John, you were there at the time I was there. It was a great, great moment. We launched the solution in 2017 year after that at VMworld, back when we called it VMworld. I think we have gone from strength to strength. One of the things that has really mattered to us is we've learned from AWS also in the processes, this notion of working backwards. So we really, really focused on customer feedback as we built a service offering now five years old. Pretty remarkable journey. In the first years we tried to get across all the regions, that was a big focus because there was so much demand for it. In the second year, we started going really on enterprise great features. We invented this pretty awesome feature called Stretched Clusters, where you could stretch a vSphere cluster using vSAN and NSX-T across to AZs in the same region. Pretty phenomenal four nines of availability that applications started to get with that particular feature. And we kept moving forward, all kinds of integration with AWS Direct Connect, Transit Gateways with our own advanced networking capabilities. Along the way, Disaster Recovery, we punched out two new services just focused on that. And then more recently we launched our Outposts partnership. We were up on stage at re:Invent, again, with Pat and Andy announcing AWS Outposts and the VMware flavor of that, VMware Cloud and AWS Outposts. I think it's been significant growth in our federal sector as well with our federal and high certification more recently. So all in all, we are super excited. We're five years old. The customer momentum is really, really strong and we are scaling the service massively across all geos and industries. >> That's great, great update. And I think one of the things that you mentioned was how the advantages you guys got from that relationship. And this has been the theme for AWS, man, since I can remember from day one, Fred. You guys do the heavy lifting as you always say for the customers. Here, VMware comes on board. Takes advantage of the AWS and just doesn't miss a beat. 
Continues to move their workloads that everyone's using, vSphere, and these are big workloads on AWS. What's the AWS perspective on this? How do you see it? >> Yeah, it's pretty fascinating to watch how fast customers can actually transform and move when you take the skill set that they're familiar with and the advanced capabilities that they've been using on-prem and then overlay it on top of the AWS infrastructure that's evolving quickly and building out new hardware and new instances we'll talk about. But that combined experience between both of us on a jointly engineered solution to bring the best security and the best features that really matter for those workloads drive a lot of efficiency and speed for the customers. So it's been well received and the partnership is stronger than ever from an engineering standpoint, from a business standpoint. And obviously it's been very interesting to look at just how we stay day one in terms of looking at new features and work and responding to what customers want. So pretty excited about just seeing the transformation and the speed that which customers can move to while at VMC. >> That's a great value proposition. We've been talking about that in context to anyone building on top of the cloud. They can have their own supercloud, as we call it, if you take advantage of all the CapEx and investment Amazon's made and AWS has made and continues to make in performance IaaS and PaaS, all great stuff. I have to ask you guys both as you guys see this going to the next level, what are some of the differentiations you see around the service compared to other options in the market? What makes it different? What's the combination? You mentioned jointly engineered. What are some of the key differentiators of the service compared to others? >> Yeah. I think one of the key things Fred talked about is this jointly engineered notion. Right from day one we were the early adopters of the AWS Nitro platform. The reinvention of EC2 back five years ago. And so we have been having a very, very strong engineering partnership at that level. I think from a VMware customer standpoint, you get the full software-defined data center, compute storage networking on EC2, bare metal across all regions. You can scale that elastically up and down. It's pretty phenomenal just having that consistency globally on AWS EC2 global regions. Now the other thing that's a real differentiator for us, what customers tell us about is this whole notion of a managed service. And this was somewhat new to VMware. But we took away the pain of this undifferentiated heavy lifting where customers had to provision rack stack hardware, configure the software on top, and then upgrade the software and the security patches on top. So we took away all of that pain as customers transitioned to VMware cloud in AWS. In fact, my favorite story from last year when we were all going through the Log4j debacle. Industry was just going through that. Favorite proof point from customers was before they could even race this issue to us, we sent them a notification saying, we already patched all of your systems, no action from you. The customers were super thrilled. I mean, these are large banks. Many other customers around the world were super thrilled they had to take no action, but a pretty incredible industry challenge that we were all facing. >> Narayan, that's a great point. The whole managed service piece brings up the security. 
You kind of teasing at it, but there's always vulnerabilities that emerge when you are doing complex logic. And as you grow your solutions, there's more bits. Fred, we were commenting before we came on camera more bits than ever before and at the physics layer too, as well as the software. So you never know when there's going to be a zero-day vulnerability out there. It happens. We saw one with Fortinet this week. This came out of the woodwork. But moving fast on those patches, it's huge. This brings up the whole support angle. I wanted to ask you about how you guys are doing that as well, because to me, we see the value when we talk to customers on theCUBE about this. It was a real easy understanding of what the cloud means to them with VMware now with the AWS. But the question that comes up that we want to get more clarity on is how do you guys handle support together? >> Well, what's interesting about this is that it's done mutually. We have dedicated support teams on both sides that work together pretty seamlessly to make sure that whether there's a issue at any layer, including all the way up into the app layer, as you think about some of the other workloads like SAP, we'll go end-to-end and make sure that we support the customer regardless of where the particular issue might be for them. And on top of that, we look at where we're improving reliability in as a first order of principle between both companies. So from availability and reliability standpoint, it's top of mind and no matter where the particular item might land, we're going to go help the customer resolve that. It works really well. >> On the VMware side, what's been the feedback there? What are some of the updates? >> Yeah, I think, look, I mean, VMware owns and operates the service, but we work phenomenal backend relationship with AWS. Customers call VMware for the service or any issues. And then we have a awesome relationship with AWS on the backend for support issues or any hardware issues. The key management that we jointly do. All of the hard problems that customers don't have to worry about. I think on the front end, we also have a really good group of solution architects across the companies that help to really explain the solution, do complex things like cloud migration, which is much, much easier with the VMware Cloud in AWS. We're presenting that easy button to the public cloud in many ways. And so we have a whole technical audience across the two companies that are working with customers every single day. >> You had mentioned, I've got list here of some of the innovations. You mentioned the stretch clustering, getting the geos working, advanced network, Disaster Recovery, FedRAMP, public sector certifications, Outposts. All good, you guys are checking the boxes every year. You got a good accomplishments list there on the VMware AWS side here in this relationship. The question that I'm interested in is what's next? What recent innovations are you doing? Are you making investments in? What's on the list this year? What items will be next year? How do you see the new things, the list of accomplishments? People want to know what's next. They don't want to see stagnant growth here. They want to see more action as cloud continues to scale and modern applications cloud native. You're seeing more and more containers, more and more CI/CD pipelining with modern apps, put more pressure on the system. What's new? What's the new innovations? >> Absolutely. 
And I think as a five-year-old service offering, innovation is top of mind for us every single day. So just to call out a few recent innovations that we announced in San Francisco at VMware Explore. First of all, our new platform, i4i.metal. It's Ice Lake based. It's pretty awesome. It's the latest and greatest, all the speeds and feeds that we would expect from VMware and AWS at this point in our relationship. We announced two different storage options. This notion of working from customer feedback, allowing customers even more price reductions: really take that storage, park it externally, and separate it from compute. So two different storage offerings there. One is AWS FSx for NetApp ONTAP, which brings our NetApp partnership into the equation as well, and the NetApp base is really excited about this offering. And the second storage offering is called VMware Cloud Flex Storage, VMware's own managed storage offering. Beyond that, we have done a lot of other innovations as well. I really wanted to talk about VMware Cloud Flex Compute, where previously customers could only scale by hosts, and a host is 36 to 48 cores, give or take. But with VMware Cloud Flex Compute, we are now allowing this notion of a resource-defined compute model where customers can get exactly the vCPU, memory, and storage that maps to the applications, however small they might be. So this notion of granularity is really a big innovation that we are launching in the market this year. And then last but not least, on the topic of ransomware. Of course it's a hot topic in the industry, and we are seeing many, many customers ask for this. We are happy to announce a new ransomware recovery with our VMware Cloud DR solution. A lot of innovation there in the way we are able to do machine learning and make sure the workloads that are recovered from snapshots and backups are actually safe to use. So there's a lot of differentiation on that front as well. A lot of networking innovations with Project Northstar, our ability to have layer four through layer seven, new SaaS services in that area as well. Keep in mind that the service already supports managed Kubernetes for containers. It's built in to the same clusters that have virtual machines. And so this notion of a single service with a great TCO for VMs and containers is sort of at the heart of our (faintly speaking). >> The networking side certainly is a hot area to keep innovating on. Every year it's the same conversation: get better, faster networking, more options there. The Flex Compute is interesting. If you don't mind me getting a quick clarification, could you explain the resource-defined versus hardware-defined? Because this is what we saw coming out at Explore, that notion of resource-defined versus hardware-defined. What does that mean? >> Yeah, I mean I think we have been super successful in this hardware-defined notion. We were scaling by the hardware unit that we present as software-defined data centers. And so that's been super successful. But customers wanted more; especially customers in different parts of the world wanted to start even smaller, grow even more incrementally, and lower the cost even more. And so this is the part where resource-defined starts to be very, very interesting as a way to think about it: here's my bag of resources, exactly based on what the customer requests, be it five virtual machines or five containers. It's sized exactly for that.
And then as utilization grows, we're able to elastically grow it behind the scenes through policies. So that's a whole different dimension, a whole different service offering that adds value, and customers are comfortable. They can go from one to the other, and they can go back to that host-based model if they so choose. And there's a jump-off point across these two different economic models. >> It's cloud flexibility right there. I like the name. Fred, let's get into some of the examples of customers, if you don't mind; we have some time. I want to unpack a little bit of what's going on with the customer deployments. One of the things we've heard again on theCUBE from customers is that they like the clarity of the relationship and they love the cloud positioning of it. And then what happens is they lift and shift the workloads and it feels great, it's just like we're running VMware on AWS, and then they start consuming higher-level services. That next level of adoption happens because it's in the cloud. So can you guys take us through some recent examples of customer wins or deployments where they're using VMware Cloud on AWS and getting started, and then how do they progress once they're there? How does it evolve? Can you just walk us through a couple use cases? >> Sure. Well, there's a couple. One, it's pretty interesting that, like you said, as there's more and more bits, you need better and better hardware and networking. And we're super excited about the i4 and the capabilities there in terms of doubling and or tripling what we're doing around lower variability on latency and just improving all the speeds. But what customers are doing with it, like the college in New Jersey, they're accelerating their deployment, onboarding over 7,400 students over a six to eight month period. And they've really realized a ton of savings. But what's interesting is where and how they can actually grow onto additional native services too. So connectivity to any other services is available as they start to move and migrate into this. The options there obviously are tied to all the innovation that we have across any services, whether it's containerized with what they're doing with Tanzu, or with any other container or services within AWS. So there are some pretty interesting scenarios where that data and/or the processing, which is moved quickly with full compliance, whether it's in healthcare or a regulated business, is then allowed to consume and use things, for example Textract or any other really cool service that has monthly and quarterly innovations. So there are things that you just could not do before that are coming out and saving customers money and building innovative applications on top of their current app base in a rapid fashion. So pretty excited about it. There are a lot of examples; I probably don't have time to go into too many here. But that's actually the best part: listening to customers and seeing how many net new services and new applications they're actually building on top of this platform. >> Narayan, what's your perspective from the VMware side? 'Cause you guys have now a lot of headroom to offer customers with Amazon's higher-level services, or whatever's homegrown where it's being rolled out, 'cause you now have a lot of hybrid too. So what's your take on what's happening with customers? >> I mean, it's been phenomenal.
The customer adoption of this and banks and many other highly sensitive verticals are running production-grade applications, tier one applications on the service over the last five years. And so I have a couple of really good examples. S&P Global is one of my favorite examples. Large bank, they merge with IHS Markit, big conglomeration now. Both customers were using VMware Cloud and AWS in different ways. And with the use case, one of their use cases was how do I just respond to these global opportunities without having to invest in physical data centers? And then how do I migrate and consolidate all my data centers across the global, which there were many. And so one specific example for this company was how they migrated 1000 workloads to VMware Cloud and AWS in just six weeks. Pretty phenomenal if you think about everything that goes into a cloud migration process, people process technology. And the beauty of the technology going from VMware point A to VMware point B. The lowest cost, lowest risk approach to adopting VMware Cloud and AWS. So that's one of my favorite examples. There are many other examples across other verticals that we continue to see. The good thing is we are seeing rapid expansion across the globe, but constantly entering new markets with a limited number of regions and progressing our roadmap. >> It's great to see. I mean, the data center migrations go from months, many, many months to weeks. It's interesting to see some of those success stories. Congratulations. >> One of the other interesting fascinating benefits is the sustainability improvement in terms of being green. So the efficiency gains that we have both in current generation and new generation processors and everything that we're doing to make sure that when a customer can be elastic, they're also saving power, which is really critical in a lot of regions worldwide at this point in time. They're seeing those benefits. If you're running really inefficiently in your own data center, that is not a great use of power. So the actual calculators and the benefits to these workloads are pretty phenomenal just in being more green, which I like. We just all need to do our part there and this is a big part of it here. >> It's a huge point about the sustainability. Fred, I'm glad you called that out. The other one I would say is supply chain issue is another one. You see that constraints. I can't buy hardware. And the third one is really obvious, but no one really talks about it. It's security. I mean, I remember interviewing Steven Schmidt with that AWS and many years ago, this is like 2013 and at that time people were saying, the cloud's not secure. And he's like, listen, it's more secure in the cloud on-premise. And if you look at the security breaches, it's all about the on-premise data center vulnerabilities, not so much hardware. So there's a lot, the stay current on the isolation there is hard. So I think the security and supply chain, Fred, is another one. Do you agree? >> I absolutely agree. It's hard to manage supply chain nowadays. We put a lot of effort into that and I think we have a great ability to forecast and make sure that we can lean in and have the resources that are available and run them more efficiently. And then like you said on the security point, security is job one. It is the only P1. And if you think of how we build our infrastructure from Nitro all the way up and how we respond and work with our partners and our customers, there's nothing more important. 
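As a concrete illustration of the native-service consumption Fred described above, here is a minimal sketch of a workload running in a VMware Cloud on AWS SDDC calling Amazon Textract through the SDDC's connected VPC. It assumes boto3 and standard AWS credentials are available on the VM; the region, bucket, and document names are hypothetical placeholders, not details from the interview.

```python
# Minimal sketch: a VM running in a VMware Cloud on AWS SDDC calling the
# native Amazon Textract service over the SDDC's connected VPC.
# Assumes boto3 is installed and AWS credentials are configured on the VM.
# The bucket and document names below are hypothetical placeholders.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

# Ask Textract to extract text from a scanned document stored in S3.
response = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "example-claims-bucket", "Name": "intake-form.png"}}
)

# Print each detected line of text with its confidence score.
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(f'{block["Confidence"]:.1f}%  {block["Text"]}')
```

The design point is simply that the VM keeps running as a vSphere workload while the surrounding application picks up managed AWS services piece by piece, which is the adoption pattern described in the exchange above.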
>> And Narayan, your point earlier about the managed service patching and being on top of things is really going to get better. All right, final question. I really want to thank you for your time on this showcase. It's really been a great conversation. Fred, you had made a comment earlier. I want to end with a curve ball and put you eyes on the spot. We're talking about a new modern shift. We're seeing another inflection point. We've been documenting it. It's almost like cloud hitting another inflection point with application and open source growth significantly at the app layer. Continue to put a lot of pressure and innovation in the infrastructure side. So the question is for you guys each to answer is, what's the same and what's different in today's market? So it's like we want more of the same here, but also things have changed radically and better here. What's changed for the better and what's still the same thing hanging around that people are focused on? Can you share your perspective? >> I'll tackle it. Businesses are complex and they're often unique, that's the same. What's changed is how fast you can innovate. The ability to combine managed services and new innovative services and build new applications is so much faster today. Leveraging world class hardware that you don't have to worry about, that's elastic. You could not do that even five, 10 years ago to the degree you can today, especially with innovation. So innovation is accelerating at a rate that most people can't even comprehend and understand the set of services that are available to them. It's really fascinating to see what a one pizza team of engineers can go actually develop in a week. It is phenomenal. So super excited about this space and it's only going to continue to accelerate that. That's my take, Narayan. >> You got a lot of platform to compete on. With Amazon, you got a lot to build on. Narayan, your side. What's your answer to that question? >> I think we are seeing a lot of innovation with new applications that customers are constantly (faintly speaking). I think what we see is this whole notion of how do you go from desktop to production to the secure supply chain and how can we truly build on the agility that developers desire and build all the security and the pipelines to energize that production quickly and efficiently. I think we are seeing, we are at the very start of that sort of journey. Of course, we have invested in Kubernetes, the means to an end, but we're so much more beyond that's happening in industry and I think we're at the very, very beginning of this transformations, enterprise transformation that many of our customers are going through and we are inherently part of it. >> Well, gentlemen, I really appreciate that we're seeing the same thing. It's more the same here on solving these complexities with distractions, whether it's higher level services with large scale infrastructure. At your fingertips, infrastructure as code, infrastructure to be provisioned, serverless, all the good stuff happen and Fred with AWS on your side. And we're seeing customers resonate with this idea of being an operator again, being a cloud operator and developer. So the developer ops is kind of, DevOps is changing too. So all for the better. Thank you for spending the time and we're seeing again that traction with the VMware customer base and AWS getting along great together. So thanks for sharing your perspectives. >> We appreciate it. Thank you so much. >> Thank you John. 
>> This is theCUBE and AWS VMware showcase accelerating business transformation, VMware Cloud on AWS. Jointly engineered solution bringing innovation to the VMware customer base, going to the cloud and beyond. I'm John Furrier, your host. Thanks for watching. (gentle music)
SUMMARY :
Fred Wurden (AWS) and Narayan Bharadwaj (VMware) look back on five years of VMware Cloud on AWS: a jointly engineered, managed service built on the AWS Nitro platform and EC2 bare metal, with enterprise features such as Stretched Clusters across two AZs, Direct Connect and Transit Gateway integration, disaster recovery services, Outposts support, and FedRAMP certification. They discuss how the managed model removes undifferentiated heavy lifting (including patching during the Log4j incident), how support is handled jointly across both companies, and recent innovations such as i4i.metal, FSx for NetApp ONTAP, VMware Cloud Flex Storage, VMware Cloud Flex Compute, ransomware recovery, and Project Northstar. Customer examples include a New Jersey college onboarding 7,400 students and S&P Global migrating 1,000 workloads in six weeks, alongside sustainability, supply chain, and security benefits, and a closing look at how much faster customers can now innovate on the joint platform.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
AWS | ORGANIZATION | 0.99+ |
Steven Schmidt | PERSON | 0.99+ |
Fred Wurden | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Narayan Bharadwaj | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Pat | PERSON | 0.99+ |
36 | QUANTITY | 0.99+ |
October 2016 | DATE | 0.99+ |
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Fred | PERSON | 0.99+ |
2013 | DATE | 0.99+ |
Andy | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
two companies | QUANTITY | 0.99+ |
New Jersey | LOCATION | 0.99+ |
last year | DATE | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
five years | QUANTITY | 0.99+ |
six weeks | QUANTITY | 0.99+ |
both companies | QUANTITY | 0.99+ |
1000 workloads | QUANTITY | 0.99+ |
S&P Global | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
2017 year | DATE | 0.99+ |
both sides | QUANTITY | 0.99+ |
VMworld | ORGANIZATION | 0.99+ |
next year | DATE | 0.99+ |
48 cores | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
third one | QUANTITY | 0.98+ |
two years later | DATE | 0.98+ |
Narayan | PERSON | 0.98+ |
Fortinet | ORGANIZATION | 0.98+ |
both | QUANTITY | 0.98+ |
Both customers | QUANTITY | 0.98+ |
NetApp | TITLE | 0.98+ |
EC2 | TITLE | 0.98+ |
five containers | QUANTITY | 0.98+ |
7,400 students | QUANTITY | 0.98+ |
Project Northstar | ORGANIZATION | 0.98+ |
tomorrow | DATE | 0.98+ |
Accelerating Business Transformation with VMware Cloud on AWS 10 31
>>Hi everyone. Welcome to theCUBE special presentation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We've got two great guests, one videoing in from Germany and one from Maryland. We've got VMware and AWS. This is the customer successes with VMware Cloud on AWS showcase, accelerating business transformation, here in the showcase with Samir Kadoo, worldwide VMware strategic alliance solution architect leader with AWS. Samir, great to have you. And Daniel Rethmeier, principal architect, global AWS synergy at VMware. Guys, you're working together, you're the key players in the relationship as it rolls out and continues to grow. So welcome to theCUBE. >>Thank you. Greatly appreciate it. >>Great to have you guys both on. As you know, we've been covering this since 2016, when Pat Gelsinger, then CEO of VMware, and then-CEO of AWS Andy Jassy did this. It kind of caught people by surprise, but it really cleaned out the positioning in the enterprise for the success of VM workloads in the cloud. VMware's had great success with it since, and you guys have a great partnership. So this has been a really strategic, successful partnership. Where are we right now? Years later, we've got this whole inflection point coming. You're starting to see this idea of higher-level services, more performance coming in at the infrastructure side, more automation, more serverless. It's just getting better and better every year in the cloud, kind of a whole other level. Where are we, Samir? Let's start with you on the relationship. >>Yeah, totally. So there are several things to keep in mind, right? In 2016, that's when the partnership between AWS and VMware was announced, and then less than a year later, that's when we officially launched VMware Cloud on AWS. Years later, we've been driving innovation, working with our customers, jointly engineering this between AWS and VMware day in, day out as far as advancing VMware Cloud on AWS. Even if you look at the innovation that takes place with the solution, things have modernized, things have changed, there have been advancements, whether it's a security focus, a platform focus, or a networking focus. There have been modifications along the way, even storage. More recently, one of the things to keep in mind is we're looking to deliver value to our customers together. These are our joint customers, so there are hundreds of VMware and AWS engineers working together on this solution. And then factor in our sales teams, right? We have VMware and AWS sales teams interacting with each other on a constant, daily basis. We're working together with our customers at the end of the day too. Then we're looking to offer and develop jointly engineered solutions specific to VMware Cloud on AWS, and even with VMware's other platforms as well. The other thing comes down to the dedicated teams we have around this at both AWS and VMware. From solutions architects to our sales specialists, to our account teams, to specific engineering teams within the organizations, they all come together to drive this innovation forward with VMware Cloud on AWS and the jointly engineered solution partnership as well. And then I think one of the key things to keep in mind comes down to the nearly 600 channel partners that have achieved the VMware Cloud on AWS service competency. So think about it from the standpoint that there are 300 certified or validated technology solutions now available to our customers. That's innovation right off the top as well. >>Great stuff. Daniel, I wanna get to you in a second on this principal architect position you have. In your title, you're the global AWS synergy person. Synergy means bringing things together, making it work. Take us through the architecture, because we heard a lot of folks at VMware Explore this year, formerly VMworld, talking about how the workloads on IT have been completely transforming into cloud and hybrid, right? This is where the action is. Where are you? Are your customers taking advantage of that new shift? You've got AIOps, you've got IT ops changing a lot, you've got a lot more automation, edge is right around the corner. This is like a complete transformation from where we were just five years ago. What are your thoughts on the relationship? >>So at first, I would like to emphasize that our collaboration is not just that we have dedicated teams to help our customers get the most and the best benefits out of VMware Cloud on AWS. We are also enabling each other mutually. So AWS learns from us about the VMware technology, while VMware people learn about the AWS technology. We are also enabling our channel partners and we are working together on customer projects. We have regular assemblies, globally and also virtually, on Slack and the usual suspect tools, working together and listening to customers. That's very important: asking our customers where their needs are, and driving the solution in the direction that gets our customers the best benefits out of VMware Cloud on AWS. And over time we have really evolved the solution. As Samir mentioned, we just added additional storage solutions to VMware Cloud on AWS. We now have three different instance types that cover a broad range of workloads. For example, we just added the i4i host, which is ideal for workloads that require a lot of CPU power, such as, as you mentioned, AI workloads. >>Yeah. So I wanna get just specifically into the customer journey and their transformation. We've been reporting on SiliconANGLE and theCUBE in the past couple weeks in a big way that the ops teams are now the new devs, right? That sounds a little bit weird, but IT operations is now part of a lot more DataOps, security, writing code, composing with open source. A lot of great things are changing. Can you share specifically what customers are looking for when you come in and assess their needs? What are some of the things that they're doing with VMware on AWS specifically that's a little bit different? Can you share some highlights there? >>That's a great point, because originally VMware and AWS came from very different directions when it comes to the people we speak to at customers. For example, AWS is very developer focused, whereas VMware has a very strong footprint in the IT ops area. And usually these are very different teams, groups, and cultures, but it's getting together. However, we always try to address the customers, right? There are customers that want to build a new application from scratch and build resiliency, availability, recoverability, and scalability into the application.
But there are still a lot of customers that say, well we don't have all of the skills to redevelop everything to refactor an application to make it highly available. So we want to have all of that as a service, recoverability as a service, scalability as a service. We want to have this from the infrastructure. That was one of the unique selling points for VMware on premise and now we are bringing this into the cloud. >>Samir, talk about your perspective. I wanna get your thoughts, and not to take a tangent, but we had covered the AWS remar of, actually it was Amazon res machine learning automation, robotics and space. It was really kinda the confluence of industrial IOT software physical. And so when you look at like the IT operations piece becoming more software, you're seeing things about automation, but the skill gap is huge. So you're seeing low code, no code automation, you know, Hey Alexa, deploy a Kubernetes cluster. Yeah, I mean, I mean that's coming, right? So we're seeing this kind of operating automation meets higher level services meets workloads. Can you unpack that and share your opinion on, on what you see there from an Amazon perspective and how it relates to this? >>Yeah, totally. Right. And you know, look at it from the point of view where we said this is a jointly engineered solution, but it's not migrating to one option or the other option, right? It's more or less together. So even with VMware cloud on aws, yes it is utilizing AWS infrastructure, but your environment is connected to that AWS VPC in your AWS account. So if you wanna leverage any of the native AWS services, so any of the 200 plus AWS services, you have that option to do so. So that's gonna give you that power to do certain things, such as, for example, like how you mentioned with iot, even with utilizing Alexa or if there's any other service that you wanna utilize, that's the joining point between both of the offerings. Right off the top though, with digital transformation, right? You, you have to think about where it's not just about the technology, right? There's also where you want to drive growth in the underlying technology. Even in your business leaders are looking to reinvent their business. They're looking to take different steps as far as pursuing a new strategy. Maybe it's a process, maybe it's with the people, the culture, like how you said before, where people are coming in from a different background, right? They may not be used to the cloud, they may not be used to AWS services, but now you have that capability to mesh them together. Okay. Then also, Oh, >>Go ahead, finish >>Your thought. No, no, I was gonna say, what it also comes down to is you need to think about the operating model too, where it is a shift, right? Especially for that VS four admin that's used to their on-premises at environment. Now with VMware cloud on aws, you have that ability to leverage a cloud, but the investment that you made and certain things as far as automation, even with monitoring, even with logging, yeah. You still have that methodology where you can utilize that in VMware cloud on AWS two. >>Danielle, I wanna get your thoughts on this because at at explore and, and, and after the event, now as we prep for Cuban and reinvent coming up the big AWS show, I had a couple conversations with a lot of the VMware customers and operators and it's like hundreds of thousands of, of, of, of users and millions of people talking about and and peaked on VM we're interested in v VMware. 
The common thread was one's one, one person said, I'm trying to figure out where I'm gonna put my career in the next 10 to 15 years. And they've been very comfortable with VMware in the past, very loyal, and they're kind of talking about, I'm gonna be the next cloud, but there's no like role yet architects, is it Solution architect sre. So you're starting to see the psychology of the operators who now are gonna try to make these career decisions, like how, what am I gonna work on? And it's, and that was kind of fuzzy, but I wanna get your thoughts. How would you talk to that persona about the future of VMware on, say, cloud for instance? What should they be thinking about? What's the opportunity and what's gonna happen? >>So digital transformation definitely is a huge change for many organizations and leaders are perfectly aware of what that means. And that also means in, in to to some extent, concerns with your existing employees. Concerns about do I have to relearn everything? Do I have to acquire new skills? And, and trainings is everything worthless I learned over the last 15 years of my career? And the, the answer is to make digital transformation a success. We need not just to talk about technology, but also about process people and culture. And this is where VMware really can help because if you are applying VMware cloud on a, on AWS to your infrastructure, to your existing on-premise infrastructure, you do not need to change many things. You can use the same tools and skills, you can manage your virtual machines as you did in your on-premise environment. You can use the same managing and monitoring tools. If you have written, and many customers did this, if you have developed hundreds of, of scripts that automate tasks and if you know how to troubleshoot things, then you can use all of that in VMware cloud on aws. And that gives not just leaders, but but also the architects at customers, the operators at customers, the confidence in, in such a complex project, >>The consistency, very key point, gives them the confidence to go and, and then now that once they're confident they can start committing themselves to new things. Samir, you're reacting to this because you know, on your side you've got higher level services, you got more performance at the hardware level. I mean, lot improvement. So, okay, nothing's changed. I can still run my job now I got goodness on the other side. What's the upside? What's in it for the, for the, for the customer there? >>Yeah, so I think what it comes down to is they've already been so used to or entrenched with that VMware admin mentality, right? But now extending that to the cloud, that's where now you have that bridge between VMware cloud on AWS to bridge that VMware knowledge with that AWS knowledge. So I will look at it from the point of view where now one has that capability and that ability to just learn about the cloud, but if they're comfortable with certain aspects, no one's saying you have to change anything. You can still leverage that, right? But now if you wanna utilize any other AWS service in conjunction with that VM that resides maybe on premises or even in VMware cloud on aws, you have that option to do so. So think about it where you have that ability to be someone who's curious and wants to learn. And then if you wanna expand on the skills, you certainly have that capability to do so. >>Great stuff. I love, love that. 
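Daniel's point about reusing existing skills and scripts is easy to picture: the same vSphere automation an admin runs against an on-premises vCenter also runs against the SDDC vCenter in VMware Cloud on AWS, because it is the same vSphere API. Below is a minimal pyVmomi sketch along those lines; the vCenter hostname, user, and password are hypothetical placeholders.

```python
# Minimal sketch: the same pyVmomi inventory script an admin uses on premises
# also works against a VMware Cloud on AWS SDDC vCenter, since it exposes the
# same vSphere API. Hostname, user, and password are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(
    host="vcenter.sddc-example.vmwarevmc.com",
    user="cloudadmin@vmc.local",
    pwd="example-password",
    sslContext=ssl.create_default_context(),
)

try:
    content = si.RetrieveContent()
    # Walk the inventory and list every VM with its power state.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True
    )
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```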
Now that we're peeking behind the curtain here, I'd love to have you guys explain, cuz people wanna know what's goes on in behind the scenes. How does innovation get happen? How does it happen with the relationship? Can you take us through a day in the life of kind of what goes on to make innovation happen with the joint partnership? You guys just have a zoom meeting, Do you guys fly out, you write go do you ship thing? I mean I'm making it up, but you get the idea, what's the, what's, how does it work? What's going on behind the scenes? >>So we hope to get more frequently together in person, but of course we had some difficulties over the last two to three years. So we are very used to zoom conferences and and Slack meetings. You always have to have the time difference in mind if we are working globally together. But what we try, for example, we have reg regular assembled now also in person geo based. So for emia, for the Americas, for aj. And we are bringing up interesting customer situations, architectural bits and pieces together. We are discussing it always to share and to contribute to our community. >>What's interesting, you know, as, as events are coming back to here, before you get, you weigh in, I'll comment, as the cube's been going back out to events, we are hearing comments like what, what pandemic we were more productive in the pandemic. I mean, developers know how to work remotely and they've been on all the tools there, but then they get in person, they're happy to see people, but there's no one's, no one's really missed the beat. I mean it seems to be very productive, you know, workflow, not a lot of disruption. More if anything, productivity gains. >>Agreed, right? I think one of the key things to keep in mind is, you know, even if you look at AWS's and even Amazon's leadership principles, right? Customer obsession, that's key. VMware is carrying that forward as well. Where we are working with our customers, like how Daniel said met earlier, right? We might have meetings at different time zones, maybe it's in person, maybe it's virtual, but together we're working to listen to our customers. You know, we're taking and capturing that feedback to drive innovation and VMware cloud on AWS as well. But one of the key things to keep in mind is yes, there have been, there has been the pandemic, we might have been disconnected to a certain extent, but together through technology we've been able to still communicate work with our customers. Even with VMware in between, with AWS and whatnot. We had that flexibility to innovate and continue that innovation. So even if you look at it from the point of view, right? VMware cloud on AWS outposts, that was something that customers have been asking for. We've been been able to leverage the feedback and then continue to drive innovation even around VMware cloud on AWS outposts. So even with the on premises environment, if you're looking to handle maybe data sovereignty or compliance needs, maybe you have low latency requirements, that's where certain advancements come into play, right? So the key thing is always to maintain that communication track. >>And our last segment we did here on the, on this showcase, we listed the accomplishments and they were pretty significant. I mean go, you got the global rollouts of the relationship. It's just really been interesting and, and people can reference that. 
We won't get into it here, but I will ask you guys to comment on, as you guys continue to evolve the relationship, what's in it for the customer? What can they expect next? Cuz again, I think right now we're in at a, an inflection point more than ever. What can people expect from the relationship and what's coming up with reinvent? Can you share a little bit of kind of what's coming down the pike? >>So one of the most important things we have announced this year, and we will continue to evolve into that direction, is independent scale of storage. That absolutely was one of the most important items customer asked us for over the last years. Whenever, whenever you are requiring additional storage to host your virtual machines, you usually in VMware cloud on aws, you have to add additional notes. Now we have three different note types with different ratios of compute, storage and memory. But if you only require additional storage, you always have to get also additional compute and memory and you have to pay. And now with two solutions which offer choice for the customers, like FS six one, NetApp onap, and VMware cloud Flex Storage, you now have two cost effective opportunities to add storage to your virtual machines. And that offers opportunities for other instance types maybe that don't have local storage. We are also very, very keen looking forward to announcements, exciting announcements at the upcoming events. >>Samir, what's your, what's your reaction take on the, on what's coming down on your side? >>Yeah, I think one of the key things to keep in mind is, you know, we're looking to help our customers be agile and even scale with their needs, right? So with VMware cloud on aws, that's one of the key things that comes to mind, right? There are gonna be announcements, innovations and whatnot with outcoming events. But together we're able to leverage that to advance VMware cloud on AWS to Daniel's point storage, for example, even with host offerings. And then even with decoupling storage from compute and memory, right now you have the flexibility where you can do all of that. So to look at it from the standpoint where now with 21 regions where we have VMware cloud on AWS available as well, where customers can utilize that as needed when needed, right? So it comes down to, you know, transformation will be there. Yes, there's gonna be maybe where workloads have to be adapted where they're utilizing certain AWS services, but you have that flexibility and option to do so. And I think with the continuing events that's gonna give us the options to even advance our own services together. >>Well you guys are in the middle of it, you're in the trenches, you're making things happen, you've got a team of people working together. My final question is really more of a kind of a current situation, kind of future evolutionary thing that you haven't seen this before. I wanna get both of your reaction to it. And we've been bringing this up in, in the open conversations on the cube is in the old days it was going back this generation, you had ecosystems, you had VMware had an ecosystem they did best, had an ecosystem. You know, we have a product, you have a product, biz dev deals happen, people sign relationships and they do business together and they, they sell to each other's products or do some stuff. Now it's more about architecture cuz we're now in a distributed large scale environment where the role of ecosystems are intertwining. >>And this, you guys are in the middle of two big ecosystems. 
You mentioned channel partners, you both have a lot of partners on both sides. They come together. So you have this now almost a three dimensional or multidimensional ecosystem, you know, interplay. What's your thoughts on this? And, and, and because it's about the architecture, integration is a value, not so much. Innovation is only, you gotta do innovation, but when you do innovation, you gotta integrate it, you gotta connect it. So what is, how do you guys see this as a, as an architectural thing, start to see more technical business deals? >>So we are, we are removing dependencies from individual ecosystems and from individual vendors. So a customer no longer has to decide for one vendor and then it is a very expensive and high effort project to move away from that vendor, which ties customers even, even closer to specific vendors. We are removing these obstacles. So with VMware cloud on aws moving to the cloud, firstly it's, it's not a dead end. If you decide at one point in time because of latency requirements or maybe it's some compliance requirements, you need to move back into on-premise. You can do this if you decide you want to stay with some of your services on premise and just run a couple of dedicated services in the cloud, you can do this and you can mana manage it through a single pane of glass. That's quite important. So cloud is no longer a dead and it's no longer a binary decision, whether it's on premise or the cloud. It it is the cloud. And the second thing is you can choose the best of both works, right? If you are migrating virtual machines that have been running in your on-premise environment to VMware cloud on aws, by the way, in a very, very fast cost effective and safe way, then you can enrich later on enrich these virtual machines with services that are offered by aws. More than 200 different services ranging from object based storage, load balancing and so on. So it's an endless, endless possibility. >>We, we call that super cloud in, in a, in a way that we be generically defining it where everyone's innovating, but yet there's some common services. But the differentiation comes from innovation where the lock in is the value, not some spec, right? Samir, this is gonna where cloud is right now, you guys are, are not commodity. Amazon's completely differentiating, but there's some commodity things. Having got storage, you got compute, but then you got now advances in all areas. But partners innovate with you on their terms. Absolutely. And everybody wins. >>Yeah. And a hundred percent agree with you. I think one of the key things, you know, as Daniel mentioned before, is where it it, it's a cross education where there might be someone who's more proficient on the cloud side with aws, maybe more proficient with the viewers technology, but then for partners, right? They bridge that gap as well where they come in and they might have a specific niche or expertise where their background, where they can help our customers go through that transformation. So then that comes down to, hey, maybe I don't know how to connect to the cloud. Maybe I don't know what the networking constructs are. Maybe I can leverage that partner. That's one aspect to go about it. Now maybe you migrated that workload to VMware cloud on aws. Maybe you wanna leverage any of the native AWS services or even just off the top 200 plus AWS services, right? But it comes down to that skill, right? 
So again, solutions architecture at the end of the day is what it comes down to: being able to utilize the best of both worlds. That's what we're giving our customers at the end of the day. >>I mean, I just think it's a refactoring and innovation opportunity at all levels. I think now more than ever, you can take advantage of each other's ecosystems and partners and technologies and change how things get done while keeping the consistency. I mean, Daniel, you nailed that, right? You don't have to do anything. You still run it the way you've been working on it, and now you do new things. This is kind of a cultural shift. >>Yeah, absolutely. And if you look, not every customer, not every organization has the resources to refactor and re-platform everything. And we give them a very simple and easy way to move workloads to the cloud, simply run them, and at the same time they can free up resources to develop new innovations and grow their business. >>Awesome. Samir, thank you for coming on. Daniel, thank you for joining from Germany; Oktoberfest, I know it's evening over there, your weekend's here. And thank you for spending the time. Samir, I'll give you the final word. AWS re:Invent is coming up and we're preparing; we're going to have an exclusive with Adam, and we'll do a curtain raise, a dual preview. What's coming down on your side with the relationship, and what can we expect to hear about what you've got going on at re:Invent this year, the big show? >>Yeah, so I think Daniel hit upon some of the key points, but what I will say is we do have, for example, specific sessions, both that VMware is driving and also that AWS is driving. We even have what I call chalk talks, and then workshops as well. So for the customers, the attendees who are there, if they're looking to sit and listen to a session, yes, that's there. But if they want to be hands-on, that is also there. So personally for me, coming from an IT background, having been in the sysadmin world, being hands-on is one of the things I'm personally looking forward to. I think that's one of the key ways to learn and get familiar with the technology. >>re:Invent is an amazing show for the in-person experience. You guys nail it every year. We'll have three sets this year at theCUBE. It's becoming popular, more and more content. You guys have got live streams going on, a lot of content, a lot of media, so thanks for sharing that. Samir, Daniel, thank you for coming on this part of the showcase episode, really the customer successes with VMware Cloud on AWS, really accelerating business transformation with AWS and VMware. I'm John Furrier with theCUBE, thanks for watching.
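Daniel's earlier point about independent scale of storage comes down to simple arithmetic: when storage is tied to hosts, covering a storage shortfall means buying whole hosts, compute and memory included, while decoupled storage lets you add only the capacity you need. The back-of-the-envelope sketch below illustrates that comparison; all capacities and prices are made-up illustrative numbers, not VMware or AWS pricing.

```python
# Back-of-the-envelope sketch of the "independent scale of storage" point.
# All numbers are made-up illustrative values, not VMware or AWS pricing.
import math

HOST_USABLE_TB = 20.0            # hypothetical usable storage per host
HOST_COST = 100_000.0            # hypothetical cost per additional host
EXTERNAL_COST_PER_TB = 1_000.0   # hypothetical cost per TB of external storage

def extra_hosts_needed(storage_gap_tb: float) -> int:
    """Hosts required to cover a storage shortfall when storage is host-bound."""
    return math.ceil(storage_gap_tb / HOST_USABLE_TB)

def compare(storage_gap_tb: float) -> None:
    hosts = extra_hosts_needed(storage_gap_tb)
    host_bound_cost = hosts * HOST_COST                      # buys unneeded CPU/memory too
    decoupled_cost = storage_gap_tb * EXTERNAL_COST_PER_TB   # pays for storage only
    print(f"Need {storage_gap_tb} TB more storage:")
    print(f"  host-bound: {hosts} extra host(s), about ${host_bound_cost:,.0f}")
    print(f"  decoupled : external storage only, about ${decoupled_cost:,.0f}")

compare(30.0)
```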
And depress work were, we're kind of not really on board with kind of the vision, but as it played out as you guys had announced together, it did work out great for VMware. It did work out great for a D and it continues two years later and I want just get an update from you guys on where you guys see this has been going. I'll see multiple years. Where is the evolution of the solution as we are right now coming off VMware explorer just recently and going in to reinvent, which is only a couple weeks away, feels like tomorrow. But you know, as we prepare a lot going on, where are we with the evolution of the solution? >>I mean, first thing I wanna say is, you know, PBO 2016 was a someon moment and the history of it, right? When Pat Gelsinger and Andy Jessey came together to announce this and I think John, you were there at the time I was there, it was a great, great moment. We launched the solution in 2017, the year after that at VM Word back when we called it Word, I think we have gone from strength to strength. One of the things that has really mattered to us is we have learned froms also in the processes, this notion of working backwards. So we really, really focused on customer feedback as we build a service offering now five years old, pretty remarkable journey. You know, in the first years we tried to get across all the regions, you know, that was a big focus because there was so much demand for it. >>In the second year we started going really on enterprise grade features. We invented this pretty awesome feature called Stretch clusters, where you could stretch a vSphere cluster using VSA and NSX across two AZs in the same region. Pretty phenomenal four nine s availability that applications start started to get with that particular feature. And we kept moving forward all kinds of integration with AWS direct connect transit gateways with our own advanced networking capabilities. You know, along the way, disaster recovery, we punched out two, two new services just focused on that. And then more recently we launched our outposts partnership. We were up on stage at Reinvent, again with Pat Andy announcing AWS outposts and the VMware flavor of that VMware cloud and AWS outposts. I think it's been significant growth in our federal sector as well with our federal and high certification more recently. So all in all, we are super excited. We're five years old. The customer momentum is really, really strong and we are scaling the service massively across all geos and industries. >>That's great, great update. And I think one of the things that you mentioned was how the advantages you guys got from that relationship. And, and this has kind of been the theme for AWS since I can remember from day one. Fred, you guys do the heavy lifting as as, as you always say for the customers here, VMware comes on board, takes advantage of the AWS and kind of just doesn't miss a beat, continues to move their workloads that everyone's using, you know, vSphere and these are, these are big workloads on aws. What's the AWS perspective on this? How do you see it? >>Yeah, it's pretty fascinating to watch how fast customers can actually transform and move when you take the, the skill set that they're familiar with and the advanced capabilities that they've been using on Preem and then overlay it on top of the AWS infrastructure that's, that's evolving quickly and, and building out new hardware and new instances we'll talk about. 
But that combined experience between both of us on a jointly engineered solution, bringing the best security and the best features that really matter for those workloads, drives a lot of efficiency and speed for the customer. So it's been well received, and the partnership is stronger than ever, from an engineering standpoint and from a business standpoint. And obviously it's been very interesting to look at just how we stay day one in terms of looking at new features and responding to what customers want. So pretty excited about just seeing the transformation and the speed at which customers can move to VMC. >>Yeah, that's a great value proposition. We've been talking about that in context too. Anyone building on top of the cloud can have their own supercloud, as we call it, if you take advantage of all the CapEx and investment Amazon has made and AWS has made, and continues to make, in performance, IaaS and PaaS, all great stuff. I have to ask you both, as you see this going to the next level, what are some of the differentiations you see around the service compared to other options on the market? What makes it different? What's the combination? You mentioned jointly engineered; what are some of the key differentiators of the service compared to others? >>Yeah, I think one of the key things Fred talked about is this jointly engineered notion, right from day one. We were the early adopters of the AWS Nitro platform, right, the reinvention of EC2 back five years ago. And so we have had a very, very strong engineering partnership at that level. I think from a VMware customer standpoint, you get the full software-defined data center, compute, storage, and networking, on EC2 bare metal across all regions. You can scale that elastically up and down. It's pretty phenomenal just having that consistency globally, on AWS EC2 global regions. Now, the other thing that's a real differentiator for us, that customers tell us about, is this whole notion of a managed service, right? And this was somewhat new to VMware. But we took away the pain of this undifferentiated heavy lifting, where customers had to provision, rack, and stack hardware, configure the software on top, and then upgrade the software and the security patches on top. So we took away all of that pain as customers transitioned to VMware Cloud on AWS. In fact, my favorite story from last year, when we were all going through the Log4j debacle, the industry was just going through that, right? My favorite proof point from customers was that before they even raised the issue to us, we sent them a notification saying we had already patched all of your systems, no action needed from you. The customers were super thrilled. I mean, these are large banks and many other customers around the world, super thrilled they had to take no action, but a pretty incredible industry challenge that we were all facing. >>Narayan, that's a great point. You know, the whole managed service piece brings up security; you were kind of teasing at it, but there are always vulnerabilities that emerge when you are doing complex logic. And as you grow your solutions, there are more bits. You know, Fred, we were commenting before we came on camera, there are more bits than ever before, and at the physics layer too, as well as the software. So you never know when there's gonna be a zero-day vulnerability out there. It just happens. We saw one with Fortinet this week, it came outta the woodwork.
But moving fast on those patches, it's huge. This brings up the whole support angle. I wanted to ask you about how you guys are doing that as well, because to me, we see the value when we talk to customers on theCUBE about this; it was a really easy understanding of what the cloud means to them with VMware, now with AWS. But the question that comes up, that we want to get more clarity on, is how do you guys handle support together? >>Well, what's interesting about this is that it's done mutually. We have dedicated support teams on both sides that work together pretty seamlessly to make sure that whether there's an issue at any layer, including all the way up into the app layer, as you think about some of the other workloads like SAP, we'll go end to end and make sure that we support the customer regardless of where the particular issue might be for them. And on top of that, we look at where we're improving reliability as a first order principle between both companies. So from an availability and reliability standpoint, it's top of mind, and no matter where the particular item might land, we're gonna go help the customer resolve it. That works really well. >>On the VMware side, what's been the feedback there? What are some of the updates? >>Yeah, look, VMware owns and operates the service, but we have a phenomenal back-end relationship with AWS. Customers call VMware for the service for any issues, and then we have an awesome relationship with AWS on the back end for support issues or any hardware issues, the back-end management that we jointly do, all of the hard problems that customers don't have to worry about. I think on the front end, we also have a really good group of solution architects across the two companies that help to really explain the solution and do complex things like cloud migration, which is much, much easier with VMware Cloud on AWS. You know, we are presenting that easy button to the public cloud in many ways. And so we have a whole technical audience across the two companies working with customers every single day. >>You know, you had mentioned, I've got a list here, some of the innovations: the stretch clustering, getting the geos working, advanced networking, disaster recovery, FedRAMP and public sector certifications, Outposts, all good. You guys are checking the boxes every year. You've got a good accomplishments list there on the VMware and AWS side of this relationship. The question I'm interested in is, what's next? What recent innovations are you doing, and what are you making investments in? What's on the list this year, and what items will be next year? People wanna know what's next; they don't wanna see stagnant growth here, they wanna see more action, you know, as cloud continues to scale and modern applications, cloud native, more and more containers, more CI/CD pipelining with modern apps, put more pressure on the system. What's new, what are the new innovations? >>Absolutely. And as a five-year-old service offering, innovation is top of mind for us every single day. So just to call out a few recent innovations that we announced in San Francisco at VMware Explore. First of all, our new platform, i4i.metal. It's Ice Lake based and it's pretty awesome.
It's the latest and greatest, all the speeds and feeds that we would expect from VMware and AWS at this point in our relationship. We announced two different storage options. This is that notion of working from customer feedback, allowing customers even more price reductions: really take that storage off and park it externally, right, and separate it from compute. So, two different storage offerings there. One is with Amazon FSx for NetApp ONTAP, which brings our NetApp partnership into the equation as well and really engages that NetApp base; really excited about this offering. And the second storage offering is VMware Cloud Flex Storage, VMware's own managed storage offering. Beyond that, we have done a lot of other innovations as well. I really wanted to talk about VMware Cloud Flex Compute, where previously customers could only scale by hosts, and a host is 36 to 48 cores, give or take. But with VMware Cloud Flex Compute, we are now allowing this notion of a resource-defined compute model, where customers can get exactly the vCPU, memory, and storage that maps to their applications, however small they might be. So this notion of granularity is really a big innovation that we are launching in the market this year. And then last but not least, ransomware. Of course it's a hot topic in the industry, and we are seeing many, many customers ask for this. We are happy to announce a new ransomware recovery capability with our VMware Cloud DR solution. There's a lot of innovation there in the way we are able to do machine learning and make sure the workloads that are recovered from snapshots and backups are actually safe to use, so there's a lot of differentiation on that front as well. A lot of networking innovations with Project Northstar, the ability to go from layer 4 through layer 7, and new SaaS services in that area as well. And keep in mind that the service already supports managed Kubernetes for containers; it's built in to the same clusters that have virtual machines. So this notion of a single service with a great TCO for VMs and containers is at the heart of our offering. >>The networking side certainly is a hot area to keep innovating on. Every year it's the same conversation: get better, faster networking, more options there. The Flex Compute is interesting. If you don't mind me getting a quick clarification, could you explain resource defined versus hardware defined? Because this is kind of what we saw at Explore coming out, that notion of resource defined versus hardware defined. What does that mean? >>Yeah, I think we have been super successful in this hardware-defined notion. We're scaling by the hardware unit that we present as software-defined data centers, right? And that's been super successful. But customers wanted more. Especially customers in different parts of the world wanted to start even smaller and grow even more incrementally, right, lower their costs even more. And so this is the part where resource defined starts to be very, very interesting, as a way to think about: here's my bag of resources, based exactly on what the customer requests, five machines, five containers, sized exactly for that. And then as utilization grows, we elastically, behind the scenes, are able to grow it through policies. So that's a whole different dimension. It's a whole different service offering that adds value, and customers are comfortable.
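To make the hardware-defined versus resource-defined distinction concrete, here is a small, illustrative Python sketch. The 36-core host figure echoes the "36 to 48 cores, give or take" remark above; the two-host minimum, the vCPU-per-core ratio, and the sample workload are assumptions invented for the example rather than VMware or AWS numbers.

```python
# Illustrative sketch of host-defined vs. resource-defined sizing.
# Host size reflects the "36 to 48 cores, give or take" comment above; the
# other constants and the sample workload are assumptions for the example.
import math

CORES_PER_HOST = 36        # assumed low end of a bare-metal host
MIN_HOSTS = 2              # assumed minimum cluster size
VCPU_PER_CORE = 4          # assumed consolidation ratio

def host_defined(required_vcpus: int) -> int:
    """Host-defined model: capacity is bought in whole-host increments."""
    cores_needed = math.ceil(required_vcpus / VCPU_PER_CORE)
    return max(MIN_HOSTS, math.ceil(cores_needed / CORES_PER_HOST))

def resource_defined(required_vcpus: int, required_ram_gib: int) -> dict:
    """Resource-defined model: ask for exactly what the applications need."""
    return {"vcpu": required_vcpus, "ram_gib": required_ram_gib}

# A small environment: five VMs at 4 vCPU / 16 GiB each.
vcpus, ram_gib = 5 * 4, 5 * 16
print("host-defined     :", host_defined(vcpus), "whole hosts")
print("resource-defined :", resource_defined(vcpus, ram_gib))
```

The granularity point is visible in the output: the same 20-vCPU request that forces a two-host footprint in the host-defined model can be expressed as just 20 vCPU and 80 GiB of memory in the resource-defined one.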
They can go from one model to the other; they can go back to that host-based model if they so choose. And there's a jump-off point across these two different economic models. >>It's kind of cloud flexibility right there. I like the name. Fred, let's get into some of the examples of customers, if you don't mind; we have some time. I wanna unpack a little bit of what's going on with the customer deployments. One of the things we've heard again on theCUBE from customers is that they like the clarity of the relationship and they love the cloud positioning of it. And then what happens is they lift and shift the workloads and it feels great, it's just like we're running VMware on AWS, and then they start consuming higher-level services; that next level of adoption happens because it's in the cloud. So can you guys take us through some recent examples of customer wins or deployments where they're using VMware Cloud on AWS, on getting started, and then how do they progress once they're there? How does it evolve? Can you walk us through a couple of use cases? >>Sure. Well, there's a couple. One, it's pretty interesting that, like you said, as there are more and more bits, you need better and better hardware and networking. And we're super excited about the i4i and the capabilities there, in terms of doubling or tripling what we're doing around lower variability on latency and just improving all the speeds. But what customers are doing with it, like the college in New Jersey, they're accelerating their deployment, onboarding over 7,400 students over a six to eight month period, and they've really realized a ton of savings. But what's interesting is where and how they can actually grow onto additional native services too. So connectivity to any other services is available as they start to move and migrate into this. The options there, obviously, are tied to all the innovation that we have across our services, whether it's containerized with what they're doing with Tanzu, or with any other container or services within AWS. So there are some pretty interesting scenarios where that data or the processing, which is moved quickly with full compliance, whether it's in healthcare or regulated business, is allowed to then consume and use things like Amazon Textract or any other really cool service that has monthly and quarterly innovations. So there are things you simply could not do before that are coming out, saving customers money and building innovative applications on top of their current app base in a rapid fashion. So pretty excited about it. There are a lot of examples; I probably don't have time to go into too many here. But that's actually the best part, listening to customers and seeing how many net new services and new applications they are actually building on top of this platform. >>Narayan, what's your perspective from the VMware side? You guys now have a lot of headroom to offer customers with Amazon's higher-level services and whatever's homegrown that's being rolled out, because you now have a lot of hybrid too. So what's your take on what's happening with customers?
>>I mean, it's been phenomenal, the customer adoption of this. You know, banks and many other highly sensitive verticals are running production-grade, tier one applications on the service over the last five years. And I have a couple of really good examples. S&P Global is one of my favorite examples. Large financial services firm; they merged with IHS Markit, a big sort of conglomeration. Now, both customers were using VMware Cloud on AWS in different ways. And one of their use cases was, how do I respond to these global opportunities without having to invest in physical data centers? And then, how do I migrate and consolidate all my data centers across the globe, of which there were many? And so one specific example for this company was how they migrated 1,000 workloads to VMware Cloud on AWS in just six weeks. Pretty phenomenal, if you think about everything that goes into a cloud migration process: people, process, technology. And the beauty of the technology, going from VMware point A to VMware point B, is the lowest-cost, lowest-risk approach to adopting VMware Cloud on AWS. So that's one of my favorite examples. There are many other examples across other verticals that we continue to see. The good thing is we are seeing rapid expansion across the globe; we're constantly entering new markets, adding regions, and progressing our roadmap there. >>Yeah, it's great to see. I mean, the data center migrations go from many, many months to weeks. It's interesting to see some of those success stories. So congratulations. >>One of the other interesting, fascinating benefits is the sustainability improvement in terms of being green. So the efficiency gains that we have, both in current-generation and new-generation processors, and everything that we're doing to make sure that when a customer can be elastic, they're also saving power, which is really critical in a lot of regions worldwide at this point in time. They're seeing those benefits. If you're running really inefficiently in your own data center, that is just not a great use of power. So the actual calculators and the benefits to these workloads are pretty phenomenal, just in being more green, which I like. We all need to do our part there, and this is a big part of it here. >>It's a huge point about the sustainability. Fred, I'm glad you called that out. The other one I would say is supply chain issues. Another constraint you see is, I can't buy hardware. And the third one is really obvious, but no one really talks about it: security, right? I mean, I remember interviewing Stephen Schmidt at AWS many years ago, this is like 2013, and at that time people were saying the cloud's not secure. And he's like, listen, it's more secure in the cloud than on premise. And if you look at the security breaches, it's all about the on-premise data center vulnerabilities, not so much hardware. So there's a lot you gotta stay current on, and the isolation there is hard. So I think security and supply chain, Fred, is another one. Do you agree? >>I absolutely agree. It's hard to manage supply chain nowadays. We put a lot of effort into that, and I think we have a great ability to forecast and make sure that we can lean in and have the resources that are available and run them more efficiently.
Yeah, and then like you said on the security point, security is job one. It is, it is the only P one. And if you think of how we build our infrastructure from Nitro all the way up and how we respond and work with our partners and our customers, there's nothing more important. >>And naron your point earlier about the managed service patching and being on top of things, it's really gonna get better. All right, final question. I really wanna thank you for your time on this showcase. It's really been a great conversation. Fred, you had made a comment earlier. I wanna kind of end with kind of a curve ball and put you eyes on the spot. We're talking about a modern, a new modern shift. It's another, we're seeing another inflection point, we've been documenting it, it's almost like cloud hitting another inflection point with application and open source growth significantly at the app layer. Continue to put a lot of pressure and, and innovation in the infrastructure side. So the question is for you guys each to answer is what's the same and what's different in today's market? So it's kind of like we want more of the same here, but also things have changed radically and better here. What are the, what's, what's changed for the better and where, what's still the same kind of thing hanging around that people are focused on? Can you share your perspective? >>I'll, I'll, I'll, I'll tackle it. You know, businesses are complex and they're often unique that that's the same. What's changed is how fast you can innovate. The ability to combine manage services and new innovative services and build new applications is so much faster today. Leveraging world class hardware that you don't have to worry about that's elastic. You, you could not do that even five, 10 years ago to the degree you can today, especially with innovation. So innovation is accelerating at a, at a rate that most people can't even comprehend and understand the, the set of services that are available to them. It's really fascinating to see what a one pizza team of of engineers can go actually develop in a week. It is phenomenal. So super excited about this space and it's only gonna continue to accelerate that. That's my take. All right. >>You got a lot of platform to compete on with, got a lot to build on then you're Ryan, your side, What's your, what's your answer to that question? >>I think we are seeing a lot of innovation with new applications that customers are constant. I think what we see is this whole notion of how do you go from desktop to production to the secure supply chain and how can we truly, you know, build on the agility that developers desire and build all the security and the pipelines to energize that motor production quickly and efficiently. I think we, we are seeing, you know, we are at the very start of that sort of of journey. Of course we have invested in Kubernetes the means to an end, but there's so much more beyond that's happening in industry. And I think we're at the very, very beginning of this transformations, enterprise transformation that many of our customers are going through and we are inherently part of it. >>Yeah. Well gentlemen, I really appreciate that we're seeing the same thing. It's more the same here on, you know, solving these complexities with distractions. Whether it's, you know, higher level services with large scale infrastructure at, at your fingertips. Infrastructures, code, infrastructure to be provisioned, serverless, all the good stuff happen in Fred with AWS on your side. 
And we're seeing customers resonate with this idea of being an operator again, being a cloud operator and developer. So developer ops, DevOps, is kind of changing too. So all for the better. Thank you for spending the time, and we're seeing, again, that traction with the VMware customer base and AWS getting along great together. So thanks for sharing your perspectives. >>I appreciate it. Thank you so much. >>Okay, thank you, John. >>Okay, this is theCUBE and AWS VMware showcase, accelerating business transformation, VMware Cloud on AWS, a jointly engineered solution bringing innovation to the VMware customer base, going to the cloud and beyond. I'm John Furrier, your host. Thanks for watching. Hello everyone. Welcome to the special CUBE presentation of accelerating business transformation on VMC on AWS. I'm John Furrier, host of theCUBE. We have Ashish Dhawan, Director of Global Sales and Go-to-Market for VMware Cloud on AWS. This is a great showcase and should be a lot of fun. Ashish, thanks for coming on. >>Hi John. Thank you so much. >>So VMware Cloud on AWS has been well documented as this big success for VMware and AWS. As customers move their workloads into the cloud, the IT operations of VMware customers are signaling a lot of change. This is changing the landscape globally on cloud migration and beyond. What's your take on this? Can you open this up with the most important story around VMC on AWS? >>Yes, John. The most important thing for our customers today is how they can safely and swiftly move their IT infrastructure and applications to cloud. Now, VMware Cloud on AWS is a service that allows all vSphere-based workloads to move to cloud safely, swiftly, and reliably. Banks can move their core banking platforms, insurance companies move their core insurance platforms, telcos move their OSS and BSS platforms, and government organizations are moving their citizen engagement platforms using VMC on AWS, because this is one platform that allows you to move their VMware-based platforms very fast. Migrations can happen in a matter of days instead of months, extremely securely. It's a VMware managed service. It's very secure and highly reliable, and it gets the reliability of the underlying AWS infrastructure along with it. So it's a win-win from our customers' perspective. >>You know, we reported on this big news in 2016 with Andy Jassy and Pat Gelsinger at the time. A lot of people said it was a bad deal. It turned out to be a great deal, because not only could VMware customers actually migrate to the cloud, and do it safely, which was their number one concern, they didn't want disruption to their operations, but they could also position themselves for what's beyond just shifting to the cloud. So I have to ask you, since you've got your finger on the pulse here, what are we seeing in the market when it comes to migrating and modernizing in the cloud? Because that's the next step. They go to the cloud, you guys have done that, doing it, then they go, I gotta modernize, which means kind of upgrading or refactoring. What's your take on that? >>Yeah, absolutely. Look, the first step is to help our customers assess their infrastructure and licensing and their entire IT operations. Once we've done the assessment, we then create their migration plans. A lot of our customers are at that inflection point. They're looking at their real estate, their data center real estate. They're looking at their contracts with colocation vendors.
They really want to exit their data centers, right? And VMware cloud and AWS is a perfect solution for customers who wanna exit their data centers, migrate these applications onto the AWS platform using VMC on aws, get rid of additional real estate overheads, power overheads, be socially and environmentally conscious by doing that as well, right? So that's the migration story, but to your point, it doesn't end there, right? Modernization is a critical aspect of the entire customer journey as as well customers, once they've migrated their ID applications and infrastructure on cloud get access to all the modernization services that AWS has. They can correct easily to our data lake services, to our AIML services, to custom databases, right? They can decide which applications they want to keep and which applications they want to refactor. They want to take decisions on containerization, make decisions on service computing once they've come to the cloud. But the most important thing is to take that first step. You know, exit data centers, come to AWS using vmc or aws, and then a whole host of modernization options available to them. >>Yeah, I gotta say, we had this right on this, on this story, because you just pointed out a big thing, which was first order of business is to make sure to leverage the on-prem investments that those customers made and then migrate to the cloud where they can maintain their applications, their data, their infrastructure operations that they're used to, and then be in position to start getting modern. So I have to ask you, how are you guys specifically, or how is VMware cloud on s addressing these needs of the customers? Because what happens next is something that needs to happen faster. And sometimes the skills might not be there because if they're running old school, IT ops now they gotta come in and jump in. They're gonna use a data cloud, they're gonna want to use all kinds of machine learning, and there's a lot of great goodness going on above the stack there. So as you move with the higher level services, you know, it's a no brainer, obviously, but they're not, it's not yesterday's higher level services in the cloud. So how are, how is this being addressed? >>Absolutely. I think you hit up on a very important point, and that is skills, right? When our customers are operating, some of the most critical applications I just mentioned, core banking, core insurance, et cetera, they're most of the core applications that our customers have across industries, like even, even large scale ERP systems, they're actually sitting on VMware's vSphere platform right now. When the customer wants to migrate these to cloud, one of the key bottlenecks they face is skill sets. They have the trained manpower for these core applications, but for these high level services, they may not, right? So the first order of business is to help them ease this migration pain as much as possible by not wanting them to, to upscale immediately. And we VMware cloud and AWS exactly does that. I mean, you don't have to do anything. You don't have to create new skill set for doing this, right? Their existing skill sets suffice, but at the same time, it gives them that, that leeway to build that skills roadmap for their team. DNS is invested in that, right? Yes. We want to help them build those skills in the high level services, be it aml, be it, be it i t be it data lake and analytics. We want to invest in them, and we help our customers through that. 
So ultimately, that goal is front and center. >>I wanna get into some of the use cases and success stories, but I want to just reiterate, hit back on your point on the skill thing. Because if you look at what you guys have done at AWS, you've essentially, and Andy Jassy used to talk about this all the time when I would interview him, and now last year Adam was saying the same thing, you guys do all the heavy lifting. But if you're a VMware customer, user, or operator, you are used to things. You don't have to relearn to be a cloud architect; you're already in the game. So this is almost like an instant path to cloud skills for the VMware community. There's hundreds of thousands of VMware architects and operators that now instantly become cloud architects, literally overnight. Can you respond to that? Do you agree with that? And then give an example. >>Yes, absolutely. You know, if you have skills on the VMware platform, migrating to AWS using VMware Cloud on AWS is absolutely possible. You don't have to really change the skills. The operations are exactly the same. The management systems are exactly the same. So you don't really have to change anything, but the advantage is that you get access to all the other AWS services. So you are instantly able to integrate with other AWS services, and you become a cloud architect immediately, right? You are able to solve some of the critical problems that your underlying IT infrastructure has, immediately, using this. And I think that's a great value proposition for our customers to use this service. >>And just one more point, I want to get into something that's really kind of inside baseball or nuanced. VMC, or VMware Cloud on AWS, means something. Could you take a minute to explain what "on AWS" means? Is it just because you're hosting and using Amazon for the workload? Being on AWS means something specific in your world. What does being VMC on AWS mean? >>Yes, this is a great question, by the way. You know, "on AWS" means that VMware's vSphere platform, which is an iconic enterprise virtualization software with a disproportionately high market share across industries, when we wanted to create a cloud product along with them, obviously our aim was for this platform to have the goodness of the underlying AWS infrastructure, right? And therefore, when we created this VMware Cloud solution, it literally uses the AWS platform underneath, right? And that's why it's called VMware Cloud on AWS: using the wide portfolio of our regions across the world and the strength of the underlying infrastructure, the reliability and sustainability that it offers. And therefore this product is called VMC on AWS. >>It's a distinction I think is worth noting, and it does reflect engineering and levels of integration that go well beyond just having a SaaS app or basically platform as a service, PaaS services. Now, supercloud, we'll talk about that a little bit in another interview. But I gotta get one more question in before we get into the use cases and customer success stories. In most of the VMware world, in that IT world, it used to be that when you heard migration, people would go, oh my God, that's gonna take months. And when I hear about moving stuff around and doing cloud native, the first reaction people might have is complexity.
So two questions for you before we move on to the next talk track: complexity. How are you addressing the complexity issue, and how long do these migrations take? Is it easy? Is it hard? I mean, the knee-jerk reaction is months, you're very used to that. If they're dealing with Oracle or other old-school vendors, the old guard would be like, it takes a year to move stuff around. So can you comment on complexity and speed? >>Yeah. So the first thing is complexity. And you know, what makes anything complex is if you're required to acquire new skill sets or you're required to manage something differently. And with VMware Cloud on AWS, on both these aspects, you don't have to do anything, right? You don't have to acquire new skill sets. Your existing IT operations skill sets on VMware's platforms are absolutely fine, and you don't have to manage it any differently than how you're managing your IT infrastructure today. So in both these aspects it's exactly the same, and therefore it is absolutely not complex as far as VMware Cloud on AWS is concerned. And the other thing is speed. This is where the huge differentiation is. You have seen that large banks and large telcos have now moved their workloads literally in days instead of months, because of VMware Cloud on AWS. A lot of times customers come to us with specific deadlines because they want to exit their data centers on a particular date, and VMware Cloud on AWS is called upon to do that migration, right? So speed is absolutely critical. The reason is also exactly the same: because you are using exactly the same platform and the same management systems, the people are available to you and you're able to migrate quickly, right? I would just reference that recently we got an award from President Zelensky of Ukraine for migrating their entire digital IT infrastructure, and that happened because they were using VMware Cloud on AWS, and it happened very swiftly. >>That's been a great example. I mean, that one's political, but the economic advantage of getting out of the data center could be national security. You mentioned Ukraine; obviously you see bombing and death over there. So clearly that's a critical crown jewel for running their operations, which is world mission critical. So, great stuff. I love the speed thing; I think that's a huge one. Let's get into some of the use cases. The first one I wanted to talk about is what we just hit on: data center migration. It could be financial reasons in a downturn, or market growth. People can make money by shifting to the cloud, either saving money or making money. You win on both sides. It's almost recession proof, if you will; cloud is. So, use case number one, data center migration. Take us through what that looks like. Give an example of a success. Take us through a day in the life of a data center migration in a couple minutes. >>Yeah. I can give you an example of a large bank that decided to migrate all of their data centers outside of their existing infrastructure. And they had a set timeline, right? They had a set timeline to migrate; they were coming up on a renewal, and they wanted to make sure that this set timeline was met. We did a complete assessment of their infrastructure.
We did a complete assessment of their IT applications, more than 80% of their IT applications, underlying v vSphere platform. And we, we thought that the right solution for them in the timeline that they wanted, right, is VMware cloud ands. And obviously it was a large bank, it wanted to do it safely and securely. It wanted to have it completely managed, and therefore VMware cloud and aws, you know, ticked all the boxes as far as that is concerned. >>I'll be happy to report that the large bank has moved to most of their applications on AWS exiting three of their data centers, and they'll be exiting 12 more very soon. So that's a great example of, of, of the large bank exiting data centers. There's another Corolla to that. Not only did they manage to manage to exit their data centers and of course use and be more agile, but they also met their sustainability goals. Their board of directors had given them goals to be carbon neutral by 2025. They found out that 35% of all their carbon foot footprint was in their data centers. And if they moved their, their ID infrastructure to cloud, they would severely reduce the, the carbon footprint, which is 35% down to 17 to 18%. Right? And that meant their, their, their, their sustainability targets and their commitment to the go to being carbon neutral as well. >>And that they, and they shift that to you guys. Would you guys take that burden? A heavy lifting there and you guys have a sustainability story, which is a whole nother showcase in and of itself. We >>Can Exactly. And, and cause of the scale of our, of our operations, we are able to, we are able to work on that really well as >>Well. All right. So love the data migration. I think that's got real proof points. You got, I can save money, I can, I can then move and position my applications into the cloud for that reason and other reasons as a lot of other reasons to do that. But now it gets into what you mentioned earlier was, okay, data migration, clearly a use case and you laid out some successes. I'm sure there's a zillion others. But then the next step comes, now you got cloud architects becoming minted every, and you got managed services and higher level services. What happens next? Can you give us an example of the use case of the modernization around the NextGen workloads, NextGen applications? We're starting to see, you know, things like data clouds, not data warehouses. We're not gonna data clouds, it's gonna be all kinds of clouds. These NextGen apps are pure digital transformation in action. Take us through a use case of how you guys make that happen with a success story. >>Yes, absolutely. And this is, this is an amazing success story and the customer here is s and p global ratings. As you know, s and p global ratings is, is the world leader as far as global ratings, global credit ratings is concerned. And for them, you know, the last couple of years have been tough as far as hardware procurement is concerned, right? The pandemic has really upended the, the supply chain. And it was taking a lot of time to procure hardware, you know, configure it in time, make sure that that's reliable and then, you know, distribute it in the wide variety of, of, of offices and locations that they have. And they came to us. We, we did, again, a, a, a alar, a fairly large comprehensive assessment of their ID infrastructure and their licensing contracts. And we also found out that VMware cloud and AWS is the right solution for them. 
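Both customer stories follow the same pattern Ashish describes: assess the estate first, and when the bulk of it already runs on vSphere and there is a hard exit date, land it on VMC on AWS and modernize afterward. The sketch below is a hypothetical illustration of that triage logic; the 80% threshold echoes the "more than 80% of their IT applications" data point above, and the class and field names are invented for the example rather than taken from any AWS or VMware assessment tool.

```python
# Hypothetical triage sketch inspired by the assessment stories above.
# Thresholds and field names are illustrative, not an AWS/VMware methodology.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    platform: str            # e.g. "vsphere", "bare-metal", "saas"

def recommend_path(apps: list[App], exit_deadline_months: int) -> str:
    vsphere_share = sum(a.platform == "vsphere" for a in apps) / len(apps)
    if vsphere_share >= 0.8 and exit_deadline_months <= 12:
        # Mostly vSphere plus a hard data-center exit date: migrate as-is,
        # then modernize (containers, managed databases, analytics) once landed.
        return "migrate to VMC on AWS first, modernize after landing"
    if vsphere_share >= 0.5:
        return "hybrid: migrate the vSphere estate, refactor the rest selectively"
    return "assess app by app; refactoring or re-platforming may come first"

inventory = [
    App("core-banking", "vsphere"),
    App("risk-engine", "vsphere"),
    App("erp", "vsphere"),
    App("hr-portal", "vsphere"),
    App("hpc-grid", "bare-metal"),
]
print(recommend_path(inventory, exit_deadline_months=6))
```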
>>So we worked there, migrated all their applications, and as soon as we migrated all their applications, they got, they got access to, you know, our high level services be our analytics services, our machine learning services, our, our, our, our artificial intelligence services that have been critical for them, for their growth. And, and that really is helping them, you know, get towards their next level of modern applications. Right Now, obviously going forward, they will have, they will have the choice to, you know, really think about which applications they want to, you know, refactor or which applications they want to go ahead with. That is really a choice in front of them. And, but you know, the, we VMware cloud and AWS really gave them the opportunity to first migrate and then, you know, move towards modernization with speed. >>You know, the speed of a startup is always the kind of the Silicon Valley story where you're, you know, people can make massive changes in 18 months, whether that's a pivot or a new product. You see that in startup world. Now, in the enterprise, you can see the same thing. I noticed behind you on your whiteboard, you got a slogan that says, are you thinking big? I know Amazon likes to think big, but also you work back from the customers and, and I think this modern application thing's a big deal because I think the mindset has always been constrained because back before they moved to the cloud, most IT, and, and, and on-premise data center shops, it's slow. You gotta get the hardware, you gotta configure it, you gotta, you gotta stand it up, make sure all the software is validated on it, and loading a database and loading oss, I mean, mean, yeah, it got easier and with scripting and whatnot, but when you move to the cloud, you have more scale, which means more speed, which means it opens up their capability to think differently and build product. What are you seeing there? Can you share your opinion on that epiphany of, wow, things are going fast, I got more time to actually think about maybe doing a cloud native app or transforming this or that. What's your, what's your reaction to that? Can you share your opinion? >>Well, ultimately we, we want our customers to utilize, you know, most of our modern services, you know, applications should be microservices based. When desired, they should use serverless applic. So list technology, they should not have monolithic, you know, relational database contracts. They should use custom databases, they should use containers when needed, right? So ultimately, we want our customers to use these modern technologies to make sure that their IT infrastructure, their licensing, their, their entire IT spend is completely native to cloud technologies. They work with the speed of a startup, but it's important for them to, to, to get to the first step, right? So that's why we create this journey for our customers, where you help them migrate, give them time to build the skills, they'll help them mo modernize, take our partners along with their, along with us to, to make sure that they can address the need for our customers. That's, that's what our customers need today, and that's what we are working backwards from. >>Yeah, and I think that opens up some big ideas. I'll just say that the, you know, we're joking, I was joking the other night with someone here in, in Palo Alto around serverless, and I said, you know, soon you're gonna hear words like architectural list. 
And that's a criticism on one hand, but you might say, Hey, you know, if you don't really need an architecture, you know, storage lists, I mean, at the end of the day, infrastructure is code means developers can do all the it in the coding cycles and then make the operations cloud based. And I think this is kind of where I see the dots connecting. Final thought here, take us through what you're thinking around how this new world is evolving. I mean, architecturals kind of a joke, but the point is, you know, you have to some sort of architecture, but you don't have to overthink it. >>Totally. No, that's a great thought, by the way. I know it's a joke, but it's a great thought because at the end of the day, you know, what do the customers really want? They want outcomes, right? Why did service technology come? It was because there was an outcome that they needed. They didn't want to get stuck with, you know, the, the, the real estate of, of a, of a server. They wanted to use compute when they needed to, right? Similarly, what you're talking about is, you know, outcome based, you know, desire of our customers and, and, and that's exactly where the word is going to, Right? Cloud really enforces that, right? We are actually, you know, working backwards from a customer's outcome and using, using our area the breadth and depth of our services to, to deliver those outcomes, right? And, and most of our services are in that path, right? When we use VMware cloud and aws, the outcome is a, to migrate then to modernize, but doesn't stop there, use our native services, you know, get the business outcomes using this. So I think that's, that's exactly what we are going through >>Actually, should actually, you're the director of global sales and go to market for VMware cloud on Aus. I wanna thank you for coming on, but I'll give you the final minute. Give a plug, explain what is the VMware cloud on Aus, Why is it great? Why should people engage with you and, and the team, and what ultimately is this path look like for them going forward? >>Yeah. At the end of the day, we want our customers to have the best paths to the cloud, right? The, the best path to the cloud is making sure that they migrate safely, reliably, and securely as well as with speed, right? And then, you know, use that cloud platform to, to utilize AWS's native services to make sure that they modernize their IT infrastructure and applications, right? We want, ultimately that our customers, customers, customer get the best out of, you know, utilizing the, that whole application experience is enhanced tremendously by using our services. And I think that's, that's exactly what we are working towards VMware cloud AWS is, is helping our customers in that journey towards migrating, modernizing, whether they wanna exit a data center or whether they wanna modernize their applications. It's a essential first step that we wanna help our customers with >>One director of global sales and go to market with VMware cloud on neighbors. He's with aws sharing his thoughts on accelerating business transformation on aws. This is a showcase. We're talking about the future path. We're talking about use cases with success stories from customers as she's thank you for spending time today on this showcase. >>Thank you, John. I appreciate it. >>Okay. This is the cube, special coverage, special presentation of the AWS Showcase. I'm John Furrier, thanks for watching.
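A recurring thread across these conversations is that once workloads land on VMC on AWS, they can call native AWS services directly; Fred's Amazon Textract example earlier is typical. The snippet below is a rough illustration of what that looks like from any workload with AWS credentials. The bucket and object names are placeholders, and IAM permissions and region configuration are assumed to already be in place.

```python
# Rough illustration of a migrated workload calling a native AWS service
# (Amazon Textract) through boto3. Bucket and object names are placeholders.
import boto3

def extract_lines(bucket: str, key: str) -> list[str]:
    """Return the detected text lines from a scanned document stored in S3."""
    textract = boto3.client("textract")
    response = textract.analyze_document(
        Document={"S3Object": {"Bucket": bucket, "Name": key}},
        FeatureTypes=["TABLES", "FORMS"],
    )
    return [
        block["Text"]
        for block in response["Blocks"]
        if block["BlockType"] == "LINE"
    ]

if __name__ == "__main__":
    for line in extract_lines("example-claims-bucket", "scanned-invoice.png"):
        print(line)
```

Nothing about the call cares whether the caller runs in an SDDC, on EC2, or on a laptop; that is the sense in which migrated applications gain access to the higher-level services discussed above.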
Ricky Cooper & Joseph George | VMware Explore 2022
(light corporate music) >> Welcome back, everyone, to VMware Explore 22. I'm John Furrier, host of theCUBE with Dave Vellante. Our 12th year covering VMware's User Conference, formerly known as VMworld, now rebranded as VMware Explore. Two great cube alumnus coming down the cube. Ricky Cooper, SVP, Worldwide Partner Commercials VMware, great to see you. Thanks for coming on. >> Thank you. >> We just had a great chat- >> Good to see you again. >> With the Discovery and, of course, Joseph George, vice president of Compute Industry Alliances. Great to have you on. Great to see you. >> Great to see you, John. >> So guys this year is very curious in VMware. A lot goin' on, the name change, the event. Big, big move. Bold move. And then they changed the name of the event. Then Broadcom buys them. A lot of speculation, but at the end of the day, this conference kind of, people were wondering what would be the barometer of the event. We're reporting this morning on the keynote analysis. Very good mojo in the keynote. Very transparent about the Broadcom relationship. The expo floor last night was buzzing. >> Mhm. >> I mean, this is not a show that's lookin' like it's going to be, ya' know, going down. >> Yeah. >> This is clearly a wave. We're calling it Super Cloud. Multi-Cloud's their theme. Clearly the cloud's happenin'. We not to date ourselves, but 2013 we were discussing on theCUBE- >> We talked about that. Yeah. Yeah. >> Discover about DevOps infrastructure as code- >> Mhm. >> We're full realization now of that. >> Yep. >> This is where we're at. You guys had a great partnership with VMware and HPE. Talk about where you guys see this coming together because customers are refactoring. They are lookin' at Cloud Native. The whole Broadcom visibility to the VMware customer bases activated them. They're here and they're leaning in. >> Yeah. >> What's going on? >> Yeah. Absolutely. We're seeing a renewed interest now as customers are looking at their entire infrastructure, bottoms up, all the way up the stack, and the notion of a hybrid cloud, where you've got some visibility and control of your data and your infrastructure and your applications, customers want to live in that sort of a cloud environment and so we're seeing a renewed interest. A lot of conversations we're having with customers now, a lot of customers committing to that model where they have applications and workloads running at the Edge, in their data center, and in the public cloud in a lot of cases, but having that mobility, having that control, being able to have security in their own, you know, in their control. There's a lot that you can do there and, obviously, partnering with VMware. We've been partners for so long. >> 20 years about. Yeah. Yeah. >> Yeah. At least 20 years, back when they invented stuff, they were inventing way- >> Yeah. Yeah. Yeah. >> VMware's got a very technical culture, but Ricky, I got to say that, you know, we commented earlier when Raghu was on, the CEO, now CEO, I mean, legendary product. I sent the trajectory to VMware. Everyone knows that. VMware, I can't know whether to tell it was VMware or HP, HP before HPE, coined hybrid- >> Yeah. >> 'Cause you guys were both on. I can't recall, Dave, which company coined it first, but it was either one of you guys. Nobody else was there. >> It was the partnership. >> Yes. I- (cross talking) >> They had a big thing with Pat Gelsinger. Dave, remember when he said, you know, he got in my grill on theCUBE live? 
But now you see- >> But if you focus on that Multi-Cloud aspect, right? So you've got a situation where our customers are looking at Multi-Cloud and they're looking at it not just as a flash in the pan. This is here for five years, 10 years, 20 years. Okay. So what does that mean then to our partners and to our distributors? You're seeing a whole seed change. You're seeing partners now looking at this. So, look at the OEMs, you know, the ones that have historically been vSphere customers are now saying, they're coming in droves saying, okay, what is the next step? Well, how can I be a Multi-Cloud partner with you? >> Yep. Right. >> How can I look at other aspects that we're driving here together? So, you know, GreenLake is a great example. We keep going back to GreenLake and we are partaking in GreenLake at the moment. The real big thing for us is going to be, right, let's make sure that we've got the agreements in place that support this SaaS and subscription motion going forward and then the sky's the limit for us. >> You're pluggin' that right into GreenLake, right? >> Well, here's why. Here's why. So customers are loving the fact that they can go to a public cloud and they can get an SLA. They come to a, you know, an On-Premise. You've got the hardware, you've got the software, you've got the, you know, the guys on board to maintain this through its life cycle. >> Right. I mean, this is complicated stuff. >> Yeah. >> Now we've got a situation where you can say, hey, we can get an SLA On-Premise. >> Yeah. And I think what you're seeing is it's very analogous to having a financial advisor just manage your portfolio. You're taking care of just submitting money. That's really a lot of what the customers have done with the public cloud, but now, a lot of these customers are getting savvy and they have been working with VMware Technologies and HPE for so long. They've got expertise. They know how they want their workloads architected. Now, we've given them a model where they can leverage the Cloud platform to be able to do this, whether it's On-Premise, The Edge, or in the public cloud, leveraging HPE GreenLake and VMware. >> Is it predominantly or exclusively a managed service or do you find some customers saying, hey, we want to manage ourself? How, what are you seeing is the mix there? >> It is not predominantly managed services right now. We're actually, as we are growing, last time we talked to HPE Discover we talked about a whole bunch of new services that we've added to our catalog. It's growing by leaps and bounds. A lot of folks are definitely interested in the pay as you go, obviously, the financial model, but are now getting exposed to all the other management that can happen. There are managed services capabilities, but actually running it as a service with your systems On-Prem is a phenomenal idea for all these customers and they're opening their eyes to some new ways to service their customers better. >> And another phenomenon we're seeing there is where partners, such as HPA, using other partners for various areas of their services implementation as well. So that's another phenomenon, you know? You're seeing the resale motion now going into a lot more of the services motion. >> It's interesting too, you know, I mean, the digital modernization that's goin' on. The transformation, whatever you want to call it, is complicated. >> Yeah. >> That's clear. One of the things I liked about the keynote today was the concept of cloud chaos. >> Yeah. 
>> Because we've been saying, you know, quoting Andy Grove at Intel, "Let chaos rain and rain in the chaos." >> Mhm. >> And when you have inflection points, complexity, which is the chaos, needs to be solved and whoever solves it kicks the inflection point, that's up into the right. So- >> Prime idea right here. Yeah. >> So GreenLake is- >> Well, also look at the distribution model and how that's changed. A couple of points on a deal. Now they're saying, "I'll be your aggregator. I'll take the strain and I'll give you scale." You know? "I'll give you VMware Scale for all, you know, for all of the various different partners, et cetera." >> Yeah. So let's break this down because this is, I think, a key point. So complexity is good, but the old model in the Enterprise market was- >> Sure. >> You solve complexity with more complexity. >> Yeah. >> And everybody wins. Oh, yeah! We're locked in! That's not what the market wants. They want some self-service. They want, as a service, they want easy. Developer first security data ops, DevOps, is already in the cycle, so they're going to want simpler. >> Yeah. >> Easier. Faster. >> And this is kind of why I'll say, for the big announcement today here at VMware Explore, around the VMware vSphere Distributed Services Engine, Project Monterey- >> Yeah. >> That we've talked about for so long, HPE and VMware and AMD, with the Pensando DPU, actually work together to engineer a solution for exactly that. The capabilities are fairly straightforward in terms of the technologies, but actually doing the work to do integration, joint engineering, make sure that this is simple and easy and able to be running HPE GreenLake, that's- >> That's invested in Pensando, right? >> We are. >> We're all investors. Yeah. >> What's the benefit of that? What's, that's a great point you made. What's the value to the customer, bottom line? That deep co-engineering, co-partnering, what does it deliver that others don't do? >> Yeah. Well, I think one example would be, you know, a lot of vendors can say we support it. >> Yep. >> That's great. That's actually a really good move, supporting it. It can be resold. That's another great move. I'm not mechanically inclined to where I would go build my own car. I'll go to a dealership and actually buy one that I can press the button and I can start it and I can do what I need to do with my car and that's really what this does is the engineering work that's gone on between our two companies and AMD Pensando, as well as the business work to make that simple and easy, that transaction to work, and then to be able to make it available as a service, is really what made, it's, that's why it's such a winner winner with our- >> But it's also a lower cost out of the box. >> Yep. >> Right. >> So you get in whatever. Let's call it 20%. Okay? But there's, it's nuanced because you're also on a new technology curve- >> Right. >> And you're able to absorb modern apps, like, you know, we use that term as a bromide, but when I say modern apps, I mean data-rich apps, you know, things that are more AI-driven not the conventional, not that people aren't doing, you know, SAP and CRM, they are, but there's a whole slew of new apps that are coming in that, you know, traditional architectures aren't well-suited to handle from a price performance standpoint. This changes that doesn't it? >> Well, you think also of, you know, going to the next stage, which is to go to market between the two organizations that before. 
At the moment, you know, HPE's running off doing various different things. We were running off to it again, it's that chaos that you're talking about. In cloud chaos, you got to go to market chaos. >> Yeah. >> But by simplifying four or five things, what are we going to do really well together? How do we embed those in GreenLake- >> Mhm. >> And be known in the marketplace for these solutions? Then you get a, you know, an organization that's really behind the go to market. You can help with sales activation the enablement, you know, and then we benefit from the scale of HPE. >> Yeah. >> What are those solutions I mean? Is it just, is it I.S.? Is it, you know, compute storage? >> Yeah. >> Is it, you know, specific, you know, SAP? Is it VDI? What are you seeing out there? >> So right now, for this specific technology, we're educating our customers on what that could be and, at its core, this solution allows customers to take services that normally and traditionally run on the compute system and run on a DPU now with Project Monterey, and this is now allowing customers to think about, okay, where are their use cases. So I'm, rather than going and, say, use it for this, we're allowing our customers to explore and say, okay, here's where it makes sense. Where do I have workloads that are using a lot of compute cycles on services at the compute level that could be somewhere else like networking as a great example, right? And allowing more of those compute cycles to be available. So where there are performance requirements for an application, where there is timely response that's needed for, you know, for results to be able to take action on, to be able to get insight from data really quick, those are places where we're starting to see those services moving onto something like a DPU and that's where this makes a whole lot more sense. >> Okay. So, to get this right, you got the hybrid cloud, right? >> [Ricky And Joseph] Yes. >> You got GreenLake and you got the distributed engine. What's that called the- >> For, it's HPE ProLiant- >> ProLiant with- >> The VMware- >> With vSphere. >> That's the compute- >> Distributed. >> Okay. So does the customer, how do you guys implement that with the customer? All three at the same time or they mix and match? What's that? How does that work? >> All three of those components. Yeah. So the beauty of the HP ProLiant with VMware vSphere-distributed services engine- >> Mhm. >> Also known as Project Monterey for those that are keeping notes at home- >> Mhm. >> It's, again, already pre-engineered. So we've already worked through all the mechanics of how you would have to do this. So it's not something you have to go figure out how you build, get deployment, you know, work through those details. That's already done. It is available through HPE GreenLake. So you can go and actually get it as a service in partnership with our customer, our friends here at VMware, and because, if you're familiar and comfortable with all the things that HP ProLiant has done from a security perspective, from a reliability perspective, trusted supply chain, all those sorts of things, you're getting all of that with this particular (indistinct). >> Sumit Dhawan had a great quote on theCUBE just an hour or so ago. He said you have to be early to be first. >> Yeah. (laughing) >> I love that quote. Okay. So you were- >> I fought the urge. >> You were first. You were probably a little early, but do you have a lead? I know you're going to say yes, okay. Let's just- >> Okay. 
>> Let's just assume that. >> Okay. Yeah. >> Relative to the competition, how do you know? How do you determine that? >> If we have a lead or not? >> Yeah. If you lead. If you're the best. >> We go to the source of the truth which is our customers. >> And what do they tell you? What do you look at and say, okay, now, I mean, when you have that honest conversation and say, okay, we are, we're first, we're early. We're keeping our lead. What are the things that you- >> I'll say it this way. I'll say it this way. We've been in a lot of businesses where there, where we do compete head-to-head in a lot of places. >> Mhm. >> And we know how that sales process normally works. We're seeing a different motion from our customers. When we talk about HPE GreenLake, there's not a lot of back and forth on, okay, well, let me go shop around. It is HP Green. Let's talk about how we actually build this solution. >> And I can tell you, from a VMware perspective, our customers are asking us for this the other way around. So that's a great sign is that, hey, we need to see this partnership come together in GreenLake. >> Yeah. >> It's the old adage that Amazon used to coin and Andy Jassy, you know, they do the undifferentiated heavy lifting. >> [Ricky And Joseph] Yeah. >> A lot of that's now Cloud operations. >> Mhm. >> Underneath it is infrastructure's code to the developer. >> That's right. >> That's at scale. >> That's right. >> And so you got a lot of heavy lifting being done with GreenLake- >> Right. >> Which is why there's no objections probably. >> Right. >> What's the choice? What are you going to shop? >> Yeah. >> There's nothing to shop around. >> Yeah, exactly. And then we've got, you know, that is really icing on the cake that we've, you know, that we've been building for quite some time and there is an understanding in the market that what we do with our infrastructure is hardened from a reliability and quality perspective. Like, times are tough right now. Supply chain issues, all that stuff. We've talked, all talked about it, but at HPE, we don't skimp on quality. We're going to spend the dollars and time on making sure we got reliability and security built in. It's really important to us. >> We had a great use case. The storage team, they were provisioning with containers. >> Yes. >> Storage is a service instantly we're seeing with you guys with VMware. Your customers' bringing in a lot of that into the mix as well. I got to ask 'cause every event we talk about AI and machine learning- >> Mhm. >> Automation and DevOps are now infiltrating in with the CICD pipeline. Security and data become a big conversation. >> [Ricky And Joseph] Agreed. >> Okay. So how do you guys look at that? Okay. You sold me on Green. Like, I've been a big fan from day one. Now, it's got maturity on it. I know it's going to get a lot more headroom to do. There's still a lot of work to do, but directionally it's pretty accurate, you know? It's going to be a success. There's still concern about security, the data layer. That's agnostic of environment, private cloud, hybrid, public, and Edge. So that's important and security- >> Great. >> Has got a huge service area. >> Yeah. >> These are on working progress. >> Yeah. Yeah. >> How do you guys view those? >> I think you've just hit the net on the head. I mean, I was in the press and journalist meetings yesterday and our answer was exactly the same. There is still so much work that can be done here and, you know, I don't think anybody is really emerging as a true leader. 
It's just a continuation of, you know, tryin' to get that right because it is what is the most important thing to our customers. >> Right. >> And the industry is really sort of catching up to that. >> And, you know, when you start talking about privacy and when you, it's not just about company information. It's about individuals' information. It's about, you know, information that, if exposed, actually could have real impact on people. >> Mhm. >> So it's more than just an I.T. problem. It is actually, and from HPE's perspective, security starts from when we're picking our suppliers for our components. Like, there are processes that we put into our entire trusted supply chain from the factory on the way up. I liken it to my golf swing. My golf swing. I slice right like you wouldn't believe. (John laughing) But when I go to the golf pros, they start me back at the mechanics, the foundational pieces. Here's where the problems are and start workin' on that. So my view is, our view is, if your infrastructure is not secure, you're goin' to have troubles with security as you go further up. >> Stay in the sandbox. >> Yeah. >> Yeah. So to speak, you know, they're driving range on the golf analogy there. I love that. Talk about supply chain security real quick because you mentioned supply chain on the hardware side. You're seeing a lot of open source and supply chain in software, trusted software. >> Yep. >> How does GreenLake look at that? How do you guys view that piece of it? That's an important part. >> Yeah. Security is one of the key pillars that we're actually driving as a company right now. As I said, it's important to our customers as they're making purchasing decisions and we're looking at it from the infrastructure all the way up to the actual service itself and that's the beauty of having something like HPE GreenLake. We don't have to pick, is the infrastructure or the middle where, or the top of stack application- >> It's (indistinct), right? >> It's all of it. >> Yeah. >> It's all of it. That matters. >> Quick question on the ecosystem posture. So- >> Sure. >> I remember when HP was, you know, one company and then the GSIs were a little weird with HP because of EDS, you know? You had data protector so we weren't really chatting up Veeam at the time, right? And as soon as the split happened, ecosystem exploded. Now you have a situation where you, Broadcom, is acquiring VMware. You guys, big Broadcom customer. Has your attitude changed or has it not because, oh, we meet with the customers already. Well, you've always said that, but have you have leaned in more? I mean, culturally, is HPE now saying, hmm, now we have some real opportunities to partner in new ways that we don't have to sleep with one eye open, maybe. (John laughing) >> So first of all, VMware and HPE, we've got a variety of different partners. We always have. >> Mhm. >> Well before any Broadcom announcement came along. >> Yeah, sure. >> We've been working with a variety of partners. >> And that hasn't changed. >> And that hasn't changed. And, if your question is, has our posture toward VMware changed at all, the answer's absolutely not. We believe in what VMware is doing. We believe in what our customers are doing with VMware and we're going to continue to work with VMware and partner with the (indistinct). >> And of course, you know, we had to spin out ourselves in November of last year, which I worked on, you know, the whole Dell thing. >> Yeah. We still had the same chairman. >> Yeah. 
There- (Dave chuckling) >> Yeah, but since then, I think what's really become very apparent and not, it's not just with HPE, but with many of our partners, many of the OEM partners, the opportunity in front of us is vast and we need to rely on each other to help us as, you know, solve the customer problems that are out there. So there's a willingness to overlook some things that, in the past, may have been, you know, barriers. >> But it's important to note also that it's not that we have not had history- >> Yeah. >> Right? Over, we've got over 200,000 customers join- >> Hundreds of millions of dollars of business- >> 100,000, over 10,000, or 100,000 channel partners that we all have in common. >> Yeah. Yeah. >> Yep. >> There's numerous- >> And independent of the whole Broadcom overhang there. >> Yeah. >> There's the ecosystem floor. >> Yeah. >> The expo floor. >> Right. >> I mean, it's vibrant. I mean, there's clearly a wave coming, Ricky. We talked about this briefly at HPE Discover. I want to get an update from your perspectives, both of you, if you don't mind weighing in on this. Clearly, the wave, we're calling it the Super Cloud, 'cause it's not just Multi-Cloud. It's completely different looking successes- >> Smart Cloud. >> It's not just vendors. It's also the customers turning into clouds themselves. You look at Goldman Sachs and- >> Yep. >> You know, I think every vertical will have its own power law of Cloud players in the future. We believe that to be true. We're still testing that assumption, but it's trending in when you got OPEX- >> [Ricky And Joseph] Right. >> Has to go to in-fund statement- >> Yeah. >> CapEx goes too. Thanks for the Cloud. All that's good, but there's a wave coming- >> Yeah. >> And we're trying to identify it. What do you guys see as this wave 'cause beyond Multi-Cloud and the obvious nature of that will end up happening as a state and what happens beyond that interoperability piece, that's a whole other story, and that's what everyone's fighting for, but everyone out in that ecosystem, it's a big wave coming. They've got their surfboards. They're ready to go. So what do you guys see? What is the next wave that everyone's jacked up about here? >> Well, I think that the Multi-Cloud is obviously at the epicenter. You know, if you look at the results that are coming in, a lot of our customers, this is what's leading the discussion and now we're in a position where, you know, we've brought many companies over the last few years. They're starting to come to fruition. They're starting to play a role in, you know, how we're moving forward. >> Yeah. >> Some of those are a bit more applicable to the commercial space. We're finding commercial customers that never bought from us before. Never. Hundreds and hundreds are coming through our partner networks every single quarter, you know? So brand new to VMware. The trick then is how do you nurture them? How do you encourage them? >> So new logos are comin' in. >> New logos are coming in all the time, all the time, from, you know, from across the ecosystem. It's not just the OEMs. It's all the way back- >> So the ecosystem's back of VMware. >> Unbelievably. So what are we doing to help that? There's two big things that we've announced in the recent weeks is that Partner Connect 2.0. When I talked to you about Multi-Cloud and what the (indistinct), you know, the customers are doing, you see that trend. Four, five different separate clouds that we've got here. 
The next piece is that they're changing their business models with the partners. Their services is becoming more and more apparent, et cetera, you know? And the use of other partners to do other services, deployment, or this stuff is becoming prevalent. Then you've got the distributors that I talked about with their, you know, their, then you route to market, then you route to business. So how do you encapsulate all of that and ensure your rewarding partners on all aspects of that? Whether it's deployment, whether it's test and depth, it's a points-based system we've put in place now- >> It's a big pie that's developing. The market's getting bigger. >> It's getting so much bigger. And then you help- >> I know you agree, obviously, with that. >> Yeah. Absolutely. In fact, I think for a long time we were asking the question of, is it going to be there or is it going to be here? Which was the wrong question. (indistinct cross talking) Now it's everything. >> Yeah. >> And what I think that, what we're seeing in the ecosystem, is that people are finding the spots that, where they're going to play. Am I going to be on the Edge? >> Yeah. >> Am I going to be on Analytics Play? Am I going to be, you know, Cloud Transition Play? There's a lot of players are now emerging and saying, we're- >> Yeah. >> We're, we now have a place, a part to play. And having that industry view not just of, you know, a commercial customer at that level, but the two of us are lookin' at Teleco, are looking at financial services, at healthcare, at manufacturing. How do these new ecosystem players fit into the- >> (indistinct) lifting. Everyone can see their position there. >> Right. >> We're now being asked for simplicity and talk to me about partner profitability. >> Yes. >> How do I know where to focus my efforts? Am I spread too thin? And, you know, that's, and my advice that the partner ecosystem out there is, hey, let's pick out spots together. Let's really go to, and then strategic solutions that we were talking about is a good example of that. >> Yeah. >> Sounds like composability to me, but not to go back- (laughing) Guys, thanks for comin' on. I think there's a big market there. I think the fog is lifted. People seeing their spot. There's value there. Value creation equals reward. >> Yeah. >> Simplicity. Ease of use. This is the new normal. Great job. Thanks for coming on and sharing. (cross talking) Okay. Back to live coverage after this short break with more day one coverage here from the blue set here in Moscone. (light corporate music)
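To make the Project Monterey discussion above concrete: the idea is that infrastructure services which normally consume host CPU cycles, such as networking, storage I/O and security filtering, can run on a DPU instead, returning those cycles to application workloads. The short Python sketch below only illustrates that placement idea; the service names, percentages, and the offload rule are invented for this example and are not the actual HPE/VMware implementation, which involves joint engineering across HPE ProLiant, the vSphere Distributed Services Engine, and the AMD Pensando DPU.

```python
# Hypothetical illustration of the DPU-offload idea discussed above:
# infrastructure services that normally consume host CPU cycles are moved
# to a DPU, freeing those cycles for application workloads.
# Service names and percentages are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Service:
    name: str
    host_cpu_pct: float   # share of host CPU the service consumes today
    offloadable: bool     # could it run on a DPU instead?

HOST_SERVICES = [
    Service("virtual-switching", 12.0, True),
    Service("storage-io-path", 8.0, True),
    Service("east-west-firewall", 6.0, True),
    Service("hypervisor-core", 5.0, False),
]

def plan_offload(services):
    """Split services into DPU-hosted and host-resident, and report freed CPU."""
    dpu = [s for s in services if s.offloadable]
    host = [s for s in services if not s.offloadable]
    freed = sum(s.host_cpu_pct for s in dpu)
    return dpu, host, freed

if __name__ == "__main__":
    dpu, host, freed = plan_offload(HOST_SERVICES)
    print("Run on DPU: ", [s.name for s in dpu])
    print("Stay on host:", [s.name for s in host])
    print(f"Host CPU returned to application workloads: ~{freed:.0f}%")
```

In practice the win shows up as host capacity: the cycles this sketch reports as freed are what the conversation describes being returned to data-rich, performance-sensitive applications.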
Said Ouissal, Zededa | VMware Explore 2022
>> Hey, everyone. Welcome back to San Francisco. Lisa Martin and John Furrier live on the floor at VMware Explore 2022. This is our third day of wall-to-wall coverage on theCUBE, but you know that 'cause you've been here the whole time. We're pleased to welcome a first-timer to theCUBE. Said Ouissal is here, the CEO and founder of Zededa. Said, welcome to the program. >> Thank you for having me. >> Talk to me a little bit about what Zededa does in edge. >> Sure. So Zededa is a company purely focused on edge computing. I started the company about five years ago to go after edge. So what we do is we help customers with orchestrating their edge, helping them to deploy, secure and monitor application services and devices at the edge. >> What's the business model for you guys? Let's get that out there. So you're targeting the edge, which is everything from telco to whatever. Yeah. What's the business model? Yeah. >> Maybe before we go there, let's talk about edge itself, 'cause edge is complex. There's a lot of companies; I always say nowadays, if you're not a cloud company, you're probably an edge company at this point. So we are focusing on something called the distributed edge. So, distributed edge: when you start putting tiny servers in environments like factory floors, solar farms, wind farms, even inside machines or well sites, et cetera. And a question that people always ask me is, why? Why would you want to put, you know, servers there? Servers are supposed to be in a data center, in the cloud. And the answer to the question actually is data gravity. So traditionally, wherever the data gets created is where your applications live. But as we're connecting more and more devices to the edge of the network, customers now are basically required to push the applications to the edge 'cause they can't move all the data to the cloud. So basically that's where we focus; people call it the far edge as well. You know, that's the term we've heard in the past as well. And what we do in our business model is provide customers a software-as-a-service solution where they can basically deploy and monitor these applications at these highly distributed environments. >> Data gravity comes up a lot and I want you to take a minute to explain the definition as it is today. People have used that term, you know, with big data, going back to 2010 when we were covering the Hadoop wave, which ended up becoming, you know, Databricks and Snowflake now. But a lot's changed. What does it mean to be data gravity? Does it mean the data stays local? Specifically describe and define what data gravity is. >> Yeah. So for me, data gravity is where you need to process the data, right? It's where the data usually gets created. So if you think about a web app, where does the data get created? Where people click on buttons, they interface with it, they upload content to it, et cetera. So that's where the data gravity is, and therefore that's where you do your analytics. That's where you do your visualization, processing, machine learning and all of those pieces. So it's really where that data gets created, that's where the data gravity, in my view, sits. >> What are some of the challenges and opportunities that data gravity presents to customers? >> Well, obviously I think every enterprise in this day is trying to take data and make it a competitive advantage, right?
Like faster decisions, better decisions, outcompete your competition by, you know, being first with a product or being first with a product with the future, et cetera. So, so I think, you know, if you're not a data driven enterprise by now, then I think the future may be a little bit bleak. >>Okay. So you're targeting the market distributed edge business model, SAS technology, secret sauce. What's that piece. >>Yeah. So that's, that's what the interesting part comes in. I think, you know, if you kind of look at the data center in the cloud, we've had these virtualization and orchestration stacks create, I mean, we're here in VMware Explorer. And as an example, what we basically, what we saw is that the edge is so unique and so different than what we've seen in the data center, in the cloud that we needed to build a complete brand new purpose-built illustration and virtualization solution. So that's really what we, we set off to do. So there's two components that we do. One end is we built a purpose-built edge operating system for the edge and we actually open sourced it. And the reason we opensource it, we said, Hey, you know, edge is so diverse. You know, depending on the environment you're running in a machine or in a vehicle or in a well site, you have different hardware, different networks, different applications you need to enable. >>And we will never be able to support all of them ourselves. As a matter of fact, we actually think there's a need for standardization at the edge. We need to kind of cut through all these silos that have been created traditionally from the embedded way of thinking. So we created basically an open source project in the Linux foundation in LFS, which is a sister organization through the CNCF it's called project Eve. And the idea is to create the Android of the edge, basically what Android became for mobile computing, an a common operating system. So you build one app. You can run in any phone in the world that runs Android, build an architecture. You build one app. You can run in any Eve powered node in the world, >>So distributed edge and you get the tech here, get the secret sauce. We'll get more into that in a second, but I wanna just tie one kick quick point and get your clarification on edge is becoming much more about the physical side too. I mean, absolutely. So when you talk about Android, you're making the reference of a phone. I get that's metaphor to what you're doing at the edge, wind farms, factories, alarms, light bulbs, buildings. I mean, that's what you're talking about, right? Yes. We're getting down to that very, >>Very physical, dark distributed locations. >>We're gonna come back to the CISO CSO. We're gonna come back to the CISO versus CSO question because is the CISO or CIO or who runs that anyway? So that's true. What's the important thing that's happening because that sounds like old OT world, like yes. Operating technology, not it information technology, is it a complete reset of those worlds or is it a collision? >>It's a great question. So what we're seeing is first of all, there is already compute in these environments, industrial PCs of existed well beyond, you know, an industrial automation has been done for many, many decades. The point is that that stuff has been done. Collect data has been collected, but never connected, right? So with edge computing, we're connecting now this data from an industrial machine and industrial process to the cloud, right? 
And one of the problems is it's data that comes of that industrial process too much to upload to the cloud. So I gotta analyze, analyze it locally. So one of the, the things we saw early on in edge is there's a lot of brownfield. Most of our customers today actually have applications running on windows and they would love to make in Linux and containers and Kubernetes, but it took them 20, 30 years to build those apps. And they basically are the money makers of the enterprise. So they are in a, in a transitionary phase and they need something that can take them from the brown to the Greenfield. So to your point, you gotta support all of these types of unique brownfield applications. >>So you're, you're saying I don't really care if this is a customer, how you get the data, you wanna start new start fresh. That's cool. But if you wanna take your old data, you'll >>Take that. Yeah. You don't wanna rebuild the whole machine. You're >>Just, they can life cycle it out on their own timetable. Yeah. >>So we had to learn, first of all, how do we take and lift and shift windows based industrial application and make it run at the edge on, on our architecture. Right? And then the second step is how do we then Sen off that data that this application is generating and do we fuse it with cloud native capability? Like, >>So your cloud, so your staff is your open source that you're giving to the Linux foundation as part of that Eve project that's available to everybody. So they can, they can look at the code, which is great by the way. Yeah. So people wanna do that. Yeah. Your self source, I'm assuming, is your hardened version with support? >>Well, we took what we took, what the open source companies did, opensource companies traditionally have sold, you know, basically a support model around the open source. We actually saw another problem. Customers has like, okay, now I have this node running and I can, you know, do this data analytics, but what if I have 15 or 20,000 of these node? And they're all around the world in remote locations on satellite links or wireless connectivity, how do I orchestrate them? So we actually build an orchestration service for these nodes running this open source >>Software. So that's a key secret sauce right there. >>That is the business model that taking open store and a lot. >>And you're taking your own code that you have. Okay. Got it. Cool. And then the customer's customer piece is, is key. So that's the final piece, I guess who's using it. >>Yeah. Well, and, >>And, and one of the business outcomes that they're achieving. Oh >>Yeah. Well, so maybe start with that first. I mean, we are deployed in customers in all and gas, for instance, helping them with the transition to renewable energy, right? So basically we, we have customers for instance, that deploy us in the, how they drill Wells is one use case and doing that better, faster, and cheaper and, and less environmental impacting. But we also have customers that use us in wind farms. We have, and solar farms, like we, one of the leading solar energy companies in the world is using us to bring down the cost of power by predicting failures ahead of time, for >>Instance. And when you're working with customers to create the optimal solution at the distributed edge, who are you working with in, within an organization? Yeah. >>It's usually a mix of OT and it people. Okay. So the OT people typically they're >>Arm wrestling, well, or they're getting along, actually, >>I think they're getting along very well. Okay, good. 
But they also agree that they have to have swim lanes. The it folks, obviously their job is to make sure, you know, everything is secure. Everything is according to the compliance it's, it's, you know, the, the best TCO on the infrastructure, those type of things, the OT guy, they, they, or girl, they care about the application. They care about the services. They care about the support new business. So how can you create a model that too can coexist? And if you do that, they get along really well. >>You know, we had an event called Supercloud and@theurlsupercloud.world, if you're watching check it out, it's our version of what we think multicloud will merge into including edge cuz edge is just another node in the, in the, in the network. As far as we're concerned, hybrid is the steady state. That's distributed computing on premise, private cloud, public cloud. We know what that looks like. People love that things are happening. Edge is like a whole nother new area. That's blossoming and with disruption, yeah. There's a lot of existing market and incumbents that need to be disrupted. And there's also a new capabilities that are coming that we don't yet see. So we're seeing it with the super cloud idea that these new kinds of clouds are emerging. Like there could be an edge cloud. Yeah. Why isn't there a security cloud, whereas the financial services cloud, whereas the insurance cloud, whereas the, so these become super clouds where the CapEx could be done by the Amazon, whatnot you've been following them is edge cloud. Can you make that a cloud? Is that what you guys are trying to do? And if so, what does that look like? Cause we we're adding a new track to our super cloud site. I mentioned on edge specifically, we're trying to figure out you and if you share your opinion, it'd be great. Can the E can edge clouds exist and be run by companies? Yeah. Or is that what you guys are trying to do? >>I, I, I mean, I think first of all, there is no edge without cloud, right? So when I meet any customer who says, Hey, we're gonna do edge without cloud. Then I'm like, you're probably not gonna do edge computing. Right. And, and the way we built the company and the way we think about it, it's about extending the cloud experience all the way into these embedded distributed environments. That's really, I think what customers are looking for, cuz customers love the simplicity of the cloud. They love the ease of use agility, all of that greatness. And they're like, Hey, I want that. But not in a, you know, in an Amazon or Azure data center. I want that in my factories. I want that in my wealth sites, in my vehicles. And that's really what I think the future >>Is gonna. And how long have you guys been around? What's the, what's the history of the company because you might actually be that cloud. Yeah. And are you on AWS or Azure? You're building your own. What's the, >>Yeah. Yeah. So >>Take it through the, the architecture because yeah, yeah, sure. You're a modern startup. I mean you gotta, and the edges you're going after you gotta be geared up. Yeah. To win that. Yeah. >>So, so the company's about five years old. So we, when we started focusing on edge, people didn't necessarily talk as much about edge. We kind of identified the it's like, you know, how do you find a black hole in, in the universe? Cuz you can't see it, but you sort of look around that's why you in it. 
And so we were looking at it like, there's something gonna happen here at the edge of the network, because everybody's saying we're connecting all these devices, and uploading all the data to the cloud is never gonna work. My background is networking. I worked at companies like Juniper and Ericsson and ran several products there, so I know how internet networks are built. And it was very evident to me it's not gonna be possible. My co-founders come from open source companies like Pivotal and Cloudera. My other co-founder was an engineer at Sun Microsystems who built the first network stack in the Solaris operating system. So a lot of experience kind of came together to build this. >> Yeah. Cloudera is a big deal. That's where theCUBE started, by the way. Yeah. >> Yeah. So we have, I think, a good view on the stack, the cloud stack, and therefore a good view of what the edge stack needs to look like. And then I think, you know, to answer your other question, our orchestration service runs in the cloud. We actually are a multi-cloud company, so we offer customers choice in where they want to orchestrate the nodes from. The nodes themselves never sit in a data center; they're always highly embedded. We have customers putting them in machines or inside these factory lines, et cetera. >> Are you running your SaaS on Amazon Web Services, or which cloud? >> We're running it on several clouds, including Amazon, pretty much all of the clouds. So some customers say, hey, I'd prefer to be on the Amazon side, and other customers say, I wanna be on the Azure side. >> And you leverage their CapEx on that side. Yes. On behalf of, yeah. >> Yeah. Yes. Yes. But the majority of the customer data, and all the data that the nodes process, the customers send it to their clouds. They don't send it to us. We don't get a copy of the camera feed analytics or the machine data. We actually decouple those, though. So basically the production data goes straight to the customer's cloud, and that's why they love us. >> And they choose that; they can control their own destiny. >> Yeah. So we separate the management plane from the data plane at the edge. Yeah. >> That's a good call. >> Actually, yeah, that was another very important part of the architecture early on, 'cause customers don't want us to see their, you know, highly confidential production data, and we don't wanna have it either. So- >> We had a great chat with Chris Wolf, who works with Kit Colbert, about control plane, data plane. So that seems to be the trend: data plane, customers want full management of that. Yeah. Control plane, maybe give multiple- >> Versions. Yeah. Yeah. So what our cloud stores is data about the apps, their behavior, the networking, the security, all of that. That's what we store in our cloud, and then customers can access that and monitor it. But the actual machine data goes somewhere else. >> Here we are at VMware Explore. Talk a little bit about the VMware relationship. You just had some big news the other day. >> Yeah. So two days ago we actually made a big announcement with VMware. We signed an OEM agreement with VMware, so we're now part of VMware's Edge Compute Stack. So VMware customers, as they start using the recently announced Edge Compute Stack 2.0 that was announced here, basically it's powered by Zededa technology. So it's a really exciting partnership. As part of this, we're actually building integrations with the VMware organization's products. So that's basically now extending to more, you know, other groups inside VMware.
>>So what's the value in it for VMware customers. >>Yeah. So I think the, the, the benefit of, of VMware customers, I think cus VMware customers want that multi-cloud multi edge orchestration experience. So they wanna be able to deploy workloads in the cloud. They wanna deploy the workloads in the data center. And of course also at the edge. So by us integrating in that vision customers now can have that unified experience from cloud to edge and anywhere in between. >>What's the big vision that you see happening at the edge. I mean, a lot of the VMware customers here, they're classic it that have evolved into ops now, dev ops. Now you've got second data ops coming. The edge is gonna right around the corner for them. They're dealing with it now, probably just kicking the tires, towing the water kind of thing. Where do you see the vision going? Cuz now, no matter what happens with VMware, the Broadcom, this wave is still here. You got AWS, got Azure, got Google cloud, you got Oracle, Alibaba internationally. And the cloud native surges here. How do you see that disrupting the existing edge? Because let's face it the O some of those OT players, a little bit old and antiquated, a little bit outdated. I mean, I was talking to a telco person. They, they puked the word open source. I mean, these people are so dogmatic on, on their architecture. Yeah. They're gonna get disrupted. It's a matter of time. Yeah. Where's the new guard come in. How do you see the configuration changing in the landscape? Because some people will cross over to the right side of the street here. Yeah. Some won't yeah. Open circle. Dominate cloud native will be key. Yeah. >>Well, I mean, I think, again, let's, let's take an example of a vertical that's heavily disrupted now as the automotive market, right? The, so look at Tesla and look at all these companies, they built, they built software first cars, right? Software, first delivery of capabilities and everything else. And the, and the incumbents. They have only two options, right? Either they try to respond by adopting open source cloud, native technologies. Like the, these new entrants have done and really, you know, compete with them at that level, or they can become commodity. Right. So, and I think that's the customers we're seeing the smart customers go like, we need to compete with these guys. We need to figure out how to take this technology in. And they need partners like us and partners like VMware for them. >>Do you see customers becoming cloud super cloud players? If they continue to keep leveraging the CapEx of the clouds and focus all their operational capital on top line revenue, generating activities. >>Yeah. I, so I think the CapEx model of the cloud is a great benefit of the cloud, but I think that is not, what's the longer term future of the cloud. I think the op the cloud operating model is the future. Like the agility, the ability imagine embedded software that, you know, you do an over the year update to fix a bug, but it's very hard to make a, an embedded device smarter over time. And then imagine if you can run cloud native software, you can roll out every two weeks new features and make that thing smarter, intelligent, and continue to help you in your business. That I think is what cloud did ultimately. And I think that is what really these customers are gonna need at their edge. >>Well, we talked about the value within it for customers with the VMware partnership, but what are some of your expectations? 
Obviously, this is a pretty powerful partnership for you guys. Yeah. What are some of the things that you're expecting that this is gonna drive? >> Yeah, so we have always operated at the more OT layer, distributed organizations in retail, energy, industrial, automotive. Those are the verticals, so we've developed, I think, a lot of experience there. What we're seeing as we talk to those customers is they obviously have IT organizations, and the IT organizations say, hey, that's great, you're looking at edge computing, but how do we tie this into the existing investments we made with VMware? And how do we kind of take that also to this new environment? And I think that's the expectation I have: I think we will be able to talk to the IT folks and say, hey, you can actually talk to the OT person, and both of you will speak the same language. You probably will both standardize on the same architecture, and you'll be together deploying and enabling this new agility at the edge. >> What are some of the next things coming up for Zededa and the team? >> Well, we've had a really amazing few quarters. We just closed a Series B round, so the company's raised over 55 million so far, and we're growing very rapidly. We've opened up new international offices. I would say the early customers that we started deploying with a while back are now going into mass-scale deployment. So we have deployments underway in, you know, the tens to hundreds of thousands of nodes at certain customers, in amazing environments. And so, for us, it's continuing to prove the product in more and more verticals. Our product is really built for the largest of the largest. So, you know, for the size of the company we are, we have a high concentration of Fortune 500, Global 500 customers, and some of them even invested in our rounds recently. So we've been really, you know, honored with that support. >> Well, congratulations. Good stuff. Edge is popping. All right, thank you. >> Thank you so much for joining us, talking about what you're doing in distributed edge, what's in it for customers, the VMware partnership, and by the way, congratulations on- >> That too. Thank you. Thank you so much. Nice to meet you. >> Thank you. All right. Nice to meet you as well. For our guest and John Furrier, I'm Lisa Martin. You're watching theCUBE live from VMware Explore '22. John and I will be right back with our next guest.
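A minimal sketch of the management-plane and data-plane separation described in this conversation: the orchestration service in the cloud only ever receives application and health metadata, while production data such as camera feeds and machine telemetry is routed to the customer's own cloud. The endpoints, field names, and payloads below are hypothetical illustrations, not Zededa's actual API.

```python
# Hypothetical sketch of the management-plane / data-plane split described above:
# the orchestrator sees app metadata and node health only, while production data
# goes to the customer's own cloud. All names and endpoints are invented.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class EdgeNode:
    node_id: str
    apps: List[str] = field(default_factory=list)

    def health_report(self) -> Dict:
        # Management-plane payload: config and state only, no production data.
        return {"node": self.node_id, "apps": self.apps, "status": "healthy"}

    def production_batch(self) -> Dict:
        # Data-plane payload: never sent to the orchestrator.
        return {"node": self.node_id, "samples": [0.97, 1.02, 0.99]}

def send(url: str, payload: Dict):
    # Stand-in for an HTTPS call; printing keeps the sketch self-contained.
    print(f"POST {url} -> {payload}")

def sync(nodes: List[EdgeNode], orchestrator_url: str, customer_cloud_url: str):
    for n in nodes:
        send(orchestrator_url, n.health_report())       # management plane
        send(customer_cloud_url, n.production_batch())  # data plane

if __name__ == "__main__":
    fleet = [EdgeNode("well-site-001", ["leak-detect"]),
             EdgeNode("turbine-042", ["vibration-ml"])]
    sync(fleet, "https://orchestrator.example/api/v1/nodes",
         "https://customer-cloud.example/ingest")
```

The design choice this illustrates is the one the guest calls out: the vendor never holds confidential production data, which keeps customers in control of their data while still allowing fleet-wide orchestration of many thousands of nodes.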
*****NEEDS TO STAY UNLISTED FOR REVIEW***** Ricky Cooper & Joseph George | VMware Explore 2022
(bright intro music) >> Welcome back everyone to VMware Explore '22. I'm John Furrier, host of the key with David Lante, our 12th year covering VMware's user conference, formerly known as VM-World now rebranded as VMware Explore. You got two great Cube alumni coming on the Cube. Ricky Cooper, SVP worldwide partner commercial VMware. Great to see you, thanks for coming on. >> Thank you. >> We just had a great chat-- >> Good to see you again. >> At HPE discover. And of course, Joseph George, Vice President of Compute Industry Alliances. Great to have you on. Great to see you. >> Great to see you, John. >> So guys, this year is very curious, VMware, a lot going on. The name change of the event. Big move, Bold move. And then they changed the name of the event. Then Broadcom buys them. A lot of speculation, but at the end of the day, this conference... Kind of people were wondering what would be the barometer of the event. We were reporting this morning on the keynote analysis. Very good mojo in the keynote. Very transparent about the Broadcom relationship. The expo floor last night was buzzing. I mean, this is not a show that's looking like it's going to be, you know, going down. This is clearly a wave. We're calling it super cloud, multi-cloud's their theme. Clearly the cloud's happening. Not to date ourselves, but 2013 we were discussing on the-- >> We talked about that, yeah. >> HPE Discover about DevOps infrastructure as code. We're full realization now of that. This is where we're at. You guys had a great partnership with VMware and HPE. Talk about where you guys see this coming together because the customers are refactoring, they are looking at cloud native, the whole Broadcom visibility to the VMware customer bases activated them. They're here and they're leaning in. What's going on? >> Yeah absolutely, we're seeing a renewed interest now as customers are looking at their entire infrastructure, bottoms up all the way up the stack and the notion of a hybrid cloud, where you've got some visibility and control of your data and your infrastructure and applications. Customers want to live in that sort of a cloud environment. And so we're seeing a renewed interest, a lot of conversations we're having with customers now, a lot of customers committing to that model, where they have applications and workloads running at the edge in their data center and in the public cloud in a lot of cases. But having that mobility, having that control, being able to have security in their own control. There's a lot that you can do there. And obviously partnering with VMware. We've been partners for so long. >> 20 years, at least. >> At least 20 years. Back when they invented stuff. They were inventing way-- >> VMware's got a very technical culture, but Ricky, I got to say that we commented earlier when Ragu was on the CEO now CEO, I mean legendary product guy, set the trajectory to VMware, everyone knows that. I can't know whether it was VMware or HP, HP before HPE coined Hybrid. Cause you guys were both on, I can't recall Dave, which company coined it first, but it was either one of you guys. Nobody else was there. >> It was the partnership. (men chuckle) >> Hybrid Cloud I had a big thing with Pat Gelsinger, Dave. Remember when he said he got in my grill on theCube, live, but now you see. >> You focus on that multi-cloud aspect. So you've got a situation where our customers are looking at multi-cloud and they're looking at it, not just as a flash in the pan. This is here for five years, 10 years, 20 years. 
Okay. So what does that mean then to our partners and to our distributors, you're seeing a whole seed change. You're seeing partners now looking at this. So look at the OEMs, the ones that have historically been vSphere customers and now saying they're coming in, drove saying, okay, what is the next step? Well, how can I be a multi-cloud partner with you? How can I look at other aspects that we're driving here together? So GreenLake is a great example. We keep going back to GreenLake and we are partaking in GreenLake at the moment. The real big thing for us is going to be right. Let's make sure that we've got the agreements in place that support this Sasson subscription motion going forward. And then the sky's the limit for us. >> You're plugging that right into. >> Well, here's why, here's why, so customers are loving the fact that they can go to a public cloud and they can get an SLA. They come to an on-premise, you've got the hardware, you've got the software, you've got the guys on board to maintain this through its life cycle. I mean, this is complicated stuff. Now we've got a situation where you can say, Hey, we can get an SLA on premise. >> And I think what you're seeing is it's very analogous to having a financial advisor, just manage your portfolio. You're taking care of just submitting money. That's really a lot of what a lot of the customers have done with the public cloud. But now a lot of these customers are getting savvy. They have been working with VMware technologies and HPE for so long. they've got expertise. They know how they want their workloads architected. Now we've given them a model where they can leverage the cloud platform to be able to do this, whether it's on premise, the edge or in the public cloud, leveraging HPE GreenLake and VMware. >> Is it predominantly or exclusively a managed service or do you find some customers saying, hey, we want to manage ourself. What are you seeing is the mix there? >> It is not predominantly managed services right now. We're actually, as we are growing last time we talked at HPE discover. We talked about a whole bunch of new services that we've added to our catalog. It's growing by leaps and bounds. A lot of folks are definitely interested in the pay as you go, obviously the financial model, but are now getting exposed to all the other management that can happen. There are managed services capabilities, but actually running it as a service with your systems on-prem is a phenomenal idea for all these customers. And they're opening their eyes to some new ways to service their customers better. >> And another phenomenon we're seeing there is where partners such as HPA, using other partners for various areas of the services implementation as well. So that's another phenomenon. You're seeing the resale motion now going into a lot more of the services motion. >> It's interesting too. I mean the digital modernization that's going on, the transformation whatever you want to call it, is complicated, that's clear. One of the things I liked about the keynote today was the concept of cloud chaos, because we've been saying quoting Andy Grove, Next Intel, let chaos rain and rain in the chaos. And when you have inflection points, complexity, which is the chaos, needs to be solved and whoever solves it and kicks the inflection point, that's up and to the right. >> So prime idea right here. So. >> GreenLake is, well. >> Also look at the distribution model and how that's changed a couple of points on a deal. 
Now they're saying, I'll be your aggregator. I'll take the strain and I'll give you scale. I'll give you VMware scale for all of the various different partners, et cetera. >> Yeah. So let's break this down, because this is, I think, a key point. So complexity is good, but the old model in the enterprise market was, you solve complexity with more complexity and everybody wins. Oh yeah, we're locked in. That's not what the market wants. They want self-service, they want as a service, they want easy: developer first, security, data ops. DevOps is already in the cycle. So they're going to want simpler, easier, faster. >> And this is kind of why, I'll say, for the big announcement today here at VMware Explore around the VMware vSphere distributed services engine, Project Monterey that we've talked about for so long: HPE and VMware and AMD, with the Pensando DPU, actually worked together to engineer a solution for exactly that. The capabilities are fairly straightforward in terms of the technologies, but actually doing the work to do integration, joint engineering, to make sure that this is simple and easy and able to be run on HPE GreenLake. >> We invested in Pensando, right? We are investors. >> What's the benefit of that? That's a great point you made. What's the value to the customer, bottom line? That deep co-engineering, co-partnering, what does it deliver that others don't? >> Yeah. Well, I think one example would be, a lot of vendors can say we support it. >> Yep. That's great. That's actually a really good move, supporting it. It can be resold. That's another great move. I'm not mechanically inclined to where I would go build my own car. I'll go to a dealership and actually buy one where I can press the button and I can start it and I can do what I need to do with my car. And that's really what this does: the engineering work that's gone on between our two companies and AMD Pensando, as well as the business work to make that simple and easy, to make that transaction work. And then to be able to make it available as a service is really why it's such a winner here... >> But it's also a lower cost out of the box. Yes. So you get in, whatever it's called, a 20%. Okay. But there's nuance, because you're also on a new technology curve and you're able to absorb modern apps. We use that term as a promo, but when I say modern apps, I mean data-rich apps, things that are more AI driven. Not the conventional, not that people aren't doing, you know, SAP and CRM, they are. But there's a whole slew of new apps that are coming in that traditional architectures aren't well suited to handle from a price performance standpoint. This changes that, doesn't it? >> Well, you think also of going to the next stage, which is the go-to-market between the two organizations. Because before, at the moment, HPE is running off doing various different things; we were running off too. Again, that chaos that you're talking about in cloud chaos, you've got go-to-market chaos. But by simplifying four or five things, what are we going to do really well together? How do we embed those in GreenLake and be known in the marketplace for these solutions? Then you get an organization that's really behind the go-to-market. You can help with sales, activation, the enablement. And then we benefit from the scale of HPE. >> Yeah. What are those solutions, I mean... Is it IaaS? Is it compute, storage? Is it specific SAP? Is it VDI? What are you seeing out there?
>> So right now, for this specific technology, we're educating our customers on what that could be. And at its core, this solution allows customers to take services that normally and traditionally run on the compute system and run them on a DPU now, with Project Monterey. And this is now allowing customers to think about where their use cases are. So rather than going and saying, use it for this, we're allowing our customers to explore and say, okay, here's where it makes sense. Where do I have workloads that are using a lot of compute cycles on services at the compute level? That could be somewhere else, like networking as a great example, and allowing more of those compute cycles to be available. So where there are performance requirements for an application, where there's a timely response needed for results to be able to take action on, to be able to get insight from data really quick, those are places where we're starting to see the services moving onto something like a DPU, and that's where this makes a whole lot more sense. >> Okay, so to get this right: you got the hybrid cloud, right? You got GreenLake and you got the distributed engine. What's that called? >> It's HPE ProLiant with VMware vSphere. >> vSphere. That's the compute, distributed. Okay. So how do you guys implement that with the customer? All three at the same time, or do they mix and match? How's that work? >> All three of those components. So the beauty of the HPE ProLiant with VMware vSphere distributed services engine, also known as Project Monterey for those that are keeping notes at home, is that it's already pre-engineered, so we've already worked through all the mechanics of how you would have to do this. So it's not something you have to go figure out how you build, get deployed, work through those details. That's already done. It is available through HPE GreenLake, so you can go and actually get it as a service, in partnership with our friends here at VMware. And if you're familiar and comfortable with all the things that HPE ProLiant has done from a security perspective, from a reliability perspective, trusted supply chain, all those sorts of things, you're getting all of that with this particular solution. >> Sumit Dhawan had a great quote on theCube just an hour or so ago. He said you have to be early to be first. Love that quote. Okay. So you were first, you were probably a little early, but do you have a lead? I know you're going to say yes. Okay. Let's just assume that. Relative to the competition, how do you know? How do you determine that? >> If we have a lead or not? >> Yeah, if you lead, if you're the best. >> We go to the source of the truth, which is our customers. >> And what do they tell you? What do you look at and say, okay, now, I mean, when you have that honest conversation and say, okay, we are, we're first, we're early, we're keeping our lead, what are the things that you look at as indicators? >> I'll say it this way. We've been in a lot of businesses where we do compete head-to-head in a lot of places, and we know how that sales process normally works. We're seeing a different motion from our customers. When we talk about HPE GreenLake, there's not a lot of back and forth on, okay, well, let me go shop around. It is HPE GreenLake, let's talk about how we actually build this solution. >> And I can tell you from a VMware perspective, our customers are asking us for this the other way around. So that's a great sign.
It's that, hey, we need to see this partnership come together in GreenLake. >> Yeah. Okay. So you would concur with that? >> Absolutely. So third party validation. >> From Switzerland. Yeah. >> Bring it with you over here. >> We were talking about this earlier on. I mean, as I mentioned earlier, there's some contractual things that you've got to get in place as you are going through this migration into SaaS and subscription, et cetera. And so we are working as hard as we can to make sure, hey, let's really get this contract in place as quickly as possible; it's what the customers are asking us. >> We've been talking about this for years. You know, you see containers being so popular, now Kubernetes becoming that layer bringing things together. It's the old adage that Amazon used to coin, and Andy Jassy: they do the undifferentiated heavy lifting. A lot of that's now cloud operations. Underneath, it's infrastructure as code to the developer, right. That's at scale. >> That's right. >> And so you got a lot of heavy lifting being done with GreenLake, which is why there's no objections, probably. >> Right, absolutely. >> What's the choice? What do you even shop? >> Yeah. There's nothing to shop around. >> Yeah, exactly. And that is really icing on the cake that we've been building for quite some time. There is an understanding in the market that what we do with our infrastructure is hardened from a reliability and quality perspective. Times are tough right now, supply chain issues, all that stuff, we've talked about it. But at HPE, we don't skimp on quality. We're going to spend the dollars and time on making sure we've got reliability and security built in. It's really important to us. >> We got a great use case: the storage team, they were provisioning with containers, storage as a service, instantly. We're seeing with you guys, with VMware, your customers bringing in a lot of that into the mix as well. I got to ask, 'cause every event we talk about AI and machine learning; automation and DevOps are now infiltrating in with the CI/CD pipeline; security and data become a big conversation. >> Agreed. >> Okay. So how do you guys look at that? Okay, you sold me on GreenLake. I've been a big fan from day one. Now it's got maturity on it. I know it's going to get a lot more headroom to do there. It's still a lot of work to do, but directionally it's pretty accurate. It's going to be a success. There's still concerns about security, the data layer. That's agnostic of environment: private cloud, hybrid, public and edge. So that's important, and security has got a huge surface area. These are a work in progress. How do you guys view those? >> I think you've just hit the nail on the head. I mean, I was in the press and journalist meetings yesterday, and our answer was exactly the same. There is still so much work that can be done here. And I don't think anybody is really emerging as a true leader. It's just a continuation of trying to get that right, because it is what is the most important thing to our customers, and the industry is really sort of catching up to that. >> And when you start talking about privacy and when you... It's not just about company information, it's about individuals' information. It's about information that, if exposed, actually could have real impact on people. So it's more than just an IT problem. It is actually, and from HPE's perspective, security starts from when we're picking our suppliers for our components.
There are processes that we put into our entire trusted supply chain, from the factory on the way up. I liken it to my golf swing. I slice, right, like you wouldn't believe. But when I go to the golf pros, they start me back at the mechanics, the foundational pieces: here's where the problems are, and start working on that. So my view, our view, is if your infrastructure is not secure, you're going to have troubles with security as you go further up. >> Stay in the sandbox, so to speak; the driving range, on the golf analogy there. I love that. Talk about supply chain security real quick, because you mentioned supply chain on the hardware side. You're seeing a lot of open source, and supply chain in software, trusted software. How does GreenLake look at that? How do you guys view that piece of it? That's an important part. >> Yeah, security is one of the key pillars that we're actually driving as a company right now. As I said, it's important to our customers as they're making purchasing decisions. And we're looking at it from the infrastructure all the way up to the actual service itself. And that's the beauty of having something like HPE GreenLake: we don't have to pick whether it's the infrastructure, or the middleware, or the top-of-stack application. We can look at all of it. Yeah, it's all of it that matters. >> Question on the ecosystem posture. So, I remember when HP was one company, and the GSIs were a little weird with HP because of EDS; you know, they had Data Protector, so we weren't really chatting up Veeam at the time. And as soon as the split happened, the ecosystem exploded. Now you have a situation where you've got Broadcom acquiring VMware. You guys are a big Broadcom customer. Has your attitude changed, or has it not, because, oh, we meet where the customers are, you've always said that, but have you leaned in more? I mean, culturally, is HPE now saying, hmm, now we have some real opportunities to partner in new ways, that we don't have to sleep with one eye open, maybe? >> So I would say, first of all, VMware and HPE, we've got a variety of different partners; we always have. Well before any Broadcom announcement came along, we've been working with a variety of partners, and that hasn't changed. And if your question is, has our posture toward VMware changed at all, the answer is absolutely not. We believe in what VMware is doing. We believe in what our customers are doing with VMware, and we're going to continue to work with VMware and partner with you. >> And of course we had to spin out ourselves in November of last year; I worked on the whole Dell piece. >> But you still had the same chairman. >> But since then, I think what's really become very apparent, and it's not just with HPE but with many of our partners, many of the OEM partners, is that the opportunity in front of us is vast, and we need to rely on each other to help us solve the customer problems that are out there. So there's a willingness to overlook some things that in the past may have been barriers. >> But it's important to note also that it's not that we have not had history, right? We've got over 200,000 joint customers. >> Hundreds of millions of dollars of business. >> Over 10,000, a 100,000, channel partners that we have in common. Numerous, numerous... >> And independent of the whole Broadcom overhang there, there's the ecosystem floor. Yeah, the expo floor. I mean, it's vibrant. I mean, there's clearly a wave coming.
Ricky, we talked about this briefly at HPE Discover. I want to get an update from your perspective, both of you, if you don't mind weighing in on this. Clearly there's a wave; we're calling it super cloud, 'cause it's not just multi-cloud, it's a completely different looking success. >> Smart Cloud. >> It's not just vendors. It's also the customers turning into clouds themselves. You look at Goldman Sachs. I think every vertical will have its own power law of cloud players in the future. We believe that to be true. We're still testing that assumption, but it's trending in, when you've got OPEX going into the income statement and CapEx going away thanks to the cloud. All that's good, but there's a wave coming and we're trying to identify it. What do you guys see as this wave? Because beyond multi-cloud, and the obvious nature of that, which will end up happening as a state, what happens beyond that interoperability piece? That's a whole nother story, and that's what everyone's fighting for. But everyone out in that ecosystem, there's a big wave coming. They got their surfboards. They're ready to go. So what do you guys see? What is the next wave that everyone's jacked up about here? >> Well, I think the multi-cloud is obviously at the epicenter. If you look at the results that are coming in, a lot of our customers, this is what's leading the discussion. And now we're in a position where we've bought many companies over the last few years; they're starting to come to fruition. They're starting to play a role in how we're moving forward. Some of those are a bit more applicable to the commercial space. We're finding commercial customers who've never bought from us before; hundreds and hundreds are coming through our partner networks every single quarter. So, brand new to VMware. The trick then is, how do you nurture them? How do you encourage them? >> So new logos are coming in? >> New logos are coming in all the time, all the time, from across the ecosystem. It's not just the OEMs, it's all the way back. >> So the ecosystem's back for VMware. >> Unbelievably. So what are we doing to help that? There's two big things that we've announced in recent weeks. One is Partner Connect 2.0. When I talk to you about multi-cloud and the multiple clouds the customers are doing, you see that trend: four, five different, separate clouds that we've got here. The next piece is that they're changing their business models with the partners. Their services are becoming more and more apparent, et cetera, and the use of other partners to do the services deployment and all this stuff is becoming prevalent. Then you've got the distributors that I talked about there. Then you route to market, then you route to business. So how do you encapsulate all of that and ensure you're rewarding partners on all aspects of that? Whether it's deployment, whether it's test and dev, it's a points-based system we've put in place now. >> It's a big pie that's developing; the market's getting bigger. >> It's getting so much bigger, and that helps. >> You agree, obviously, with that? >> Yeah, absolutely. In fact, I think for a long time we were asking the question of, is it going to be there or is it going to be here? Which was the wrong question; now it's everything. Yes. And what I think we're seeing in the ecosystem is people are finding the spots where they're going to play. Am I going to be on the edge? Am I going to be an analytics play? Am I going to be a cloud transition play? A lot of players are now emerging and saying, we now have a place, a part to play.
And having that industry view, not just of a commercial customer at that level, but the two of us are looking at telco, are looking at financial services, at healthcare, at manufacturing. How do these new ecosystem players fit into it? >> The fog is lifting; everyone can see their position there. >> We're now being asked for simplicity: talk to me about partner profitability. How do I know where to focus my efforts? Am I spread too thin? And my advice to the partner ecosystem out there is, hey, let's pick our spots together. Let's really go at it, and the strategic solutions that we were talking about are a good example of that. >> Sounds like composability to me, but we won't go back there, guys. Thanks for coming on. I think there's a big market there. I think the fog is lifted, people are seeing their spot, there's value there. Value creation equals reward. Yeah. Simplicity, ease of use. This is the new normal. Great job. Thanks for coming on and sharing. Okay, back with live coverage after this short break, with more day one coverage here from the blue set in Moscone.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ricky Cooper | PERSON | 0.99+ |
Joseph George | PERSON | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
GreenLake | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
five years | QUANTITY | 0.99+ |
David Lante | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
OPEX | ORGANIZATION | 0.99+ |
2013 | DATE | 0.99+ |
Goldman Sachs | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Ricky | PERSON | 0.99+ |
Four | QUANTITY | 0.99+ |
20% | QUANTITY | 0.99+ |
Andy Grove | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
CapEx | ORGANIZATION | 0.99+ |
two companies | QUANTITY | 0.99+ |
Broadcom | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
10 years | QUANTITY | 0.99+ |
20 years | QUANTITY | 0.99+ |
John | PERSON | 0.99+ |
four | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Sumit Dhawan | PERSON | 0.99+ |
Moscone | LOCATION | 0.99+ |
five things | QUANTITY | 0.99+ |
HPA | ORGANIZATION | 0.99+ |
two organizations | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Joseph George | PERSON | 0.99+ |
Switzerland | LOCATION | 0.99+ |
AMD Pensando | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
Pensando | ORGANIZATION | 0.98+ |
one example | QUANTITY | 0.98+ |
HPE Discover | ORGANIZATION | 0.98+ |
12th year | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
One | QUANTITY | 0.98+ |
over 10,000 | QUANTITY | 0.98+ |
Ragu | PERSON | 0.98+ |
over 200,000 customers | QUANTITY | 0.98+ |
two big things | QUANTITY | 0.97+ |
last night | DATE | 0.96+ |
VSphere | TITLE | 0.96+ |
this year | DATE | 0.96+ |
Edward Naim, AWS | AWS Storage Day 2022
[Music] >> Welcome back to AWS Storage Day 2022. I'm Dave Vellante, and we're pleased to have back on theCUBE Ed Naim, the GM of AWS File Storage. Ed, how you doing? Good to see you. >> I'm good, Dave. Good to see you as well. >> You know, we've been tracking AWS storage for a lot of years, 16 years actually. We've seen the evolution of services. Of course we started with S3 and object, and saw that expand to block and file, and now the pace is actually accelerating, and we're seeing AWS make more moves again today in block and object. But what about file? It's one format in the world, and the day wouldn't really be complete without talking about file storage. So what are you seeing from customers? Let's start with data growth. How are they dealing with the challenges? What are those challenges? If you could address, you know, specifically some of the issues that they're having, that would be great, and then later we're going to get into the role that cloud file storage plays. Take it away. >> Well, Dave, I'm definitely increasingly hearing customers talk about the challenges in managing ever-growing data sets, and they're especially challenged in doing that on-premises. When we look at the data that's stored on premises, zettabytes of data, the fastest growing data sets consist of unstructured data that are stored as files, and many companies have tens of petabytes or hundreds of petabytes or even exabytes of file data, and this data is typically growing 20 to 30 percent a year. And in reality, on-premises models really weren't designed to handle this amount of data and this type of growth. And I'm not just talking about keeping up with hardware purchases and hardware floor space; a big part of the challenge is labor and talent to keep up with the growth they're seeing. Companies managing storage on-prem really need an unprecedented number of skilled resources to manage the storage, and these skill sets are in really high demand and they're in short supply. And then another big part of the challenge that customers tell me all the time is that operating at scale, dealing with these ever-growing data sets at scale, is really hard. And it's not just hard in terms of the people you need and the skill sets that you need; operating at scale presents net new challenges. So for example, it becomes increasingly hard to know what data you have and what storage media your data is stored on when you have a massive amount of data spanning hundreds of thousands of applications and users, growing super fast each year. And at scale you start seeing edge technical issues get triggered more commonly, impacting your availability or your resiliency or your security, and you start seeing processes that used to work when you were at a much smaller scale no longer work. Scale is hard, it's really hard. And then finally, companies are wanting to do more with their fast-growing data sets, to get insights from them, and they look at the machine learning and the analytics and the processing services and the compute power that they have at their fingertips on the cloud, and having that data be in silos on-prem can really limit how they get the most out of their data. >> You know, glad you brought up the skills gap. I've been covering that quite extensively with my colleagues at ETR, you know, our survey partner, so that's a really important topic, and we're seeing it across the board. I mean, really acute in cybersecurity, but for sure just generally in IT. And frankly, CEOs don't want to invest in training people
to manage storage. I mean, it wasn't that long ago that managing LUNs was a talent, and of course nobody does that anymore. Executives would much rather apply skills to get value from data. So my specific question is, what can be done? What is AWS doing to address this problem? >> Well, with the growth of data that we're seeing, it's just really hard for a lot of IT teams to keep up with just the infrastructure management part that's needed. So things like deploying capacity and provisioning resources and patching and conducting compliance reviews, that stuff is just table stakes. The asks on these teams, to your point, are growing to be much bigger than those pieces. So we're really seeing fast uptake of our Amazon FSx service, because it's such an easy path for helping customers with these scaling challenges. FSx enables customers to launch and to run and to scale feature-rich and highly performant network attached file systems on AWS, and it provides fully managed file storage, which means that we handle all of the infrastructure, all of that provisioning and patching and ensuring high availability, and customers simply make API calls to do things like scale up their storage or change their performance level at any point or change a backup policy. And a big part of why FSx has been so appealing to customers is it really enables them to choose the file system technology that powers their storage. So we provide four of the most popular file system technologies: Windows File Server, NetApp ONTAP, OpenZFS and Lustre, so that storage and application admins can use what they're familiar with. So they essentially get the full capabilities, and even the management CLIs, that they're used to and that they've built workflows and applications around on-premises, but along with that they get, of course, the benefits of fully managed, elastic cloud storage that can be spun up and spun down and scaled on demand, performance changed on demand, et cetera. And what storage and application admins are seeing is that FSx not only helps them keep up with their scale and growth, but it gives them the bandwidth to do more of what they want to do: supporting strategic decision making, helping their end customers figure out how they can get more value from their data, identifying opportunities to reduce cost. And what we realize is that for a number of storage and application admins, the cloud is a different environment from what they're used to, and we're making it a priority to help educate and train folks on cloud storage. Earlier today we talked about AWS storage digital badges, and we announced a dedicated file badge that helps storage admins and professionals learn and demonstrate their AWS skills. You can think of our AWS storage badges as credentials that represent cloud computing learning that customers can add to their repertoire, add to their resume, as they're embarking on this cloud journey. And we'll be talking more in depth on this later today, especially around the file badge, which I'm very excited about.
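To make Ed's point about "customers simply make API calls" a little more concrete, here is a minimal sketch of what that management surface looks like from code. It uses boto3 against the FSx APIs; the region, subnet ID, capacity values and the choice of Lustre are placeholder assumptions for illustration, not details from the conversation.

```python
# Minimal sketch: driving Amazon FSx through API calls instead of managing
# file-server infrastructure by hand. Values below are illustrative assumptions.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Launch a managed Lustre file system (FSx also offers Windows File Server,
# NetApp ONTAP and OpenZFS types, as described above).
created = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                    # GiB; hypothetical starting size
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet
)
fs_id = created["FileSystem"]["FileSystemId"]

# "Scale up their storage ... at any point" is just another API call later on.
fsx.update_file_system(
    FileSystemId=fs_id,
    StorageCapacity=2400,                    # grow capacity without re-platforming
)

# Backup behavior is managed the same way, e.g. taking an on-demand backup.
fsx.create_backup(FileSystemId=fs_id)
```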
>> So, a couple things there that I wanted to comment on. I mean, I was there for the NetApp announcement; we've covered that quite extensively. This just shows that it's not a zero-sum game necessarily, right? It's a win-win-win for customers. You've got your specific AWS services, you've got partner services; you know, customers want choice. And then the managed service model, to me, is a no-brainer for most customers. We learned this in the Hadoop years. I mean, it just got so complicated, and then you saw what happened with the managed services around, you know, data lakes and lakehouses; it just really simplified things for customers. I mean, there's still some customers that want to do it themselves, but a managed service for file storage sounds like a really easy decision, especially for those IT teams that are overburdened, as we were talking about before. And I also like, you know, the education component; nice touch too, you get the badge thing, so that's kind of cool. So I'm hearing that the fully managed file storage service is a catalyst for cloud adoption. So the question is, which workloads should people choose to move into the cloud? Where's the low friction, low risk sweet spot, Ed? >> Well, that's one of the first questions that customers ask when they're about to embark on their cloud journey, and I wish I could give a simple or a single answer, but the answer is really, it varies, and it varies per customer. And I'll give you an example. For some customers, the cloud journey begins with what we call extending on-premises workloads into the cloud. So an example of that is compute bursting workloads, where customers have data on premises and they have some compute on premises, but they want to burst their processing of that data to the cloud, because they really want to take advantage of the massive amount of compute that they get on AWS. And that's common with workloads like visual effects rendering, chip design simulation, genomics analysis. So that's an example of extending to the cloud, really leveraging the cloud first for your workloads. Another example is disaster recovery, and that's a really common example: customers will use the cloud for their secondary or their failover site rather than maintaining their second on-prem location. So a lot of customers start with some of those workloads, by extending to the cloud. And then there's a lot of other customers where they've made the decision to migrate most or all of their workloads, and they're skipping the whole extending step. They aren't starting there; they're instead focused on going all in as fast as possible, because they really want to get to the full benefits of the cloud as fast as possible. And for them, the migration journey is really a matter of sequencing: sequencing which specific workloads to move and when. And what's interesting is we're increasingly seeing customers prioritizing their most important and their most mission-critical applications ahead of their other workloads in terms of timing, and they're doing that to get their workloads to benefit from the added resilience they get from running on the cloud. So it really does depend, Dave. >> Yeah, thank you. I mean, that's a pretty good description of the options there. Bursting, obviously, I love those examples you gave around genomics, chip design, visual effects rendering. The DR piece is, again, a very common, sort of historical sweet spot for cloud. But then the point about mission critical is interesting, because I hear a lot of customers, especially with the digital transformation push, wanting to change their operating model. I mean, on the one hand, not changing things, put it in the cloud, the lift and shift, low friction; but then once they get there, they're like, wow, we can do a lot more with the cloud. So that was really helpful, those examples. Now, last year at Storage Day you released a
new file service, and then you followed that up at re:Invent with another file service introduction. Sometimes, I can admit, I get lost in the array of services. So help us understand: when a customer comes to AWS with, like, an NFS or an SMB workload, how do you steer them to the right managed service, you know, the right horse for the right course? >> Yeah, well, I'll start by saying, you know, a big part of our focus has been on providing choice to customers, and what customers tell us is that the spectrum of options that we provide to them really helps them in their cloud journey, because there really isn't a one-size-fits-all file system for all workloads. And so having these options actually really helps them to be able to move pretty easily to the cloud. And so my answer to your question about where we steer a customer when they have a file workload is, it really depends on what the customer is trying to do, and in many cases where they're coming from. So I'll walk you through a little bit of how we think about this with customers. For storage and application admins who are extending existing workloads to the cloud or migrating workloads to AWS, the easiest path generally is to move to an FSx file system that provides the same or a really similar underlying file system engine to the one they use on premises. So for example, if you're running a NetApp appliance on premises or a Windows file server on premises, choosing that option within FSx provides the least effort for a customer to lift their application and their data set, and they'll get the full set of capabilities that they're used to, they'll get the performance profiles that they're used to, but of course they'll get all the benefits of the cloud that I was talking about earlier, like spin up and spin down, and fully managed, and elastic capacity. Then we also provide open source file systems within the FSx family. So if you're a customer and you're used to those, or if you aren't really wedded to a particular file system technology, these are really good options, and they're built on top of AWS's latest infrastructure innovations, which really allows them to provide pretty significant price and performance benefits to customers. So for example, the file servers for these offerings are powered by AWS's Graviton family of processors, and under the hood we use storage technology that's built on top of AWS's Scalable Reliable Datagram transport protocol, which really optimizes for speed on the cloud. And so for those two open source file systems: we have OpenZFS, and that provides a really powerful, highly performant NFS v3, v4, 4.1 and 4.2 file system built on a fast and resilient open source Linux file system. It has a pretty rich set of capabilities; it has things like point-in-time snapshots and in-place data cloning, and our customers are really using it because of these capabilities and because of its performance, for a pretty broad set of enterprise IT workloads and vertically focused workloads, like within the financial services space and the healthcare and life sciences space. And then Lustre is a scale-out file system that's built on the world's most popular high-performance file system, which is the Lustre open source file system, and customers are using it for compute intensive workloads where they're throwing tons of compute at massive data sets and they need to drive tens or hundreds of gigabytes per second of throughput. It's really popular for things like machine learning training and high performance computing, big data
analytics, video rendering and transcoding, so really those scale-out, compute intensive workloads. And then we have a very different type of customer, a very different persona, and this is the individual that we call the AWS builder. These are folks who are running cloud native workloads; they leverage a broad spectrum of AWS's compute and analytics services, and they have really no history of on-prem. Examples are data scientists who require a file share for training sets, research scientists who are performing analysis on lab data, developers who are building containerized or serverless workloads, and cloud practitioners who need a simple solution for storing assets for their cloud workflows. These folks are building and running a wide range of data-focused workloads, and they've grown up using services like Lambda and building containerized workloads. So most of these individuals generally are not storage experts, and they look for storage that just works. S3 and consumer file shares like Dropbox are their reference point for how cloud storage works, and they're indifferent to, or unaware of, file protocols like SMB or NFS, and performing typical NAS administrative tasks is just not a natural experience for them; it's not something they do. And we built Amazon EFS to meet the needs of that group. It's fully elastic, it's fully serverless, it spreads data across multiple availability zones by default, it scales infinitely, and it works very much like S3. So for example, you get the same durability and availability profile as S3, and you get intelligent tiering of colder data just like you do on S3. So that service just clicks with cloud native practitioners; it's intuitive and it just works. >> It's mind-boggling, the number of use cases you just went through. And this is where, you know, a lot of times people roll their eyes: oh, here's Amazon talking about customer obsession again. But if you don't stay close to your customers, there's no way you could have predicted, when you were building these services, how they were going to be put to use. The only way you can understand it is to watch what customers do with it. I loved the conversation about Graviton, we've written about that a lot. I mean, Nitro, we've written about how you've completely rethought virtualization, the security components in there. The HPC Lustre piece, and the EFS for data scientists, so really helpful there, thank you. I'm going to change topics a little bit, because there's been this theme that you've been banging on at Storage Day: putting data to work. And I tell you, it's a bit of a passion of mine, Ed, because frankly customers have been frustrated with the return on data initiatives. It's been historically complicated, very time consuming and expensive to really get value from data, and often the business lines end up frustrated. So let's talk more about that concept, and I understand you have an announcement that fits with this theme. Can you tell us more about that? >> Absolutely. Today we're announcing a new service called Amazon File Cache, and it's a service on AWS that accelerates and simplifies hybrid workflows. Specifically, Amazon File Cache provides a high speed cache on AWS that makes it easier to process file data regardless of where the data is stored. Amazon File Cache serves as a temporary, high performance storage location for data that's stored in on-premises file servers, or in file systems or object stores in AWS, and what it does is it enables enterprises to make these dispersed data
sets available to file-based applications on AWS, with a unified view and at high speeds. So think of sub-millisecond latencies and tens or hundreds of gigabytes per second of throughput. A really common use case it supports is when you have data stored on premises and you want to burst the processing workload to the cloud. You can set up this cache on AWS, and it allows you to have the working set for your compute workload be cached near your AWS compute. So what you would do as a customer when you want to use this is you spin up this cache, you link it to one or more on-prem NFS file servers, and then you mount this cache to your compute instances on AWS. And when you do this, all of your on-prem data will appear automatically as folders and files on the cache, and when your AWS compute instances access a file for the first time, the cache downloads the data that makes up that file in real time. That data then resides on the cache as you work with it, and when it's in the cache, your application has access to that data at those sub-millisecond latencies and at up to hundreds of gigabytes per second of throughput. All of this data movement is done automatically and in the background, completely transparent to your application that's running on the compute instances. And then when you're done with your workload, with your data processing job, you can export the changes and all the new data back to your on-premises file servers and then tear down the cache. Another common use case is when you have a compute intensive, file-based application and you want to process a data set that's in one or more S3 buckets: you can have this cache serve as a really high speed layer that your compute instances mount as a network file system. You can also place this cache in front of a mix of on-prem file servers and S3 buckets, and even FSx file systems that are on AWS. All of the data from these locations will appear within a single namespace that clients that mount the cache have access to, and those clients get all the performance benefits of the cache and also get a unified view of their data sets. And to your point about listening to customers and really paying attention to customers, Dave, we built this service because customers asked us to; a lot of customers asked us to, actually. It's a really helpful enabler for a pretty wide variety of cloud bursting workloads and hybrid workflows, ranging from media rendering and transcoding to engineering design simulation to big data analytics, and it really aligns with that theme of extend that we were talking about earlier. >> You know, I often joke that AWS has the best people working on solving the speed of light problem. Okay, but this idea of bursting, as I said, has been a great cloud use case from the early days, and bringing it to file storage is very sound, and the approach with File Cache looks really practical. When is the service available? How can I get started, you know, bursting to AWS? Give us the details there. >> Yeah, well, stay tuned. We announced it today at Storage Day, and it will be generally available later this year. Once it becomes available, you can create a cache via the AWS management console or through the SDKs or the CLI, and then within minutes of creating the cache, it'll be available to your Linux instances, and your instances will be able to access it using standard file system mount commands.
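As a rough illustration of the spin-up, mount, and tear-down workflow Ed walks through, here is a sketch in Python. Amazon File Cache was announced but not yet generally available at the time of this conversation, so the operation names, parameters, and mount syntax below are assumptions about how the API might be shaped (modeled on the existing FSx APIs), not documented behavior.

```python
# Hypothetical sketch of the File Cache burst workflow described above.
# Treat the API shape and parameter names as assumptions: the service was
# pre-GA at the time of this interview, so check the real SDK docs before use.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")  # assumption: File Cache sits in the FSx API family

# 1) Spin up a cache and link it to one or more on-prem NFS file servers.
cache = fsx.create_file_cache(                       # assumed operation name
    FileCacheType="LUSTRE",                          # assumed: the cache exposes a POSIX namespace
    StorageCapacity=1200,
    SubnetIds=["subnet-0123456789abcdef0"],          # placeholder
    DataRepositoryAssociations=[{                    # assumed field names linking to the on-prem export
        "FileCachePath": "/ns1",
        "DataRepositoryPath": "nfs://onprem-filer.example.com/export1",
    }],
)
cache_id = cache["FileCache"]["FileCacheId"]

# 2) From your EC2 instances, mount the cache with standard mount commands
#    (exact client and syntax are an assumption), then run the burst job
#    against the mounted path; file data is lazy-loaded on first access, e.g.
#      sudo mount -t lustre <cache-dns-name>@tcp:/<mount-name> /mnt/cache

# 3) When the job finishes, export changes back to the origin, then tear down.
fsx.delete_file_cache(FileCacheId=cache_id)          # assumed operation name
```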
And the pricing model is going to be a pretty familiar one to cloud customers: customers will only pay for the cache storage and the performance they need, and they can spin a cache up and use it for the duration of their compute burst workload and then tear it down. So I'm really excited that Amazon File Cache will make it easier for customers to leverage the agility and the performance and the cost efficiency of AWS for processing data, no matter where the data is stored. >> Yeah, cool. Really interested to see how that gets adopted. Ed, always great to catch up with you. As I said, the pace is mind-boggling; it's accelerating in the cloud overall, but storage specifically. So I'll ask: can we take a little breather here? Can we just relax for a bit and chill out? >> Not as long as customers are asking us for more things. So there's more to come, for sure. >> All right, Ed, thanks again. Great to see you. I really appreciate your time. >> Thanks, Dave. Great catching up. >> Okay, and thanks for watching our coverage of AWS Storage Day 2022. Keep it right there for more in-depth conversations on theCUBE, your leader in enterprise and emerging tech coverage. [Music]
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Edward Naim | PERSON | 0.99+ |
tens | QUANTITY | 0.99+ |
tens of petabytes | QUANTITY | 0.99+ |
hundreds of petabytes | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
amazon | ORGANIZATION | 0.99+ |
aws | ORGANIZATION | 0.99+ |
hundreds of thousands | QUANTITY | 0.99+ |
last year | DATE | 0.98+ |
16 years | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
first time | QUANTITY | 0.97+ |
each year | QUANTITY | 0.97+ |
dave | PERSON | 0.97+ |
second | QUANTITY | 0.97+ |
dave vellante | PERSON | 0.97+ |
20 30 percent a year | QUANTITY | 0.97+ |
later this year | DATE | 0.96+ |
one | QUANTITY | 0.96+ |
aws | TITLE | 0.95+ |
windows | TITLE | 0.94+ |
thousands of applications | QUANTITY | 0.94+ |
later today | DATE | 0.93+ |
one format | QUANTITY | 0.93+ |
hundreds of gigabytes per second | QUANTITY | 0.93+ |
first questions | QUANTITY | 0.93+ |
hundreds of gigabytes per second | QUANTITY | 0.92+ |
two open source | QUANTITY | 0.92+ |
s3 | TITLE | 0.92+ |
fsx | TITLE | 0.89+ |
4.1 | TITLE | 0.88+ |
first | QUANTITY | 0.88+ |
a lot of years | QUANTITY | 0.87+ |
earlier today | DATE | 0.84+ |
linux | TITLE | 0.84+ |
four of the most popular file | QUANTITY | 0.79+ |
nitro | ORGANIZATION | 0.79+ |
netapp | TITLE | 0.78+ |
4.2 | TITLE | 0.74+ |
single answer | QUANTITY | 0.74+ |
graviton | TITLE | 0.74+ |
zettabytes | QUANTITY | 0.73+ |
day | EVENT | 0.73+ |
lot of customers | QUANTITY | 0.73+ |
exabytes | QUANTITY | 0.72+ |
a lot of other customers | QUANTITY | 0.71+ |
2022 | DATE | 0.71+ |
v4 | TITLE | 0.71+ |
single name | QUANTITY | 0.68+ |
tons of compute | QUANTITY | 0.64+ |
couple things | QUANTITY | 0.63+ |
minutes | QUANTITY | 0.56+ |
Day | EVENT | 0.54+ |
Ian Massingham, MongoDB and Robbie Belson, Verizon | MongoDB World 2022
>> Welcome back to NYC, theCUBE's coverage of MongoDB World 2022. A few thousand people here, at least, bigger than many people perhaps expected, and a lot of buzz going on, and we're gonna talk devs. I'm really excited to welcome back Robbie Belson, who's the developer relations lead at Verizon, and Ian Massingham, who's the vice president of developer relations at MongoDB. Gents, good to see you. >> Great to be here. >> Thanks for having us. >> So Robbie, we just met a few weeks ago at the Red Hat Summit in Boston, and I was blown away by what Verizon is doing in developer land. And of course, Ian, you know, Mongo, its raison d'etre is developers. Start there. Why is Mongo so developer friendly, from your perspective? >> Well, it's been the ethos of MongoDB since day one. You know, back when we launched the first version of MongoDB back in 2009, we've always been about making developers' lives easier. And then in 2016, we announced and released MongoDB Atlas, which is our cloud managed service for MongoDB, you know, starting with a small number of regions built on top of AWS, and about 2,500 adoption events per week for MongoDB Atlas after the first year. Today, MongoDB Atlas provides a managed service for MongoDB developers around the world. We're present in almost a hundred cloud regions across AWS, GCP and Azure, and that adoption number is now running at about 25,000 developers a week. So, you know, the proof is really in the metrics. MongoDB is an incredibly popular platform for developers that wanna build data-centric applications. You just can't argue with the metrics, really. >> You know, Robbie, sometimes there's analysts who come up with these theories, and one of the theories I've been spouting for a long time is that developers are gonna win the edge. And now to see you at Verizon building out this developer community was really exciting to me. So explain how you got started with this journey. >> Absolutely. As you think about Verizon's 5G Edge, our mobile edge computing portfolio, we knew from the start that developers would play a central role in not only consuming the service, but shaping the roadmap for what it means to build a 5G future. And so we started this journey back in late 2019, and fast forward to about a year ago with Mongo, we realized, well, wait a minute, you look at the core service offerings available at the edge, we didn't know really what to do with data. We wanted to figure it out. We wanted the vote of confidence from developers. So there I was, in an apartment in Colorado, racing your open source Mongo at the edge versus in the region. Edge versus region, what would you see? And we saw tremendous performance improvements. It was so much faster, more than 40% faster, for thousands and thousands of writes. And we said, well, wait a minute, there's something here. So what often starts as an organic, developer-led intuition or hypothesis can really expand to a much broader go-to-market motion that really brings in the enterprise. And that's been our strategy from day one. >> Well, it's interesting. You talk about the performance. I just got off of a session talking about benchmarks in the financial services industry, you know, amazing numbers. And that's one of the hallmarks of Mongo, is it can play in a lot of different places. So you guys both have developer relations in your title. Is that how you met, some formal developer relations? >> We were a... >> Program.
>> Yeah, I would say that Verizon is one of the few customers that we also collaborate with on a developer relations effort. You know, it's in our mutual best interest to try to drive MongoDB consumption amongst developers using Verizon's 5G Edge network and their platform. So of course we work together to help increase awareness of MongoDB amongst mobile developers that want to use that kind of technology. >> So what's your story on this? >> I mean, as I mentioned, everything starts with an organic developer discovery. It all started, I just cold messaged a developer advocate on Twitter, and here we are at MongoDB World. It's amazing how things turn out. But one of the things that's really resonated with me: as I was speaking with one of your leads within your organization, they were mentioning that as MongoDB developed over the years, the mantra really became, we wanna make software development easy. >> Yep. >> And that really stuck with me, because from a network perspective, we wanna make networking easy. Developers are not gonna care about the internals of a 5G network. In fact, they want us to abstract away those complexities so that they can focus on building their apps. So what better co-innovation opportunity than taking MongoDB, making software easy, and we make the network easy. >> So how do you think about the edge? There's a variety, I mean, to me, you know, there's a lot of edge use cases. Think about the Home Depot or Lowe's. Okay, great, I can put like a little mini data center in there. That's cool, that's edge. But when I think of Verizon, I mean, you got cell towers, you've got the far edge. How do you think about edge, Robbie? >> Well, the edge is, I believe, a very ambiguous term by design. The edge is the device, the mobile device, an IoT device, right? It could be the radio towers that you mentioned. It could be the metro edge, the CDN. No one edge is better than the other; they're all just serving different use cases. So when we talk about the edge, we're focused on the mobile edge, which we believe is most conducive to B2B applications: a fleet of IoT devices that you can control, a manufacturing plant, a fleet of ground and aerial robotics. And in doing so, you can create a powerful compute mesh, where you could have a private network and private mobile edge computing by way of, say, an AWS Outpost, and then public mobile edge computing by way of AWS Wavelength. And why keep them separate? You could have a single compute mesh, even with MongoDB. And this is something that we've been exploring: you can extend Atlas, take a cluster, leave it in the region, and then use Realm, the mobile portfolio, and spread it all across the edge. So you're creating that unified compute and data mesh together. >> So you're describing what we've been expecting, a new architecture emerging, and that's gonna probably bring new economics and new use cases, right? Where are we today in that? First of all, is that a reasonable premise, that this is a sort of a new architecture that's being built out, and where are we in that build-out? How do you think about the future of that?
And ultimately become multi-tenant if that's the data volume that may be produced to each of those edge zones with hypothesis that was validated by developers that we continue to build out, but we recognize that we can't, we can't get that static. We gotta keep evolving. So one of our newest ideas as we think about, well, wait a minute, how can Mongo play in the 5g future? We started to get really clever with our 5g network APIs. And I, I think we talked about this briefly last time, 5g, programmability and network APIs have been talked about for a while, but developers haven't had a chance to really use them and our edge discovery service answering the question in this case of which database is the closest database, doesn't have to be invoked by the device anymore. You can take a thin client model and invoke it from the cloud using Atlas functions. So we're constantly permuting across the entire portfolio edge or otherwise for what it means to build at the edge. We've seen such tremendous results. >>So how does Mongo think about the edge and, and, and playing, you know, we've been wondering, okay, which database is actually gonna be positioned best for the edge? >>Well, I think if you've got an ultra low latency access network using data technology, that adds latency is probably not a great idea. So MongoDB since the very formative years of the company and product has been built with performance and scalability in mind, including things like in memory storage for the storage engine that we run as well. So really trying to match the performance characteristics of the data infrastructure with the evolution in the mobile network, I think is really fundamentally important. And that first principles build of MongoDB with performance and scalability in mind is actually really important here. >>So was that a lighter weight instance of, of Mongo or not >>Necessarily? No, not necessarily. No, no, not necessarily. We do have edge cashing with realm, the mobile databases Robbie's already mentioned, but the core database is designed from day one with those performance and scalability characteristics in mind, >>I've been playing around with this. This is kind of a, I get a lot of heat for this term, but super cloud. So super cloud, you might have data on Preem. You might have data in various clouds. You're gonna have data out at the edge. And, and you've got an abstraction that allows a developer to, to, to tap services without necessarily if, if he or she wants to go deep into the S great, but then there's a higher level of services that they can actually build for their customers. So is that a technical reality from a developer standpoint, in your view, >>We support that with the Mongo DB multi-cloud deployment model. So you can place Mongo DB, Atlas nodes in any one of the three hyperscalers that we mentioned, AWS, GCP or Azure, and you can distribute your data across nodes within a cluster that is spread across different cloud providers. So that kinds of an kind of answers the question about how you do data placement inside the MongoDB clustered environment that you run across the different providers. And then for the abstraction layer. When you say that I hear, you know, drivers ODMs the other intermediary software components that we provide to make developers more productive in manipulating data in MongoDB. This is one of the most interesting things about the technology. We're not forcing developers to learn a different dialect or language in order to interact with MongoDB. 
We meet them where they are by providing idiomatic interfaces to MongoDB in JavaScript, in C#, in Python, in Rust, in fact in 12 different programming languages that we support as a first party, plus additional community-contributed programming languages that the community has created drivers and ODMs for. So there's really that model that you've described; in hypothesis, it exists in reality, using... >> Those different components. It's not just a series of siloed instances in... >> In different... it's the fabric, essentially. Yeah. >> What does the Verizon developer look like? Where does that individual come from? We talked about this a little bit a few weeks ago, but I wonder if you could describe it. >> Absolutely. My view is that the Verizon, or just mobile edge ecosystem in general, developers are present at this very conference. They're everywhere. They're building apps. And as Ian mentioned, those idiomatic interfaces: we need to take our network APIs, take the infrastructure that's being exposed, and make sure that it's leveraging languages, frameworks, automation, tools, the likes of Terraform and beyond. We want to meet developers where they are and build tools that are easy for them to use. And so you had talked about the supercloud. I often call it the cloud continuum. So we took it abstraction by abstraction. We started with: will it work in one edge? Will it work in multiple edges, public and private? Will it work in all of the edges for a given region, public or private? Will it work in multiple regions? Could it work in multi-cloud? We've taken it piece by piece by piece, and in doing so, abstracting away the complexity of the network, meeting developers where they are, providing those idiomatic interfaces to interact with our APIs. So think the edge discovery service, but not in a silo: within Atlas Functions. So the way that we're able to converge portfolios, using tools that developers already use, know, and love, just makes it that much easier. >> Do you feel like... I like the cloud continuum, 'cause that's really what it is, the supercloud. How does the security model evolve with that? >> At least in the context of the mobile edge, the attack surface is a lot smaller because it's only for mobile traffic. Not to say that there couldn't be various configuration and human errors introduced by a given application experience, but it is a much more secure and also reliable environment. From a failure domain perspective, there are more edge zones, so it's less conducive to a region-wide failure because there are so many more availability zones. And that goes hand in hand with security. >> Thoughts on security from your perspective? I mean, you've made some announcements this week, the encryption component that you guys announced. >> Yeah, we issued a press release this morning about a capability called queryable encryption, which actually, as we record this, Mark Porter, our CTO, is talking about in his keynote. And this is really the next generation of security for data stored within databases. So the trade-off with field-level encryption within databases has always been very hard, very rigid. Either you have keys stored within your database, which means that your data is decrypted while it's resident in memory on your database engine. This, of course, allows you to perform query operations on that data.
Or you have keys that are managed and stored in the client, which means the data is permanently obscured from the engine, and therefore you can't offload query capabilities to your data platform. You've got to do everything in the client. So if you want 10 records, but you've got a million encrypted records, you have to pull a million encrypted records to the client and decrypt them all, and you see the performance hit in there. Big performance hit. What we've got with queryable encryption, which we announced today, is the ability to keep data encrypted in memory in the engine, in the database, in the data platform, issue queries from the client, but use a technology called structured encryption to allow the database engine to make decisions, operate queries, and find data without ever being able to see it, without it ever being decrypted in the memory of the engine. So it's groundbreaking technology, based on research in the field of structured encryption, with the first commercial database provider to bring this to market. >> So how does the mobile edge developer think about that? I mean, you hear a lot about shifting left and not bolting on security. Is this an example of that? >> It certainly could be, but I think the mobile edge developer is still stuck with: how does this stuff even work? And I think we need to be mindful of that as we build out learning journeys. So one of my favorite moments with Mongo was an immersion day we had hosted earlier last year where, from an enterprise perspective, we're focused on B2B use cases, but there's nothing stopping us: we were building a B2C app based on the theme of the Winter Olympics. At the time, you could take a picture of Shaun White or of Nathan Chen and see that it was in fact that athlete, and then overlaid on that web app was the number of medals they'd accrued, with a little trumpeter congratulating you for selecting that athlete. So I think it's important to build trust and drive education with developers with a simpler experience, and then rapidly evolve, overlaying the features that Ian just mentioned over time. >> I think one of the keys with cryptography is back to the familiar messaging for the cloud: offloading heavy lifting. You actually need to make it difficult to impossible for developers to get this wrong, and you want to make it as easy as possible for developers to deal with cryptography. And that, of course, is what we're trying to do with our driver technology combined with structured encryption, with queryable encryption. >> But Robbie, your point is there's lots of opportunity for education. I mean, I have to say, with the developers that I work with, I'm in awe of how they solve problems, and the way they solve problems: if they don't know the answer, they figure out how to go get it. So how are your two communities, and other communities, coming together to solve such problems and share, whether it's best practices or how do I do this? >> Well, I'm not gonna lie, in-person events are a bunch of fun, and one of the easiest domain knowledge exchange opportunities. When you're all in person, you can ideate, you can whiteboard, you can brainstorm. And often those conversations are what leads to that infrastructure module that an immersion day features.
And it's just amazing what in-person events can do. But community groups of interest, whether it's a Twitch stream, whether it's a particular code sample, we rely heavily on digital means today to upskill the developer community, but also to build on it: by means of a simple pull request, introduce new features that maybe you weren't even thinking of before. >> Yeah. You know, that's a really important point, because when you meet people face to face, you build a connection. And so if you ask a question, you're more likely perhaps to get an answer, or if one doesn't exist in a search, you know, hey, we met at the conference, let's collaborate on this. Guys, congratulations on this brave new world. You're in a really interesting spot. You know, "developers, developers, developers," as Steve Ballmer famously screamed. And I was glad to see Dave was not screaming and jumping up and down on the stage like that, but the message still resonates. So thank you, definitely appreciate it. All right, keep it right there. This is Dave Vellante for theCUBE's coverage of MongoDB World 2022 from New York City. We'll be right back.
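As a companion to the queryable encryption discussion above, here is a hedged sketch of what the client-side setup can look like with the Python driver. The option names reflect recent PyMongo releases (roughly 4.4 and later) and may differ by version; the local KMS key, URI, namespace, and field names are placeholders, and automatic encryption additionally requires the crypt_shared library or mongocryptd alongside the driver. Treat it as the shape of the configuration, not a drop-in script.

```python
# Hedged sketch: client-side setup for queryable encryption with PyMongo.
# Placeholders: cluster URI, database/collection names, and a local KMS key
# (production deployments would use a cloud KMS). APIs vary by driver version.
import os
from pymongo import MongoClient
from pymongo.encryption import AutoEncryptionOpts, ClientEncryption
from bson.codec_options import CodecOptions

kms_providers = {"local": {"key": os.urandom(96)}}      # illustration only
key_vault_namespace = "encryption.__keyVault"

# Declare which fields stay encrypted inside the engine yet remain queryable.
encrypted_fields = {
    "fields": [
        {"path": "ssn", "bsonType": "string", "keyId": None,
         "queries": {"queryType": "equality"}},
    ]
}

auto_opts = AutoEncryptionOpts(
    kms_providers,
    key_vault_namespace,
    encrypted_fields_map={"hr.employees": encrypted_fields},
)
client = MongoClient(
    "mongodb+srv://user:password@example-cluster.mongodb.net",
    auto_encryption_opts=auto_opts,
)

# Create the collection (data keys are generated for fields with keyId=None),
# then insert and query; the server evaluates the equality predicate using
# structured encryption without ever holding the plaintext SSN in memory.
client_encryption = ClientEncryption(
    kms_providers, key_vault_namespace, client, CodecOptions()
)
client_encryption.create_encrypted_collection(
    client["hr"], "employees", encrypted_fields, kms_provider="local"
)
coll = client["hr"]["employees"]
coll.insert_one({"name": "Ann", "ssn": "123-45-6789"})
print(coll.find_one({"ssn": "123-45-6789"}))
```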
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Steve | PERSON | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
Robbie Bellson | PERSON | 0.99+ |
Ian Massingham | PERSON | 0.99+ |
Ian | PERSON | 0.99+ |
10 records | QUANTITY | 0.99+ |
Robbie | PERSON | 0.99+ |
Robbie Belson | PERSON | 0.99+ |
Colorado | LOCATION | 0.99+ |
2009 | DATE | 0.99+ |
Dave | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
Mark Porter | PERSON | 0.99+ |
thousands | QUANTITY | 0.99+ |
Mongo | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
MongoDB | ORGANIZATION | 0.99+ |
Sean White | PERSON | 0.99+ |
Nathan Chen | PERSON | 0.99+ |
Olympics | EVENT | 0.99+ |
Python | TITLE | 0.99+ |
MongoDB | TITLE | 0.99+ |
today | DATE | 0.99+ |
NYC | LOCATION | 0.99+ |
late 20 | DATE | 0.99+ |
more than 40% | QUANTITY | 0.99+ |
two communities | QUANTITY | 0.99+ |
Ravi | PERSON | 0.98+ |
MongoDB Atlas | TITLE | 0.98+ |
Mongo DB | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
JavaScript | TITLE | 0.98+ |
this morning | DATE | 0.98+ |
one edge | QUANTITY | 0.97+ |
12 different pro programming languages | QUANTITY | 0.97+ |
New York city | LOCATION | 0.97+ |
first version | QUANTITY | 0.97+ |
this week | DATE | 0.97+ |
both | QUANTITY | 0.97+ |
Azure | TITLE | 0.96+ |
ORGANIZATION | 0.95+ | |
Atlas | TITLE | 0.95+ |
C sharp | TITLE | 0.95+ |
a million encrypted records | QUANTITY | 0.95+ |
about 25,000 developers a week | QUANTITY | 0.93+ |
Twitch | ORGANIZATION | 0.93+ |
first year | QUANTITY | 0.93+ |
19 | DATE | 0.89+ |
Mark Lyons, Dremio | AWS Startup Showcase S2 E2
(upbeat music) >> Hello, everyone, and welcome to theCUBE presentation of the AWS Startup Showcase, data as code. This is season two, episode two of the ongoing series covering the exciting startups from the AWS ecosystem. Here we're talking about operationalizing the data lake. I'm your host, John Furrier, and my guest here is Mark Lyons, VP of product management at Dremio. Great to see you, Mark. Thanks for coming on. >> Hey John, nice to see you again. Thanks for having me. >> Yeah, we were talking before we came on camera here. On this showcase we're going to spend the next 20 minutes talking about the new architectures of data lakes and how they expand and scale. But we kind of were reminiscing about the old big data days, and how this really changed. There's a lot of hangover from (mumbles) that kind of fell through, cloud took over, now we're in a new era, and the theme here is data as code. It really highlights that data is now in the developer cycles of operations. So infrastructure as code led the DevOps movement for cloud programmable infrastructure. Now you've got data as code, which is really accelerating DataOps, MLOps, DatabaseOps, and more developer focus. So this is a big part of it. You guys at Dremio have a cloud platform, a query engine, and a data tier innovation. Take us through the positioning of Dremio right now. What's the current state of the offering? >> Yeah, sure, happy to, and thanks for kind of introing into the space that we're headed. I think the world is changing, and databases are changing. So today, Dremio is a full database platform, a data lakehouse platform on the cloud. So we're all about keeping your data in open formats in your cloud storage, but bringing that full functionality that you would want to access the data, as well as manage the data. All the functionality folks would be used to, from ANSI SQL compatibility to inserts, updates, and deletes on that data, keeping that data in Parquet files in the Iceberg table format, another level of abstraction so that people can access the data in a very efficient way. And going even further than that, what we announced with Dremio Arctic, which is in public preview on our cloud platform, is a full Git-like experience for the data. So just like you said, data as code, right? We went through waves of source code and infrastructure as code, and now we can treat the data as code, which is amazing. You can have development branches, you can have staging branches, ETL branches, which are separate from production. Developers can do experiments. You can make changes, you can test those changes before you merge back to production and let the consumers see that data. Lots of innovation on the platform, super fast velocity of delivery, and lots of customers adopting it in just the first month here since we announced Dremio Cloud generally available. The adoption's been amazing. >> Yeah, and I think we're going to dig into a lot of the architecture, but I want to highlight the point you made about the branching off and taking a branch of Git. This is what developers do, right? Developers use GitHub, Git; they make branches from code, they build on top of other code. That's open source. This is what's been around for generations. Now for the first time we're seeing data sets being taken out of production to be worked on and coded and tested, and even doing look-backs or even forward-looking analysis. This is data being programmed. This is data as code. This is really... you couldn't get any closer to data as code. >> Yeah.
It's all done through metadata, by the way. So there's no actual copying of these data sets, 'cause in these big data systems, cloud data lakes and so on, these tables are billions of records, trillions of records, super wide, hundreds of columns wide, thousands of columns wide. You have to do this all through metadata operations so you can control what version of the data an individual is working with, and which version of the data the production systems are seeing, because these data sets are too big. You don't want to be moving them. You can't be moving them. You can't be copying them. It's all metadata and manifest files and pointers to basically keep track of what's going on. >> I think this is the most important trend we've seen in a long time, because if you think about what Agile did for developers, okay, speed, DevOps, cloud scale, now you've got agility on the data side of it, where you're basically breaking down the old proprietary ways of doing data warehousing, but not killing the functionality of what data warehouses did, just doing more volume. Data warehouses were proprietary, not open. They were different use cases. They were single application; developers would use a data warehouse query, not a lot of volume. But as you get volume, these things are inadequate. And now you've got the new open, Agile way. Is this Agile data engineering at play here? >> Yeah, I think it totally is. It's bringing it as far forward as possible. We're talking about making the data engineering process easier and more productive for the data engineer, which ultimately makes the consumers of that data much happier, as well as way more experiments can happen, way more use cases can be tried. If it's not a burden and it doesn't require building a whole new pipeline and defining a schema and adding columns and data types and all this stuff, you can do a lot more with your data much faster. So it's really going to be super impactful to all these businesses out there trying to be data driven, especially when you're looking at data as code and branching. With a branch off, you can de-risk your changes. You're not worried about messing up the production system, messing up that data, having it seen by an end user. For some businesses, data is their business, so that data would be going all the way to a consumer, a third party. And then it gets really scary. There's a lot of risk if you show the wrong credit score to a consumer or you do something like that. So it's really de-risking... >> Even updating machine learning algorithms. So for instance, if the data sets change, you can always be iterating on things like machine learning or learning algorithms. This is kind of new. This is awesome, right? >> I think it's going to change the world, because this stuff was so painful to do. The data sets had gotten so much bigger, as you know, but we were still doing it in the old way, which was typically moving data around for everyone. It was copying data down, sampling data, moving data, and now we're just basically saying, hey, don't do that anymore. We've got to stop moving the data. It doesn't make any sense. >> So I've got to ask you, Mark, data lakes are growing in popularity. I was originally down on data lakes. I called them data swamps. I didn't think they were going to be as popular, because at that time, distributed file systems like Hadoop and object store in the cloud were really cool. So what happened between that promise of distributed file systems and object store, and data lakes? What made data lakes popular?
What made that work, in your opinion? >> Yeah, it really comes down to the metadata, which I already mentioned once. But we went through these waves. John, you saw it: we did the EDWs, then the data lakes, and then the cloud data warehouses. I think we're at the start of a cycle back to the data lake. And it's because the data lakes this time around, with the Apache Iceberg table format, with project (mumbles) and what Dremio's working on around metadata, these things aren't going to become data swamps anymore. They're actually going to be functional systems that do inserts, updates, and deletes. You can see all the commits. You can time travel them. And all the files are actually managed and optimized: you have to partition the data, you have to merge small files into larger files. Oh, by the way, this is stuff that all the warehouses have done behind the scenes, all the housekeeping they do, but people weren't really aware of it. And the data lakes the first time around didn't solve all these problems, so those files landing in a distributed file system did become a mess. If you just land JSON, Avro, or Parquet files, CSV files into HDFS or an S3-compatible object store, doesn't matter, if you're just parking files and you're going to deal with it as schema-on-read instead of schema-on-write, you're going to have a mess. If you don't know which tool changed the files, which user deleted a file or updated a file, you will end up with a mess really quickly. So to take care of that, you have to put a table format on it, so everyone's looking at Apache Iceberg or the Databricks Delta format, which is an interesting conversation, similar to the Parquet and ORC file format story that we saw play out. And then you track the metadata. So you have those manifest files. You know which files changed when, with which engine, in which commit. And you can actually make a functional system that's not going to become a swamp. >> Another trend that's extending beyond the data lake is other data sources, right? So you have a lot of other data, not just in data lakes, so you have to kind of work with that. How do you guys answer the question around some of the mission-critical BI dashboards out there on the latency side? A lot of people have been complaining that these mission-critical BI dashboards aren't getting the kind of performance they need as they add more data sources and they try to do more. >> Yeah, that's a great question. Dremio actually does a bunch of interesting things to bring the performance of these systems up, because at the end of the day, people want to access their data really quickly. They want the response times of these dashboards to be interactive. Otherwise the data's not interesting, if it takes too long to get it. To answer the question, yeah, a couple of things. First of all, from a data source side, Dremio is very proficient with Parquet files in an object store, like we just talked about, but it also can access data in other relational systems, so whether that's a Postgres system, whether that's a Teradata system or an Oracle system. That's really useful if you have dimensional data, customer data: not the largest data set in the world, not the fastest moving data set in the world, but you don't want to move it. We can query that where it resides. Bringing in new sources is definitely, we all know, a key to getting better insights; it's in your data, it's joining sources together. And then from a query speed standpoint, there's a lot of things going on here.
Everything from the Apache Arrow project, which is an in-memory format, so you're not serializing and de-serializing the data back and forth, as well as what we call reflections, which are basically a re-indexing or pre-computing of the data. But we leave it in Parquet format, in an open format in the customer's account, so that you can have aggregates and other things that are really popular in these dashboards pre-computed. So millisecond response, lightning fast, like the tricks that a warehouse would do, that the warehouses have been doing forever, right? >> Yeah, more deals coming in. And obviously the architecture, we'll get into that, now has to handle the growth. And as your customers and practitioners see the volume and the variety and the velocity of the data coming in, how are they adjusting their data strategies to respond to this? Again, cloud is clearly the answer, not the data warehouse, but what are they doing? What's the strategy adjustment? >> It's interesting, when we start talking to folks, I think sometimes it's a really big shift in thinking about data architectures and data strategies when you look at the Dremio approach. It's very different than what most people are doing today around ETL pipelines, and then bringing stuff into a warehouse, and oh, the warehouse is too overloaded, so let's build some cubes and extracts into the next tier of tools to speed up those dashboards for those tools. And Dremio has totally flipped this on its head and said, no, let's not do all those things. That's time consuming. It's brittle, it breaks. And actually your agility and the scope of what you can do with your data decreases. You go from all your data and all your data sources to smaller and smaller. We actually call it the perimeter of doom, and a lot of people look at this and say, yeah, that kind of looks like how we're doing things today. So from a Dremio perspective, it's really about no copies, trying to keep as much data in one place, keeping it in one open format, and less data movement. And that's a very different approach for people. I think they don't realize how much you can accomplish that way. And your latency shrinks down too. Your actual latency from data created to insight is much shorter. And it's not because of the query response time; that latency is mostly because of data movement and copies and all these things. So you really want to shrink your time to insight. It's not about getting a faster query from a few seconds down; it's about changing the architecture. >> The data drift, as they say, interesting there. I've got to ask you on the personnel side, the team side: you've got the technical side, you've got the non-technical consumers of the data, and data science or data engineering is ramping up. We mentioned earlier that data engineering being Agile is a key innovation here. As you've got to blend the two personas of technical and non-technical people playing with data, coding with data, where are the bottlenecks in this process today? How can data teams overcome these bottlenecks? >> I think we see a lot of bottlenecks in the process today: a lot of data movement, a lot of change requests. Update this dashboard. Oh, well, that dashboard update requires an ETL pipeline update, which requires a column to be added to this warehouse. So then you've got these personas, like you said, some more technical, some less technical, the data consumers, the data engineers. Well, the data engineers are getting totally overloaded with requests and work.
And it's not even super value-add work to the business. It's not really driving big changes in their culture and insights and new use cases for data. It's churning through kind of small changes, but it's taking too much time. It's taking days, if not weeks, for these organizations to manage small changes. And then the data consumers, the less technical folks, can't get the answers that they want. They're waiting and waiting and waiting, and they don't understand why things are so challenging, how things could take so much time. So from a Dremio perspective, it's amazing to watch these organizations unleash their data, get the data engineers' productivity up, stop dealing with some of the last-mile ETL and small changes to the data. And Dremio actually says, hey, data consumers, here's a really nice GUI. You don't need to be a SQL expert; the tool will write the joins for you. You can click on a column and say, hey, I want to calculate a new field, and calculate that field. And it's all done virtually, so it's not changing the physical data sets. The actual data engineering team doesn't even really need to care at that point. So you get happier data consumers at the end of the day. They're doing things more self-service. They're learning about the data, and the data engineering teams can go do value-add things. They can re-architect the platform for the future. They can do POCs to test out new technologies that could support new use cases and bring those into the organization. Things that really add value, instead of just churning through backlogs of, hey, can we get a column added, or can we change... Everyone's doing app development, A/B testing, and those developers are king. Those pipelines stream all this data down, and when the JSON files change, you need agility. And if you don't have that agility, you just get this endless backlog that you never... >> This is data as code in action. You're committing data back into the main branch, and it's been tested. That's what developers do. So this is really kind of the next step function. I've got to put the customer hat on for a second and ask you kind of the pessimist question. Okay, we've had data lakes, I've got data lakes, data lakes have been around, I've got query engines here and there, they're all over the place. What's missing? What's been missing from the architecture to fully realize the potential of a data lakehouse? >> Yeah, I think that's a great question. The customers say exactly that, John. They say, "I've got 22 databases, you've got to be kidding me. You showed up with another database." Or, hey, let's talk about a cloud data lake, or a data lake: again, I did the data lake thing. I had a data lake and it wasn't everything I thought it was going to be. >> It was bad. It was a data swamp. >> Yeah, so customers really think this way, and you say, well, what's different this time around? Well, the cloud. In the original data lake world, and I'm just going to focus on data lakes, everything was still direct attached storage, so you had to scale your storage and compute out together. And we built these huge systems, thousands and thousands of HDFS nodes and stuff. Well, the cloud brought separated compute and storage, but data lakes had never seen separated compute and storage until now. We went from the data lake with direct attached storage to the cloud data warehouse with separated compute and storage.
So the cloud architecture and getting compute and storage separated is a huge shift in the data lake world. And that agility of, like, well, I'm only going to apply the compute that I need for this question, for this answer, right now, and not have 5,000 servers of compute sitting around for some peak moment, or 5,000 compute servers just because I have five petabytes or 50 petabytes of data that need to be stored on the disks that are attached to them. So I think the cloud architecture and separating compute and storage is the first thing that's different this time around about data lakes. But more important than that is the metadata tier, the data tier, and having sufficient metadata to have the functionality that people need on the data lake. Whether that's from a governance and compliance standpoint, to actually be able to do a delete on your data lake, or for productivity and treating that data as code, like we're talking about today, and being able to time travel it, version it, branch it. And now these data lakes... the data lakes back in the original days were getting to 50 petabytes. Now think about how big these cloud data lakes could be. Even larger, and you can't move that data around. So we have to be really intelligent and really smart about the data operations: versioning all that data, knowing which engine touched the data, which person made the last commit, and being able to track all that is ultimately what's going to make this successful. Because if you don't have the governance in place these days with data, the projects are going to fail. >> Yeah, and I think separating the query layer, or SQL layer, and the data tier is another innovation that you guys have. Also it's a managed cloud service, Dremio Cloud now. And you've got the open source angle too, which is also going to open up more standardization around some of these awesome features, like you mentioned the joins, and I think you guys built on top of Parquet and some other cool things. And you've got a community developing, so you get the cloud and community kind of coming together. So it's the real world that is coming to light saying, hey, I need real-world applications, not the theory of old school. So what use cases do you see suited for this kind of new way, new architecture, new community, new programmability? >> Yeah, I see people doing all sorts of interesting things, and I'm sure what we've introduced with Dremio Arctic and the data as code is going to open up a whole new world of things that we don't even know about today. But generally speaking, we have customers doing very interesting things, very data application things, like building really high performance data right into use cases, whether that's a supply chain and manufacturing use case, whether that's a pharma or biotech use case, a banking use case, and really unleashing that data right into an application. We also see a lot of traditional data analytics use cases, more the traditional business intelligence or dashboarding use cases. That stuff is totally achievable, no problems there. But I think the most interesting stuff is companies really figuring out how to bring that data... when we offer the flexibility that we're talking about, and the agility that we're talking about, you can really start to bring that data back into the apps, into the work streams, into the places where the business gets more value out of it. Not in a dashboard that some person might have access to, or a set of people have access to.
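A hedged sketch of what "bringing the data back into the apps" can look like in practice: querying Dremio from application code over Arrow Flight with pyarrow. The host, port, credentials, and dataset path are placeholders, and connection details differ between Dremio Cloud and a self-managed coordinator; the pattern below assumes a self-managed coordinator with the default Flight port and basic authentication.

```python
# Minimal sketch: run a SQL query against Dremio over Arrow Flight and consume
# the result as an Arrow table / pandas DataFrame inside an application.
# Host, credentials, and the dataset path are placeholders.
import pyarrow.flight as flight

client = flight.FlightClient("grpc+tcp://dremio-coordinator.example.com:32010")
bearer = client.authenticate_basic_token("app_user", "app_password")
options = flight.FlightCallOptions(headers=[bearer])

sql = "SELECT region, SUM(amount) AS total FROM sales.orders GROUP BY region"
info = client.get_flight_info(flight.FlightDescriptor.for_command(sql), options)
reader = client.do_get(info.endpoints[0].ticket, options)

table = reader.read_all()          # data stays columnar (Arrow) end to end
print(table.to_pandas().head())
```

Because the wire format is Arrow, there is no row-by-row serialization step between the engine and the application, which is the same point made above about avoiding serialize/deserialize overhead.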
So even in the Dremio Cloud announcement, the press release, there was a customer, they're in Europe, called Garvis AI, and they do AI for supply chains. It's an intelligent application, and it's showing customers transparently how they're getting to these predictions. And they stood this all up in a very short period of time, because it's a cloud product. They don't have to deal with provisioning, management, upgrades. I think they had their stuff going in like 30 minutes or something, super quick, which is amazing. The data was already there, and for a lot of organizations, their data's already in these cloud storages. And if that's the case... >> If they have data, they're a use case. This is agility. This is agility coming to the data engineering field, making data programmable, enabling the data applications, the DataOps for everybody, for coding... >> For everybody. And for so many more use cases at these companies. These data engineering teams, these data platform teams, whether they're in marketing or ad tech or finserv or telco, they have a list, a roadmap of use cases that they're waiting to get to. And if they're drowning underwater in the current tooling and barely keeping that alive, and oh, by the way, John, you can't go hire 30 new data engineers tomorrow and bring them onto the team to get capacity, you have to innovate at the architecture level to unlock more data use cases, because you're not going to go triple your team. That's not possible. >> It's going to unlock a tsunami of value, because everyone's clogged in the system and it's painful, right? >> Yeah. >> They've got delays, you've got bottlenecks, you've got people complaining it's hard, scar tissue. So now I think this brings ease of use and speed to the table. >> Yeah. I think that's what we're all about, is making the data super easy for everyone. This should be fun and easy, not really painful and really hard and risky. In a lot of these old ways of doing things, there's a lot of risk. You start changing your ETL pipeline, you add a column to the table, and all of a sudden you've got potential risk that things are going to break, and you don't even know what's going to break. >> Proprietary, not a lot of volume and usage, and on-premises, versus open, cloud, Agile. (John chuckles) Come on, which path? The curtain or the box, what are you going to take? It's a no-brainer. >> Which way do you want to go? >> Mark, thanks for coming on theCUBE. Really appreciate you being part of the AWS Startup Showcase, data as code, great conversation. Data as code is going to enable the next wave of innovation and impact the future of data analytics. Thanks for coming on theCUBE. >> Yeah, thanks John, and thanks to the AWS team. A great partnership between AWS and Dremio too. Talk to you soon. >> Keep it right there, more action here on theCUBE. As part of the showcase, stay with us. This is theCUBE, your leader in tech coverage. I'm John Furrier, your host, thanks for watching. (downbeat music)
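To make the branching workflow discussed in this conversation a little more tangible, here is a sketch of the data-as-code flow expressed as Dremio-style SQL against an Arctic (Nessie-backed) catalog. The exact branch DDL varies by Dremio release, and the catalog, table, and branch names are made up, so treat the statements as illustrative of the pattern rather than exact syntax; run_sql stands in for whatever mechanism submits SQL (for example, the Flight client shown earlier).

```python
# Hedged sketch of the branch/merge flow for "data as code". Statement syntax is
# illustrative of Arctic/Nessie-style commands and should be checked against the
# current Dremio documentation; names below are placeholders.

def run_sql(statement: str) -> None:
    print("would execute:", statement)          # placeholder executor

statements = [
    # Branch production without copying any data files; only metadata pointers move.
    "CREATE BRANCH etl_april IN arctic_catalog",
    # Land and validate changes on the branch; production ("main") never sees them.
    "USE BRANCH etl_april IN arctic_catalog",
    "INSERT INTO arctic_catalog.sales.orders SELECT * FROM staging.orders_2022_04",
    "SELECT COUNT(*) FROM arctic_catalog.sales.orders AT BRANCH etl_april",
    # Once checks pass, merge back to main; again a metadata-only operation.
    "MERGE BRANCH etl_april INTO main IN arctic_catalog",
]

for stmt in statements:
    run_sql(stmt)
```

The design point is the same one made in the interview: because tables are tracked through Iceberg metadata and commits, a branch is a set of pointers, not a second copy of a multi-petabyte data set.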
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
AWS | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Mark Lyons | PERSON | 0.99+ |
30 minutes | QUANTITY | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Mark | PERSON | 0.99+ |
50 petabytes | QUANTITY | 0.99+ |
five petabytes | QUANTITY | 0.99+ |
two personas | QUANTITY | 0.99+ |
5,000 servers | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
hundreds of columns | QUANTITY | 0.99+ |
22 databases | QUANTITY | 0.99+ |
Dremio | ORGANIZATION | 0.99+ |
trillions of records | QUANTITY | 0.99+ |
Dremio | PERSON | 0.99+ |
Dremio Arctic | ORGANIZATION | 0.99+ |
Fiserv | ORGANIZATION | 0.99+ |
first time | QUANTITY | 0.98+ |
30 new data engineers | QUANTITY | 0.98+ |
billions of records | QUANTITY | 0.98+ |
thousands of columns | QUANTITY | 0.98+ |
first thing | QUANTITY | 0.98+ |
Thousands of thousands | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
one place | QUANTITY | 0.97+ |
Oracle | ORGANIZATION | 0.97+ |
Apache | ORGANIZATION | 0.96+ |
S3 | TITLE | 0.96+ |
Git | TITLE | 0.96+ |
Cloud | TITLE | 0.95+ |
Hadoop | TITLE | 0.95+ |
first month | QUANTITY | 0.94+ |
Parquet | TITLE | 0.94+ |
Dremio Cloud | TITLE | 0.91+ |
5,000 compute servers | QUANTITY | 0.91+ |
one | QUANTITY | 0.91+ |
JSON | TITLE | 0.89+ |
First | QUANTITY | 0.89+ |
single application | QUANTITY | 0.89+ |
Garvis | ORGANIZATION | 0.88+ |
GitHub | ORGANIZATION | 0.87+ |
Apache | TITLE | 0.82+ |
episode | QUANTITY | 0.79+ |
Agile | TITLE | 0.77+ |
season two | QUANTITY | 0.74+ |
Agile | ORGANIZATION | 0.69+ |
DevOps | TITLE | 0.67+ |
Startup Showcase S2 E2 | EVENT | 0.66+ |
Teradata | ORGANIZATION | 0.65+ |
theCUBE | ORGANIZATION | 0.64+ |
Changing the Game for Cloud Networking | Pluribus Networks
>> Everyone wants a cloud operating model. Since the introduction of the modern cloud last decade, the entire technology landscape has changed. We've learned a lot from the hyperscalers, especially from AWS. Now, one thing is certain in the technology business: it's so competitive that if a faster, better, cheaper idea comes along, the industry will move quickly to adopt it. They'll add their unique value and then they'll bring solutions to the market. And that's precisely what's happening throughout the technology industry because of cloud. And one of the best examples is Amazon's Nitro. That's AWS's custom-built hypervisor that delivers on the promise of more efficiently using resources and expanding things like processor optionality for customers. It's a secret weapon for Amazon. As we wrote last year, every infrastructure company needs something like Nitro to compete. Why do we say this? Well, Wikibon, our research arm, estimates that nearly 30% of CPU cores in the data center are wasted. >> They're doing work that they weren't designed to do well, specifically offloading networking, storage, and security tasks. So if you can eliminate that waste, you can recapture dollars that drop right to the bottom line. That's why every company needs a Nitro-like solution. As a result of these developments, customers are rethinking networks and how they utilize precious compute resources. They can't, or won't, put everything into the public cloud, for many reasons. That's one of the tailwinds for tier two cloud service providers and why they're growing so fast. They give options to customers that don't want to keep investing in building out their own data centers and don't want to migrate all their workloads to the public cloud. So these providers, and on-prem customers, want to be more like hyperscalers, right? They want to be more agile, and to do that, they're distributing networking and security functions and pushing them closer to the applications. >> Now, at the same time, they're unifying their view of the network so it can be less fragmented, managed more efficiently, with more automation and better visibility. How are they doing this? Well, that's what we're going to talk about today. Welcome to Changing the Game for Cloud Networking, made possible by Pluribus Networks. My name is Dave Vellante, and today on this special CUBE presentation, John Furrier and I are going to explore these issues in detail. We'll dig into new solutions being created by Pluribus and Nvidia to specifically address offloading wasted resources, accelerating performance, isolating data, and making networks more secure, all while unifying the network experience. We're going to start on the west coast in our Palo Alto studios, where John will talk to Mike Capuano of Pluribus and Ami Badani of Nvidia. Then we'll bring on Alessandro Barbieri of Pluribus and Pete Lumbis from Nvidia to take a deeper dive into the technology. And then we're going to bring it back here to our east coast studio and get the independent analyst perspective from Bob Laliberte of the Enterprise Strategy Group. We hope you enjoy the program. Okay, let's do this, over to John. >> Okay, let's kick things off. We're here with Mike Capuano, the CMO of Pluribus Networks, and Ami Badani, VP of networking marketing and developer ecosystem at Nvidia. Great to have you, welcome folks. >> Thank you. Thanks. >> So let's get into the problem situation with the cloud unified network. What problems are out there?
What challenges do cloud operators have? Mike, let's get into it. >> Yeah, the challenges we're looking at are for non-hyperscalers: that's enterprises, governments, tier two service providers, cloud service providers. And the first mandate for them is to become as agile as a hyperscaler. So they need to be able to deploy services and security policies. And second, they need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Really, ultimately they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. We're seeing a growth in cyber attacks. It's not slowing down, it's only getting worse, and solving for this security problem across clouds is absolutely critical. And the way to do it is to move security out to the host. >> Okay. With that goal in mind, what's the Pluribus vision? How does this tie together? >> Yeah, so basically what we see is that this demands a new architecture, and that new architecture has four tenets. The first tenet is unified and simplified cloud networks. If you look at cloud networks today, there are sort of discrete, bespoke cloud networks per hypervisor, per private cloud, edge cloud, public cloud. Each of the public clouds has different networks. That needs to be unified. You know, if we want these folks to be able to be agile, they need to be able to issue a single command or instantiate a security policy across all those locations with one command, and not have to go to each one. The second, like I mentioned, is distributed security: distributed security without compromise, extended out to the host, is absolutely critical. So micro-segmentation and distributed firewalls, but it doesn't stop there. They also need pervasive visibility. >> You know, it's sort of like with security: you can't protect what you can't see. So you need visibility everywhere. The problem is visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, tap aggregation infrastructure. That really needs to be built into this unified network I'm talking about. And the last thing is automation. All of this needs to be SDN-enabled. So this is related to my comment about abstraction: abstract the complexity of all of these discrete networks, whatever's down there in the physical layer. I don't want to see it. I want to abstract it. I want to define things in software, but I do want to leverage the power of hardware to accelerate that. So that's the fourth tenet: SDN automation. >> Mike, we've been talking on theCUBE a lot about this architectural shift, and customers are looking at this. This is a big part of everyone who's looking at cloud operations next gen. How do we get there? How do customers get this vision realized? >> That's a great question, and I appreciate the tee-up. I mean, we're here today for that reason. We're introducing two things today. The first is a unified cloud networking vision, and that is a vision of where Pluribus is headed with our partners like Nvidia long term. And that is about deploying a common operating model: SDN-enabled, SDN-automated, hardware-accelerated, across all clouds.
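As a thought sketch of the "one command, every location" idea Mike describes, the operational model reduces to pushing a single declarative policy object to the fabric and letting the fabric propagate it to every switch and DPU. The endpoint, payload fields, and token below are hypothetical, not a real Pluribus API; they only illustrate the shape of the interaction.

```python
# Purely illustrative: push one declarative security policy to a fabric
# controller once, and let the fabric roll it out across all sites.
# The URL, JSON schema, and response fields are hypothetical placeholders.
import requests

policy = {
    "name": "block-legacy-telnet",
    "match": {"protocol": "tcp", "dst_port": 23},
    "action": "deny",
    "scope": ["private-cloud-east", "edge-pop-12", "public-cloud-vpc-7"],
}

resp = requests.post(
    "https://fabric-controller.example.net/api/v1/security-policies",  # hypothetical
    json=policy,
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
resp.raise_for_status()
print("policy accepted, fabric-wide rollout id:", resp.json().get("rollout_id"))
```

The contrast being drawn in the interview is with the status quo, where the same intent would be re-expressed by hand in each hypervisor, private cloud, edge site, and public cloud, one console at a time.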
Whether that's underlay or overlay, switch or server, any hypervisor infrastructure, containers, any workload, it doesn't matter. That's ultimately where we want to get, and that's what we talked about earlier. The first step in that vision is what we call the unified cloud fabric, and this is the next generation of our adaptive cloud fabric. What's nice about this is we're not starting from scratch. We have an award-winning adaptive cloud fabric product that is deployed globally. And in particular, we're very proud of the fact that it's deployed in over a hundred tier one mobile operators as the network fabric for their 4G and 5G virtualized cores. We know how to build carrier-grade networking infrastructure. What we're doing now to realize this next generation unified cloud fabric is extending from the switch to this Nvidia BlueField-2 DPU. We know there's a... >> Hold that up real quick. That's a good prop. That's the BlueField, Nvidia. >> It's the Nvidia BlueField-2 DPU, data processing unit. And what we're doing, fundamentally, is extending our SDN-automated fabric, the unified cloud fabric, out to the host. But it does take processing power, so we knew that we didn't want to implement that running on the CPU, which is what some other companies do, because it consumes revenue-generating CPU cycles from the application. So a DPU is a perfect way to implement this, and we knew that Nvidia was the leader with this BlueField-2. And so that is the first step in realizing this vision. >> I mean, Nvidia has always been powering some great workloads with the GPU. Now you've got DPU networking, and Nvidia is here. What is the relationship with Pluribus? How did that come together? Tell us the story. >> Yeah. So, you know, we've been working with Pluribus for quite some time. I think the last several months was really when it came to fruition, with what Pluribus is trying to build and what Nvidia has. So we have this concept of a BlueField data processing unit, which, if you think about it, conceptually does really three things: offload, accelerate, and isolate. So, offload your workloads from your CPU to your data processing unit, infrastructure workloads that is. Accelerate: there's a bunch of acceleration engines, so you can run infrastructure workloads much faster than you would otherwise. And then isolation: you have this nice security isolation between the data processing unit and your other CPU environment, so you can run completely isolated workloads directly on the data processing unit. So we introduced this a couple of years ago, and with Pluribus, we've been talking to the Pluribus team for quite some months now.
And so what pluribus is really trying to do is extending the network fabric from the host, from the switch to the host, and really have that single pane of glass for network operators to be able to configure provision, manage all of the complexity of the network environment. >>So that's really how the partnership truly started. And so it started really with extending the network fabric, and now we're also working with them on security. So, you know, if you sort of take that concept of isolation and security isolation, what pluribus has within their fabric is the concept of micro-segmentation. And so now you can take that extended to the data processing unit and really have, um, isolated micro-segmentation workloads, whether it's bare metal cloud native environments, whether it's virtualized environments, whether it's public cloud, private cloud hybrid cloud. So it really is a magical partnership between the two companies with their unified cloud fabric running on, on the DPU. >>You know, what I love about this conversation is it reminds me of when you have these changing markets, the product gets pulled out of the market and, and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you, how do you guys differentiate what sets this apart for customers with what's in it for the customer? >>Yeah. So I mentioned, you know, three things in terms of the value of what the Bluefield brings, right? There's offloading, accelerating, isolating, that's sort of the key core tenants of Bluefield. Um, so that, you know, if you sort of think about what, um, what Bluefields, what we've done, you know, in terms of the differentiation, we're really a robust platform for innovation. So we introduced Bluefield to, uh, last year, we're introducing Bluefield three, which is our next generation of Bluefields, you know, we'll have five X, the arm compute capacity. It will have 400 gig line rate acceleration, four X better crypto acceleration. So it will be remarkably better than the previous generation. And we'll continue to innovate and add, uh, chips to our portfolio every, every 18 months to two years. Um, so that's sort of one of the key areas of differentiation. The other is the, if you look at Nvidia and, and you know, what we're sort of known for is really known for our AI artificial intelligence and our artificial intelligence software, as well as our GPU. >>So you look at artificial intelligence and the combination of artificial intelligence plus data processing. This really creates the, you know, faster, more efficient, secure AI systems from the core of your data center, all the way out to the edge. And so with Nvidia, we really have these converged accelerators where we've combined the GPU, which does all your AI processing with your data processing with the DPU. So we have this convergence really nice convergence of that area. And I would say the third area is really around our developer environment. So, you know, one of the key, one of our key motivations at Nvidia is really to have our partner ecosystem, embrace our technology and build solutions around our technology. So if you look at what we've done with the DPU, with credit and an SDK, which is an open SDK called Doka, and it's an open SDK for our partners to really build and develop solutions using Bluefield and using all these accelerated libraries that we expose through Doka. 
And so part of our differentiation is really building this open ecosystem for our partners to take advantage of and build solutions around our technology. >> You know, what's exciting is, when I hear you talk, it's like you realize that there's no one general-purpose network anymore. Everyone has their own super environment, supercloud, or these new capabilities. They can really craft their own, I'd say, custom environment at scale with easy tools, right? And it's all kind of, again, this is the new architecture, Mike, you were talking about. How do customers run this effectively and cost-effectively, and how do people migrate? >> Yeah, I think that is the key question, right? So we've got this beautiful architecture. You know, Amazon Nitro is a good example of a smart NIC architecture that has been successfully deployed, but enterprises and tier two service providers and tier one service providers and governments are not Amazon, right? So they need to migrate there, and they need this architecture to be cost-effective. And that's super key. I mean, the reality is DPUs are moving fast, but they're not going to be deployed everywhere on day one. Some servers will have DPUs right away, some servers will have DPUs in a year or two, and then there are devices that may never have DPUs, right? IoT gateways, or legacy servers, even mainframes. So that's the beauty of a solution that creates a fabric across both the switch and the DPU, right?
>>And now we've got this clean DMARC where the DevOps folks get the services they need and the NetApp folks get the control and agility they need. So that's a huge value. Um, the next piece of value is distributed security. This is essential. I mentioned earlier, you know, put pushing out micro-segmentation and distributed firewall, basically at the application level, right, where I create these small, small segments on an by application basis. So if a bad actor does penetrate the perimeter firewall, they're contained once they get inside. Cause the worst thing is a bad actor, penetrates a perimeter firewall and can go wherever they want and wreak havoc. Right? And so that's why this, this is so essential. Um, and the next benefit obviously is this unified networking operating model, right? Having, uh, uh, uh, an operating model across switch and server underlay and overlay, workload agnostic, making the life of the NetApps teams much easier so they can focus their time on really strategy instead of spending an afternoon, deploying a single villain, for example. >>Awesome. And I think also from my standpoint, I mean, perimeter security is pretty much, I mean, they're out there, it gets the firewall still out there exists, but pretty much they're being breached all the time, the perimeter. So you have to have this new security model. And I think the other thing that you mentioned, the separation between dev ops is cool because the infrastructure is code is about making the developers be agile and build security in from day one. So this policy aspect is, is huge. Um, new control points. I think you guys have a new architecture that enables the security to be handled more flexible. >>Right. >>That seems to be the killer feature here, >>Right? Yeah. If you look at the data processing unit, I think one of the great things about sort of this new architecture, it's really the foundation for zero trust it's. So like you talked about the perimeter is getting breached. And so now each and every compute node has to be protected. And I think that's sort of what you see with the partnership between pluribus and Nvidia is the DPU is really the foundation of zero trust. And pluribus is really building on that vision with, uh, allowing sort of micro-segmentation and being able to protect each and every compute node as well as the underlying network. >>This is super exciting. This is an illustration of how the market's evolving architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I gotta ask how you guys go into market together. Michael, start with you. What's the relationship look like in the go to market with an Nvidia? >>Sure. Um, I mean, we're, you know, we're super excited about the partnership, obviously we're here together. Um, we think we've got a really good solution for the market, so we're jointly marketing it. Um, uh, you know, obviously we appreciate that Nvidia is open. Um, that's, that's sort of in our DNA, we're about open networking. They've got other ISV who are gonna run on Bluefield too. We're probably going to run on other DPS in the, in the future, but right now, um, we're, we feel like we're partnered with the number one, uh, provider of DPS in the world and, uh, super excited about, uh, making a splash with it. >>I'm in get the hot product. >>Yeah. So Bluefield too, as I mentioned was GA last year, we're introducing, uh, well, we now also have the converged accelerator. 
So I talked about artificial intelligence — artificial intelligence with the Bluefield DPU, all of that put together on a converged accelerator. The nice thing there is you can run those workloads either way. So if you have an artificial intelligence workload and an infrastructure workload, you can run them separately on the same platform, or you can actually run artificial intelligence applications on the Bluefield itself. So that's what the converged accelerator really brings to the table, and that's available now. Then we have Bluefield-3, which will be available late this year, and I talked about how much better that next generation of Bluefield is in comparison to Bluefield-2. So we will see Bluefield-3 shipping later on this year. And then our software stack, which I talked about, which is called DOCA — we're on our second version, DOCA 1.2, and we're releasing DOCA 1.3 in about two months from now. And so that's really our open ecosystem framework that allows you to program the Bluefields. So we have all of our acceleration libraries and security libraries, that's all packed into this SDK called DOCA, and it really gives that simplicity to our partners to be able to develop on top of Bluefield. So as we add new generations of Bluefield — you know, next year we'll have another version, and so on and so forth — DOCA is really that unified layer that allows Bluefield to be both forwards compatible and backwards compatible. So partners only really have to think about writing to that SDK once, and then it automatically works with future generations of Bluefields. So that's the nice thing around DOCA. And then in terms of our go-to-market model, we're working with every major OEM. So later on this year, you'll see major server manufacturers releasing Bluefield-enabled servers. So more to come. >>Awesome. Save money, make it easier, more capabilities, more workload power. This is the future of cloud operations. >>Yeah. And one thing I'll add is, we have a number of customers, as you'll hear in the next segment, that are already signed up and will be working with us for our early field trial starting late April, early May. We are accepting registrations. You can go to www.pluribusnetworks.com/eft if you're interested in signing up for being part of our field trial and providing feedback on the product. >>Awesome innovation and networking. Thanks so much for sharing the news. Really appreciate it. Thanks so much. Okay. In a moment, we'll be back to look deeper into the product, the integration, security, zero trust, use cases. You're watching theCUBE, the leader in enterprise tech coverage. >>Cloud networking is complex and fragmented, slowing down your business. How can you simplify and unify your cloud networks to increase agility and business velocity? >>Pluribus unified cloud networking provides a unified, simplified, and agile network fabric across all clouds. It brings the simplicity of a public cloud operating model to private clouds, dramatically reducing complexity and improving agility, availability, and security. Now enterprises and service providers can increase their business velocity and delight customers in the distributed multi-cloud era. We achieve this with a new approach to cloud networking, the Pluribus unified cloud fabric.
This open, vendor-independent network fabric unifies networking and security across distributed clouds. The first step is extending the fabric to servers equipped with data processing units, unifying the fabric across switches and servers, and it doesn't stop there. The fabric is unified across underlay and overlay networks and across all workloads and virtualization environments. The unified cloud fabric is optimized for seamless migration to this new distributed architecture, leveraging the power of the DPU for application-level micro-segmentation, distributed firewall, and encryption, while still supporting those servers and devices that are not equipped with a DPU. Ultimately, the unified cloud fabric extends seamlessly across distributed clouds, including central, regional, and edge private clouds and public clouds. The unified cloud fabric is a comprehensive network solution that includes everything you need for cloud networking: built-in SDN automation, distributed security without compromises, pervasive wire-speed visibility and application insight, available on your choice of open networking switches and DPUs, all at the lowest total cost of ownership. The end result is a dramatically simplified unified cloud networking architecture that unifies your distributed clouds and frees your business to move at cloud speed. >>To learn more, visit www.pluribusnetworks.com. >>Okay. We're back. I'm John Furrier with theCUBE, and we're going to go deeper into a deep dive into the unified cloud networking solution from Pluribus and Nvidia. And we'll examine some of the use cases with Alessandra Burberry, VP of product management at Pluribus Networks, and Pete Bloomberg, who's director of technical marketing at Nvidia, joining remotely. Guys, thanks for coming on. Appreciate it. >>Yeah. >>So, deep dive, let's get into the what and how. Alessandra, we heard earlier about the Pluribus-Nvidia partnership and the solution you're working together on. What is it? >>Yeah. First let's talk about the what. What we are really integrating with the Nvidia Bluefield DPU technology is Pluribus's network operating system, which has been shipping in volume in multiple mission-critical networks. So this Netvisor ONE network operating system runs today on merchant silicon switches, and effectively it's a standard open network operating system for the data center. And the novelty about this system is that it integrates a distributed control plane for an automated and effective SDN overlay. This automation is completely open, interoperable, and extensible to other types of clouds — it's not closed — and this is actually what we're now porting to the Nvidia DPU.
You don't have to go through the network operating system running on x86 to control this network node. So you get, effectively, the experience of a top of rack for virtual machines or a top of rack for Kubernetes pods, where — if you allow me the analogy — instead of connecting a server NIC directly to a switch port, now you're connecting a VM virtual interface to a virtual interface on the switch on the NIC. >>And also as part of this integration, we put a lot of effort, a lot of emphasis, into accelerating the entire data plane for networking and security. So we are taking advantage of DOCA, the Nvidia DOCA API, to program the accelerators, and we accomplish two things with that. Number one, you have much greater performance, much better performance than running the same network services on an x86 CPU. And second, this gives you the ability to free up, I would say, around 20 to 25% of the server capacity, to be devoted either to additional workloads to run your cloud applications, or perhaps you can actually shrink the power footprint and compute footprint of your data center by 20% if you want to run the same number of compute workloads. So great efficiencies in the overall approach. >>And this is completely independent of the server CPU, right? >>Absolutely. There is zero code running on the x86, and this is what we think enables a very clean demarcation between compute and network. >>So Pete, I gotta get you in here. We heard that the DPU enables a cleaner separation of DevOps and NetOps. Can you explain why that's important? Because everyone's talking DevSecOps right now, you've got NetOps, NetSecOps, this separation. Why is this clean separation important? >>Yeah, I think it's a pragmatic solution, in my opinion. We wish the world was all kind of rainbows and unicorns, but it's a little messier than that. And I think a lot of the DevOps stuff and that mentality and philosophy — there's a natural fit there, right? You have applications running on servers, so you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And I think that we in the networking industry have gotten closer together, but there's still a gap, there's still some distance, and I think that distance isn't going to be closed. And so, again, it comes down to pragmatism, and one of my favorite phrases is, look, good fences make good neighbors. And that's what this is. >>Yeah. That's a great point, because DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you're talking about that part of the stack under the covers, under the hood, if you will. This is a super important distinction, and this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. >>Yeah, exactly. And I think that's where, one, from the policy side, the security side, the zero trust aspect of this — if you get it wrong on that network side, all of a sudden you can totally open up those capabilities. And so security is part of that.
But the other part is thinking about this at scale, right? So we're taking one top of rack switch and adding, you know, up to 48 servers per rack. And so that ability to automate, orchestrate, and manage at scale becomes absolutely critical. >>Alessandra, this is really the "why" we're talking about here, and this is scale. And again, getting it right — if you don't get it right, you're going to be really kind of up you-know-what creek, so this is a huge deal. Networking matters, security matters, automation matters, DevOps, NetOps, all coming together, clean separation. Help us understand how this joint solution with Nvidia fits into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now. >>Yeah, absolutely. So I think here with this solution, we're attacking two major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about the first one. We are really unifying. If we're unifying something, something must be at least fragmented or disjointed, and what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers. You build your IP Clos fabric, leaf-and-spine topologies. This is actually a well-understood problem, I would say. There are multiple vendors with, let's say, similar technologies, very well standardized, very well understood, and almost a commodity, I would say, building an IP fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. The services have actually now moved into the compute layer, where cloud builders have to instrument a separate network virtualization layer, where they deploy segmentation and security closer to the workloads. >>And this is where the complications arise. This high-value part of the cloud network is where you have a plethora of options that don't talk to each other, and they are very dependent on the kind of hypervisor or compute solution you choose. For example, the networking APIs between an ESXi environment, a Hyper-V environment, or a Xen environment are completely disjointed. You have multiple orchestration layers. And then when you throw Kubernetes into this type of architecture, you're introducing yet another level of networking. And when Kubernetes runs on top of VMs, which is a prevalent approach, you're actually just stacking up multiple networks on the compute layer that eventually run on the physical fabric infrastructure. Those are all ships in the night, effectively, right? They operate as completely disjointed. And we're trying to attack this problem first with the notion of a unified fabric, which is independent from any workloads — whether this fabric spans a switch, which can be connected to a bare-metal workload, or spans all the way inside the DPU, where you have your multi-hypervisor compute environment. It's one API, one common network control plane, and one common set of segmentation services for the network.
That's probably the number one. >>You know, it's interesting, as I hear you talking — I hear "one network," different operating models — it reminds me of the old serverless days. You know, there's still servers, but they call it serverless. Is there going to be a term "networkless"? Because at the end of the day, it should be one network, not multiple operating models. This is a problem that you guys are working on, is that right? I mean, I'm just joking, serverless and networkless, but the idea is it should be one thing. >>Yeah, effectively what we're trying to do is recompose this fragmentation in terms of network operation across physical networking and server networking. Server networking is where the majority of the problems are, because as much as you have standardized the ways of building physical networks and cloud fabrics with IP protocols and the internet, you don't have that kind of operational efficiency at the server layer, and this is what we're trying to attack first with this technology. The second aspect we're trying to attack is how we distribute security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services, or even encryption. Those are all capabilities enabled by the Bluefield DPU technology, and we can actually integrate those capabilities directly into the network fabric, limiting dramatically, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical, which is typically the way people today segment and secure the traffic in the cloud. >>Awesome. Pete, all kidding aside about networkless and serverless — kind of a fun play on words there — the network is one thing, it's basically distributed computing, right? So I'd love to get your thoughts about this distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail why a DPU-based approach is better than the alternatives? >>Yeah, I think what's beautiful, and kind of what the DPU brings that's new to this model, is a completely isolated compute environment inside. So it's the "yo dawg, I heard you like a server, so I put a server inside your server." And so we provide Arm CPUs, memory, and network accelerators inside, and that is completely isolated from the host. So the actual x86 host just thinks it has a regular NIC in there, but you actually have this full control plane thing. It's just like taking your top of rack switch and shoving it inside of your compute node. And so you have not only the separation within the data plane, but you have this complete control plane separation. So you have this element that the network team can now control and manage, and we're taking all of the functions we used to do at the top of rack switch and we're just shifting them down. >>And as time has gone on, we've struggled to put more and more and more into that network edge. And the reality is the network edge is the compute layer, not the top of rack switch layer. And so that provides this phenomenal enforcement point for security and policy. And I think outside of today's solutions around virtual firewalls, the other option is centralized appliances.
And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that a VLAN is good enough, or we hope that a VXLAN tunnel is good enough, and we can't actually apply more advanced techniques there, because we can't physically, or financially, afford that appliance to see all of the traffic. And now that we have a distributed model with this accelerator, we can do it. >>So what's in it for the customer, real quick, because I think this is an interesting point. You mentioned policy — everyone in networking knows policy is just a great thing, and you hear it being talked about up the stack as well, when you start getting into orchestrating microservices and whatnot, all that good stuff going on there, containers and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale, more edge deployment, flexibility relative to security policies and application enablement. Is that what the customer gets out of this architecture? What's the enablement? >>It comes down to taking, again, the capabilities that were in that top of rack switch and pushing them down. So that brings simplicity, smaller blast radiuses for failure, smaller failure domains; maintenance on the networks and the systems becomes easier. Your ability to integrate across workloads becomes infinitely easier. And again, we always want to kind of separate each one of those layers. So just as in, say, a VXLAN network, my leaf and spine don't have to be tightly coupled together, I can now do this at a different layer. And so you can run a DPU with any networking in the core there, and so you get this extreme flexibility. You can start small, you can scale large. To me, the possibilities are endless. >>Yes, it's a great security control plane. Really, flexibility is key, and also being situationally aware of any kind of threats or new vectors or whatever's happening in the network. Alessandra, this is huge upside, right? You've already identified some successes with some customers on your early field trials. What are they doing, and why are they attracted to the solution? >>Yeah, I think the response from customers has been the most encouraging and exciting thing for us, to continue to work on and develop this product, and we have actually learned a lot in the process. We talked to tier-two and tier-three cloud providers, we talked to service providers, telco-type networks, as well as large enterprise customers. Let me call out a couple of examples here, just to give you a flavor. There is a service provider, a cloud provider in Asia, who is actually managing a cloud where they are offering services based on multiple hypervisors. They have native services based on Xen, but they also on-ramp into the cloud workloads based on ESXi and KVM, depending on what the customer picks from the menu. >>And they have the problem of now orchestrating, through their orchestrator, or integrating with Xen Center, with vSphere, with OpenStack, to coordinate these multiple environments, and in the process, to provide security, they actually deploy virtual appliances everywhere, which has a lot of cost, complication, and eats up into the server CPU.
What they saw in this technology — they actually call it game-changing — is the ability to remove all this complexity with a single network and distribute the micro-segmentation service directly into the fabric. And overall, they're hoping to get out of it a tremendous OpEx benefit and an overall operational simplification for the cloud infrastructure. That's one potent use case. Another large enterprise customer, a global enterprise customer, is running both ESXi and Hyper-V in their environment, and they don't have a solution to do micro-segmentation consistently across hypervisors. >>So again, micro-segmentation is a huge driver; security looks like it's a recurring theme talking to most of these customers. And in the telco space, we're working with a few types of customers on the EFT program, where the main goal is actually to harmonize network operations. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex; it is frankly also slow and inefficient. And then they have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability of the Bluefield DPUs. Those are just some examples. >>That was a great use case, and a lot more potential. I see that with the unified cloud networking — great stuff. Pete, shout out to you guys at Nvidia, we've been following your success for a long time and you're continuing to innovate as cloud scales, and Pluribus here with the unified networking, kind of bringing it to the next level. Great stuff. Great to have you guys on. And again, software keeps driving the innovation, and networking is just a part of it, and it's the key solution. So I got to ask both of you to wrap this up. How can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem, they're trying to think about multiple clouds, trying to think about unification around the network and giving more security, more flexibility to their teams. How can people learn more? >>Yeah, so Alessandra and I have a talk at the upcoming Nvidia GTC conference, so that's the week of March 21st through 24th. You can go and register for free at nvidia.com/gtc. You can also watch recorded sessions if you end up watching us on YouTube a little bit after the fact. And we're going to dive a little bit more into the specifics and the details of what we're providing in the solution. >>Alessandra, how can people learn more? >>Yeah, absolutely. People can go to the Pluribus website, www.pluribusnetworks.com/eft, and they can fill out the form, and Pluribus will contact them to either learn more or actually sign up for the early field trial program, which starts at the end of April. >>Okay. Well, we'll leave it there. Thanks to you both for joining. Appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE. Thanks for watching. >>Okay. We've heard from the folks at Pluribus Networks and Nvidia about their effort to transform cloud networking and unify bespoke infrastructure.
Now let's get the perspective from an independent analyst, and to do so, we welcome in ESG senior analyst Bob Laliberte. Bob, good to see you. Thanks for coming into our east coast studios. >>Oh, thanks for having me. It's great to be here. >>Yeah. So this idea of a unified cloud networking approach, how serious is it? What's driving it? >>Yeah, there's certainly a lot of drivers behind it, but probably first and foremost is the fact that application environments are becoming a lot more distributed, right? The IT pendulum tends to swing back and forth, and we're definitely on one that's swinging from consolidated to distributed. And so applications are being deployed in multiple private data centers, multiple public cloud locations, edge locations, and as a result of that, what you're seeing is a lot of complexity. So organizations are having to deal with this highly disparate environment. They have to secure it, they have to ensure connectivity to it, and all that's driving up complexity. In fact, when we asked in one of our last surveys, last year, about network complexity, more than half, 54%, came out and said, hey, our network environment is now either more or significantly more complex than it used to be. >>And as a result of that, what you're seeing is it's really impacting agility. So everyone's moving to these modern application environments, distributing them across areas so they can improve agility, yet it's creating more complexity. So it's a little bit counterproductive and, you know, really counter to their overarching digital transformation initiatives. From what we've seen, nine out of 10 organizations today are either beginning, in process, or have a mature digital transformation initiative, but their top goals, when you look at them, probably shouldn't be a surprise: the number one goal is driving operational efficiency. So it makes sense — I've distributed my environment to create agility, but I've created a lot of complexity, so now I need these tools that are going to help me drive operational efficiency, drive better experience. >>I love how you bring in the data; ESG does a great job with that. The question is, is it about just unifying existing networks, or is there sort of a need to rethink — kind of do over — how networks are built? >>Yeah, that's a really good point, because certainly unifying networks helps, right? Driving any kind of operational efficiency helps. But in this particular case, because we've made the transition to new application architectures, and the impact that's having as well, it's really about changing and bringing in new frameworks and new network architectures to accommodate those new application architectures. And by that, what I'm talking about is the fact that these new modern application architectures — microservices, containers — are driving a lot more east-west traffic. So in the old days, it used to be easier: north-south traffic coming out of the server, one application per server, things like that. Now you've got hundreds, if not thousands, of microservices communicating with each other, users communicating to them. So there's a lot more traffic, and a lot of it's taking place within the servers themselves.
The other issue that you're starting to see as well, from that security perspective, is that when we were all consolidated, we had those perimeter-based, legacy, castle-and-moat security architectures, but that doesn't work anymore when the applications aren't in the castle, right? >>When everything's spread out, that no longer happens. So we're absolutely seeing organizations trying to make a shift, and I think, much like the shift that we're seeing with all the remote workers and the SASE framework to enable a secure framework there, this is almost the same thing. We're seeing this distributed services framework come up to support the applications better within the data centers, within the cloud data centers, so that you can drive that security closer to those applications and make sure they're fully protected. And that's really driving a lot of the zero trust stuff you hear, right? So never trust, always verify, making sure that everything is really secure. Micro-segmentation is another big area — ensuring that these applications, when they're connected to each other, are fully segmented out. And that's, again, because if someone does get a breach, if they are in your data center, you want to limit the blast radius, you want to limit the amount of damage that's done. By doing that, it really makes it a lot harder for them to see everything that's in there. >>You know, you mentioned zero trust. It used to be a buzzword, and now it's become a mandate. And I love the moat analogy. You build a moat to protect the queen and the castle, but the queen has left the castle, it's just distributed. So how should we think about this Pluribus and Nvidia solution? There's a spectrum — help us understand that. You've got appliances, you've got pure software solutions, you've got what Pluribus is doing with Nvidia. Help us understand that. >>Yeah, absolutely. I think as organizations recognize the need to distribute their services closer to the applications, they're trying different models. So from a legacy approach, from a security perspective, they've got these centralized firewalls that they're deploying within their data centers. The hard part of that is, if you want all this traffic to be secured, you're actually sending it out of the server, up through the rack, usually to a different location in the data center, and back. So with the need for agility, with the need for performance, right, that adds a lot of latency. Plus, when you start needing to scale, that means adding more and more network connections, more and more appliances. So it can get very costly, as well as impacting the performance. The other way that organizations are seeking to solve this problem is by taking the software itself and deploying it on the servers. So that's a great approach, right? It brings it really close to the applications. But the things you start running into there — there's a couple of things. One is that you start seeing the DevOps teams take on that networking and security responsibility, which they— >>Don't want to do. >>They don't want to do, right. And the operations team loses a little bit of visibility into that. Plus, when you load the software onto the server, you're taking up precious CPU cycles. So if you really want your applications to perform at an optimized state, having additional software on there isn't going to do it.
So when we think about all those types of things, right, certainly one side effect of that is the impact on performance, but there's also a cost. So if you have to buy more servers because your CPUs are being utilized, right, and you have hundreds or thousands of servers, those costs are going to add up. So what Nvidia and Pluribus have done by working together is to be able to take some of those services and deploy them onto a SmartNIC, right? >>To be able to deploy the DPU-based SmartNIC into the servers themselves. And then Pluribus has come in and said, we're going to create that unified fabric across the networking space, into those networking services, all the way down to the server. So the benefits of having that are pretty clear, in that you're offloading that capability from the server, so your CPUs are optimized and you're saving a lot of money. You're not having to go outside of the server and go to a different rack somewhere else in the data center, so your performance is going to be optimized as well. You're not going to incur any latency hit for every round trip to the firewall and back. So I think all those things are really important. Plus, from an organizational aspect — we talked about the DevOps and NetOps teams — the network operations teams now can work with the security teams to establish the security policies and the networking policies, so that the DevOps teams don't have to worry about that. Essentially they just create the guardrails and let the DevOps team run, because that's what they want: they want that agility and speed. >>Yeah. Your point about CPU cycles is key. I mean, it's estimated that 25 to 30% of CPU cycles in the data center are wasted — the cores are wasted doing storage offload or networking or security offload. And I've said many times, everybody needs a Nitro like Amazon's got, but you can only buy Amazon Nitro if you go into AWS, right? Everybody needs a Nitro. So is that how we should think about this? >>Yeah, that's a great analogy to think about this. And I think I would take it a step further, because it's almost the opposite end of the spectrum, in that Pluribus and Nvidia are doing this in a very open way. Pluribus has always been a proponent of open networking, and so what they're trying to do is extend that now to these distributed services. So, leveraging the work with Nvidia, who's also open as well, they're able to bring that to bear so that organizations can not only take advantage of these distributed services, but also that unified networking fabric, that unified cloud fabric, across that environment, from the server across the switches. The other key piece of what Pluribus is doing, because they've been doing this for a while now, and they've been doing it with the older application environments and the older server environments, is they're able to provide that unified networking experience across a host of different types of servers and platforms. So you can have not only the modern applications supported, but also the legacy environments — bare metal, any type of virtualization, you can run containers, et cetera. So a wide gamut of different technologies hosting those applications, supported by a unified cloud fabric from Pluribus. >>So what does that mean for the customer? I don't have to rip and replace my whole infrastructure, right? >>Yeah.
Well, think about what it does, again, from that operational efficiency standpoint. When you're going from a legacy environment to that modern environment, it helps with the migration, helps you accelerate that migration, because you're not switching between different management systems to accomplish that. You've got the same unified networking fabric that you've been working with to enable you to run your legacy workloads as well as transfer over to those modern applications. Okay? >>So your people are comfortable with the skillsets, et cetera. All right, I'll give you the last word. Give us the bottom line here. >>So yeah, I think obviously with all the modern applications that are coming out, the distributed application environments, it's really posing a lot of risk on these organizations to be able to get not only security, but also visibility, into those environments. And so organizations have to find solutions. As I said at the beginning, they're looking to drive operational efficiency. So getting operational efficiency from a unified cloud networking solution that goes from the server, across the switches, to multiple different environments — right, different cloud environments — is certainly going to help organizations drive that operational efficiency. It's going to help them save money, and it delivers visibility, security, and even open networking. So a great opportunity for organizations, especially large enterprises and cloud providers who are trying to build that hyperscaler-like environment. You mentioned the Nitro card, right? This is a great way to do it with an open solution. >>Bob, thanks so much for coming in and sharing your insights. Appreciate it. >>You're welcome. Thanks. >>Thanks for watching the program today. Remember, all these videos are available on demand at thecube.net. You can check out all the news from today at siliconangle.com, and of course, pluribusnetworks.com. Many thanks to Pluribus for making this program possible and sponsoring theCUBE. This is Dave Vellante. Thanks for watching. Be well, we'll see you next time.
Rajesh Pohani and Dan Stanzione | CUBE Conversation, February 2022
(contemplative upbeat music) >> Hello and welcome to this CUBE Conversation. I'm John Furrier, your host of theCUBE, here in Palo Alto, California. Got a great topic on expanding capabilities for urgent computing. Dan Stanzione, he's Executive Director of TACC, the Texas Advanced Computing Center, and Rajesh Pohani, VP of PowerEdge, HPC Core Compute at Dell Technologies. Gentlemen, welcome to this CUBE Conversation. >> Thanks, John. >> Thanks, John, good to be here. >> Rajesh, you got a lot of computing in PowerEdge, HPC, Core Computing. I mean, I get a sense that you love compute, so we'll jump right into it. And of course, I got to love TACC, Texas Advanced Computing Center. I can imagine a lot of stuff going on there. Let's start with TACC. What is the Texas Advanced Computing Center? Tell us a little bit about that. >> Yeah, we're part of the University of Texas at Austin here, and we build large-scale supercomputers, data systems, AI systems, to support open science research. And we're mainly funded by the National Science Foundation, so we support research projects in all fields of science, all around the country and around the world. Actually, several thousand projects at the moment. >> But tied to the university, got a lot of gear, got a lot of compute, got a lot of cool stuff going on. What's the coolest thing you got going on right now? >> Well, for me, it's always the next machine, but I think science-wise, it's the machines we have. We just finished deploying Lonestar6, which is our latest supercomputer, in conjunction with Dell. A little over 600 nodes of those PowerEdge servers that Rajesh builds for us, which makes more than 20,000 that we've had here over the years, of those boxes. But that one just went into production. We're designing new systems for a few years from now, where we'll be even larger. Our Frontera system was top five in the world two years ago, just fell out of the top 10. So we've got to fix that and build the new top-10 system sometime soon. We always have a ton going on in large-scale computing. >> Well, I want to get to Lonestar6 in a minute, on the next talk track, but... What are some of the areas that you guys are working on that are making an impact? Take us through, and we talked before we came on camera about, obviously, the academic affiliation, but also there's a real societal impact of the work you're doing. What are some of the key areas where TACC is making an impact? >> So there's really a huge range, from new microprocessors, new materials design, photovoltaics, climate modeling, basic science and astrophysics, and quantum mechanics, and things like that. But I think the nearest-term impacts that people see are what we call urgent computing, which is one of the drivers around Lonestar and some other recent expansions that we've done. And that's things like, there's a hurricane coming, exactly where is it going to land? Can we refine the area where there's going to be either high winds or storm surge? Can we assess the damage from digital imagery afterwards? Can we direct first responders in the optimal routes? Similarly for earthquakes, and a lot recently, as you might imagine, around COVID. In 2020, we moved almost a third of our resources to doing COVID work, full-time. >> Rajesh, I want to get your thoughts on this, because Dave Vellante and I have been talking about this on theCUBE recently, a lot. Obviously, people see what's going on with cloud technology, but compute and on-premises, private cloud's been growing.
If you look at the hyperscale on-premises and the edge, if you include that in, you're seeing a lot more user consumption on-premises, and now, with 5G, you got edge, you mentioned first responders, Dan. This is now pointing to a new architectural shift. As the VP of PowerEdge and HPC and Core Compute, you got to look at this and go, "Hmm." If compute's going to be everywhere, and in locations, you got to have that compute. How does that all work together? And how do you do advanced computing, when you have these urgent needs, as well as real-time, in a new architecture? >> Yeah, John, I mean, it's a pretty interesting time when you think about some of the changing dynamics and how customers are utilizing compute, and the compute needs in the industry. We're seeing a couple of big trends. One, the distribution of compute outside of the data center — 5G is really accelerating that — and then you're generating so much data, and what you do with it, the insights that come out of it, means we're seeing more and more push to AI, ML, inside the data center. Dan mentioned what he's doing at TACC with computational analysis and some of the work that they're doing. So what you're seeing now is this push of data into the data center and what you do with it, while data is being created out at the edge. And it's actually this interesting dichotomy that we're beginning to see. Dan mentioned some of the work that they're doing in medical and on COVID research. Even at Dell, we're making cycles available for COVID research using our Zenith cluster, that's located in our HPC and AI Innovation Lab. And we continue to partner with organizations like TACC and others on research activities to continue to learn about the virus, how it mutates, and then how you treat it. So if you think about all the things, and data, that's getting created, you're seeing that distribution, and it's really leading to some really cool innovations going forward. >> Yeah, I want to get to that COVID research, but first, you mentioned a few words I want to get out there. You mentioned Lonestar6. Okay, so first, what is Lonestar6? Then we'll get into the system aspect of it. Take us through what that definition is, what is Lonestar6? >> Well, as Dan mentioned, Lonestar6 is a Dell Technologies system that we developed with TACC, it's located at the University of Texas at Austin. It consists of more than 800 Dell PowerEdge 6525 servers that are powered with 3rd Generation AMD EPYC processors. And just to give you an example of the scale of this cluster, it could perform roughly three quadrillion operations per second. That's three petaFLOPS, and to match what Lonestar6 can compute in one second, a person would have to do one calculation every second for a hundred million years. So it's quite a good-size system, and quite a powerful one as well. >> Dan, what's the role that the system plays, you've got petaFLOPS, what, three petaFLOPS, you mentioned? That's a lot of FLOPS! So obviously urgent computing, what's cranking through the system there? Take us through, what's it like? >> Sure, well, there's a mix of workloads on it, and on all our systems. So there's the urgent computing work, right? Fast turnaround, near real-time, whether it's COVID research, or a project now where we bring in MRI data and are doing sort of patient-specific dosing for radiation treatments and chemotherapy, tailored to your tumor, instead of just the sort of general dose for people your size. That all requires sort of real-time turnaround.
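As a quick sanity check on Rajesh's "hundred million years" comparison above, here is the back-of-the-envelope arithmetic in Python, using only the numbers quoted in the conversation (roughly three quadrillion operations per second, and one human calculation per second); the 365.25-day year is an assumption of the check.

```python
# Back-of-the-envelope check of the Lonestar6 comparison quoted above.
ops_per_second = 3e15              # ~three petaFLOPS = three quadrillion ops/sec
human_rate = 1.0                   # one calculation per second
seconds_per_year = 365.25 * 24 * 3600

ops_in_one_second = ops_per_second * 1.0        # work the machine does in one second
human_seconds = ops_in_one_second / human_rate  # time a person would need
human_years = human_seconds / seconds_per_year

print(f"{human_years:.2e} years")  # ~9.5e7, i.e. roughly a hundred million years
```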
There's a lot of AI research going on now; we're incorporating AI in traditional science and engineering research. And that uses an awful lot of data, but also consumes a huge amount of cycles in training those models. And then there's all of our traditional, simulation-based workloads and materials, and digital twins for aircraft and aircraft design, and more efficient combustion, and more efficient photovoltaic materials, or photovoltaic materials without using as much lead, and things like that. And I'm sure I'm missing dozens of other topics, 'cause, like I said, that one really runs every field of science. We've really focused the Lonestar line of systems, and this is obviously the sixth one we built, around our sort of Texas-centric users. It's the UT Austin users, and then with contributions from Texas A&M, and Texas Tech, and the University of Texas system, MD Anderson Healthcare Center, the University of North Texas. So users all around the state, and every research problem that you might imagine, they're into. We're just ramping up a project in disaster information systems that's looking at the probabilities of flooding in coastal Texas and doing... Can we make building code changes to mitigate impact? Do we have to change the standard foundation heights for new construction, to mitigate the increasing storm surges from these sort of slow storms that sit there and rain, like hurricanes didn't used to, but seem to be doing more and more? All those problems will run on Lonestar, and on all the systems to come, yeah. >> It's interesting, you mentioned urgent computing, I love that term, because it could be an event, it could be some slow kind of brewing event like that rain example you mentioned. It could also be, obviously, with the healthcare, and you mentioned COVID earlier. These are urgent, societal challenges, and having that available — the processing capability, the compute, the data. You mentioned digital twins. I can imagine all this new goodness coming from that. Compare that to where we were 10 years ago. I mean, just from a mind-blowing standpoint, you have come so far. Take us through, try to give a context to the level of where we are now, to do this kind of work, and where we were years ago. Can you give us a feel for that? >> Sure, there's a lot of ways to look at that, and how the technology's changed, how we operate around those things, and then sort of what our capabilities are. I think one of the big, first, urgent computing things for us, where we sort of realized we had to adapt to this model of computing, was about 15 years ago with the big BP Gulf oil spill. And suddenly, we were dumping thousands of processors of load to figure out where that oil spill was going to go, and how to do mitigation, and what the potential impacts were, and where you need to put your containment, and things like that. And at that point we thought of it as sort of a rare event. There was another one, that I think was the first real urgent computing one, where the space shuttle was in orbit, and they knew something had hit it during takeoff. And we were modeling, along with NASA and a bunch of supercomputers around the world, the heat shield, and could they make reentry safely? You have until they come back to get that problem done, you don't have months or years to really investigate that.
And so, what we've sort of learned through some of those — the Japanese tsunami was another one, there have been so many over the years — is that, one, these sorts of disasters are happening all the time, right? One thing or another, right? If we're not doing hurricanes, we're doing wildfires and drought threat, if it's not COVID. We got good and ready for COVID through SARS and through the swine flu and through HIV work, and things like that. So it's that we can do the computing very fast, but you need to know how to do the work, right? So we've spent a lot of time, not only being able to deliver the computing quickly, but having the data in place, and having the code in place, and having people who know the methods, who know how to use big computers, right? That's been a lot of what the COVID Consortium, the White House COVID Consortium, has been about over the last few years. And we're actually trying to modify that nationally into a strategic computing reserve, where we're ready to go after these problems, where we've run drills, right? And if there's a train that derails, and there's a chemical spill, and it's near a major city, we have the tools and the data in place to do wind modeling, and we have the terrain ready to go, and all those sorts of things that you need to have to be ready. So we've really changed our sort of preparedness and operational model around urgent computing in the last 10 years. Also, just the way we schedule the system, the ability to sort of segregate between these long-running workflows and the things that are really important — like, we displaced a lot of cancer research to do COVID research. And cancer's still important, but it's less likely that we're going to make an impact in the next two months, right? So we have to shuffle how we operate things, and then just having all that additional capacity. And I think one of the things that's really changed in the models is our ability to use AI to sort of adroitly steer our simulations, or prune the space when we're searching parameters for simulations. So we have the operational changes, the system changes, and then things like adding AI on the scientific side, since we have the capacity to do that kind of thing now, all feed into our sort of preparedness for this kind of stuff. >> Dan, you got me sold, I want to come work with you. Come on, can I join the team over there? It sounds exciting. >> Come on down! We always need good folks around here, so. (laughs) >> Rajesh, when I- >> Almost 200 now, and we're always growing. >> Rajesh, when I hear the stories about kind of the evolution, kind of where the state of the art is, you almost see the innovation trajectory, right? The growth and the learning. Adding machine learning only extends out more capabilities. But also, Dan's kind of pointing out this kind of responsive, rapid compute engine that they could actually deploy with learnings, and then software. So is this a model where anyone can call up and get some cycles to, say, power an autonomous vehicle, or, hey, I want to point the machinery and the cycles at something? Do you guys see this going that direction, or... Because this sounds really, really good. >> Yeah, I mean, one thing that Dan talked about was, it's not just the compute, it's also having the right algorithms, the software, the code, right? The ability to learn. So I think when those are set up, yeah.
I mean, the ability to digitally simulate in any number of industries and areas, advances the pace of innovation, reduces the time to market of whatever a customer is trying to do or research, or even vaccines or other healthcare things. If you can reduce that time through the leverage of compute on doing digital simulations, it just makes things better for society or for whatever it is that we're trying to do, in a particular industry. >> I think the idea of instrumenting stuff is here forever, and also simulations, whether it's digital twins, and doing these kinds of real-time models. Isn't really much of a guess, so I think this is a huge, historic moment. But you guys are pushing the envelope here, at University of Texas and at TACC. It's not just research, you guys got real examples. So where do you guys see this going next? I see space, big compute areas that might need some data to be cranked out. You got cybersecurity, you got healthcare, you mentioned oil spill, you got oil and gas, I mean, you got industry, you got climate change. I mean, there's so much to tackle. What's next? >> Absolutely, and I think, the appetite for computing cycles isn't going anywhere, right? And it's only going to, it's going to grow without bound, essentially. And AI, while in some ways it reduces the amount of computing we do, it's also brought this whole new domain of modeling to a bunch of fields that weren't traditionally computational, right? We used to just do engineering, physics, chemistry, were all super computational, but then we got into genome sequencers and imaging and a whole bunch of data, and that made biology computational. And with AI, now we're making things like the behavior of human society and things, computational problems, right? So there's this sort of growing amount of workload that is, in one way or another, computational, and getting bigger and bigger. So that's going to keep on growing. I think the trick is not only going to be growing the computation, but growing the software and the people along with it, because we have amazing capabilities that we can bring to bear. We don't have enough people to hit all of them at once. And so, that's probably going to be the next frontier in growing out both our AI and simulation capability, is the human element of it. >> It's interesting, when you think about society, right? If the things become too predictable, what does a democracy even look like? If you know the election's going to be over two years from now in the United States, or you look at these major, major waves >> Human companies don't know. >> of innovation, you say, "Hmm." So it's democracy, AI, maybe there's an algorithm for checking up on the AI 'cause biases... So, again, there's so many use cases that just come out of this. It's incredible. >> Yeah, and bias in AI is something that we worry about and we work on, and on task forces where we're working on that particular problem, because the AI is going to take... Is based on... Especially when you look at a deep learning model, it's 100% a product of the data you show it, right? So if you show it a biased data set, it's going to have biased results. And it's not anything intrinsic about the computer or the personality, the AI, it's just data mining, right? In essence, right, it's learning from data. And if you show it all images of one particular outcome, it's going to assume that's always the outcome, right? It just has no choice, but to see that. So how we deal with bias, how do we deal with confirmation, right? 
I mean, in addition, you have to recognize, if you haven't, if it gets data it's never seen before, how do you know it's not wrong, right? So there's about data quality and quality assurance and quality checking around AI. And that's where, especially in scientific research, we use what's starting to be called things like physics-informed or physics-constrained AI, where the neural net that you're using to design an aircraft still has to follow basic physical laws in its output, right? Or if you're doing some materials or astrophysics, you still have to obey conservation of mass, right? So I can't say, well, if you just apply negative mass on this other side and positive mass on this side, everything works out right for stable flight. 'Cause we can't do negative mass, right? So you have to constrain it in the real world. So this notion of how we bring in the laws of physics and constrain your AI to what's possible is also a big part of the sort of AI research going forward. >> You know, Dan, you just, to me just encapsulate the science that's still out there, that's needed. Computer science, social science, material science, kind of all converging right now. >> Yeah, engineering, yeah, >> Engineering, science, >> slipstreams, >> it's all there, >> physics, yeah, mmhmm. >> it's not just code. And, Rajesh, data. You mentioned data, the more data you have, the better the AI. We have a world what's going from silos to open control planes. We have to get to a world. This is a cultural shift we're seeing, what's your thoughts? >> Well, it is, in that, the ability to drive predictive analysis based on the data is going to drive different behaviors, right? Different social behaviors for cultural impacts. But I think the point that Dan made about bias, right, it's only as good as the code that's written and the way that the data is actually brought into the system. So making sure that that is done in a way that generates the right kind of outcome, that allows you to use that in a predictive manner, becomes critically important. If it is biased, you're going to lose credibility in a lot of that analysis that comes out of it. So I think that becomes critically important, but overall, I mean, if you think about the way compute is, it's becoming pervasive. It's not just in selected industries as damage, and it's now applying to everything that you do, right? Whether it is getting you more tailored recommendations for your purchasing, right? You have better options that way. You don't have to sift through a lot of different ideas that, as you scroll online. It's tailoring now to some of your habits and what you're looking for. So that becomes an incredible time-saver for people to be able to get what they want in a way that they want it. And then you look at the way it impacts other industries and development innovation, and it just continues to scale and scale and scale. >> Well, I think the work that you guys are doing together is scratching the surface of the future, which is digital business. It's about data, it's about out all these new things. It's about advanced computing meets the right algorithms for the right purpose. And it's a really amazing operation you guys got over there. Dan, great to hear the stories. It's very provocative, very enticing to just want to jump in and hang out. But I got to do theCUBE day job here, but congratulations on success. Rajesh, great to see you and thanks for coming on theCUBE. >> Thanks for having us, John. >> Okay. >> Thanks very much. 
>> Great conversation around urgent computing, as computing becomes so much more important, bigger problems and opportunities are around the corner. And this is theCUBE, we're documenting it all here. I'm John Furrier, your host. Thanks for watching. (contemplative music)
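The physics-constrained AI approach Dan describes, where a model is penalized for violating conservation laws, can be sketched in a few lines. The example below is a toy, hypothetical illustration in NumPy; the field, the noise level, the penalty weight, and the random-search loop are all invented for the example and are not drawn from any TACC code or workload.

```python
import numpy as np

# Toy 1-D setup: fit a density field rho(x) to noisy observations while
# penalizing any violation of a conserved quantity (total mass).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)
dx = x[1] - x[0]
true_rho = 1.0 + 0.2 * np.sin(2 * np.pi * x)               # "ground truth" field
observed = true_rho + 0.05 * rng.standard_normal(x.size)   # noisy training data
total_mass = true_rho.sum() * dx                           # conserved quantity

def loss(pred, lam=10.0):
    data_term = np.mean((pred - observed) ** 2)            # fit the observations
    physics_term = (pred.sum() * dx - total_mass) ** 2     # obey conservation of mass
    return data_term + lam * physics_term

# Crude random-search "training" loop, just to show the two terms trading off.
best = observed.copy()
best_loss = loss(best)
for _ in range(2000):
    candidate = best + 0.01 * rng.standard_normal(x.size)
    c_loss = loss(candidate)
    if c_loss < best_loss:
        best, best_loss = candidate, c_loss

print(f"mass error of constrained fit: {abs(best.sum() * dx - total_mass):.5f}")
```

The same pattern generalizes: whatever the governing law is (conservation of mass, momentum, energy), it enters as a penalty or hard constraint so the network cannot propose physically impossible answers.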
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dan | PERSON | 0.99+ |
Dan Stanzione | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Rajesh | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Rajesh Pohani | PERSON | 0.99+ |
National Science Foundation | ORGANIZATION | 0.99+ |
TACC | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Texas A&M | ORGANIZATION | 0.99+ |
February 2022 | DATE | 0.99+ |
NASA | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Texas Advanced Computing Center | ORGANIZATION | 0.99+ |
United States | LOCATION | 0.99+ |
2020 | DATE | 0.99+ |
COVID Consortium | ORGANIZATION | 0.99+ |
Texas Tech | ORGANIZATION | 0.99+ |
one second | QUANTITY | 0.99+ |
Austin | LOCATION | 0.99+ |
Texas | LOCATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
University of Texas | ORGANIZATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
first | QUANTITY | 0.99+ |
HPC | ORGANIZATION | 0.99+ |
AI Innovation Lab | ORGANIZATION | 0.99+ |
University of North Texas | ORGANIZATION | 0.99+ |
PowerEdge | ORGANIZATION | 0.99+ |
two years ago | DATE | 0.99+ |
White House COVID Consortium | ORGANIZATION | 0.99+ |
more than 20,000 | QUANTITY | 0.99+ |
10 years ago | DATE | 0.98+ |
Dell Technologies | ORGANIZATION | 0.98+ |
Texas Advanced Computing Center | ORGANIZATION | 0.98+ |
more than 800 | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
dozens | QUANTITY | 0.97+ |
PowerEdge 6525 | COMMERCIAL_ITEM | 0.97+ |
one calculation | QUANTITY | 0.96+ |
MD Anderson Healthcare Center | ORGANIZATION | 0.95+ |
top 10 | QUANTITY | 0.95+ |
first responders | QUANTITY | 0.95+ |
One | QUANTITY | 0.94+ |
AMD | ORGANIZATION | 0.93+ |
HIV | OTHER | 0.92+ |
Core Compute | ORGANIZATION | 0.92+ |
over two years | QUANTITY | 0.89+ |
Lonestar | ORGANIZATION | 0.88+ |
last 10 years | DATE | 0.88+ |
every second | QUANTITY | 0.88+ |
Gulf Oil spill | EVENT | 0.87+ |
Almost 200 | QUANTITY | 0.87+ |
a hundred million years | QUANTITY | 0.87+ |
Lonestar6 | COMMERCIAL_ITEM | 0.86+ |
Dave Brown, AWS | AWS re:Invent 2021
(bright music) >> Welcome back everyone to theCUBE's coverage of AWS re:Invent 2021 in person. So a live event, physical in-person, also virtual hybrid. So a lot of great action online, check out the website. All the videos are there on theCUBE, as well as what's going on all of the actions on site and theCUBE's here. I'm John Furrier, your host with Dave Vellante, my cohost. Finally, we've got David Brown, VP of Elastic Compute Cloud. EC2, the bread and butter. Our favorite part of Amazon. David, great to have you back on theCUBE in person. >> John, it's great to be back. It's the first time I'd been on theCUBE in person as well. A lot of virtual events with you guys, but it's amazing to be back at re:Invent. >> We're so excited for you. I know, Matt Garman and I've talked in the past. We've talked in the past. EC2 is just an amazing product. It's always been the core block of AWS. More and more action happening and developers are now getting more action and there's well, we wrote a big piece about it. What's going on? The Silicon's really paying off. You've got to also general purpose Intel and AMD, and you've got the custom silicon, all working together. What's the new update? Give us a scoop. >> Well, John, it's actually 15 years of EC2 this year and I've been lucky to be on that team for 14 years and so incredible to see the growth. It's been an amazing journey. The thing that's really driven us, two things. One is supporting new workloads. And so what are the workloads that customers have available out there trying to do on the cloud that we don't support and launch new instance types. And that's the first thing. The second one is price performance. How do we give customers more performance at a continuously decreasing price year-over-year? And that's just driven innovation across EC2 over the years with things like Graviton. All of our inferential chips are custom silicon, but also instance types with the latest Intel Ice Lake CPU's, latest Milan. We just announced the AMD Milan instance. It's just constantly innovation across the ever-increasing list of instances. So super exciting. >> So instances become the new thing. Provision an instance, spin up an instance. Instance becomes, and you can get instances, flavors, almost like flavors, right? >> David: Yeah. >> Take us through the difference between an instance and then the EC2 itself. >> That's correct, yeah. So we actually have, by end of the year, right now we have over 475 different instances available to you whether it's GPU accelerators, high-performance computing instances, memory optimized, just enormous number. We'll actually hit 500 by the end of the year, but that is it. I mean, customers are looking for different types of machines and those are the instances. >> So the Custom Silicon, it's one of the most interesting developments. We've written about it. AWS secret weapon is one of them. I wonder if you could take us back to the decision points and the journey. The Annapurna acquisition, you started working with them as a partner, then you said, all right, let's just buy the company. >> David: Yeah. >> And then now, you're seeing the acceleration, your time to tapeout is way, way compressed. Maybe what was the catalyst and maybe we can get into where it's going. >> Yeah, absolutely. Super interesting story 'cause it actually starts all the way back in 2008. In 2008, EC2 had actually been around for just a little under two years. 
And if you remember back then, everybody was like, will virtualize and hypervisors, specialization would never really get you the same performances, what they were calling bare metal back then. Everybody's looking at the cloud. And so we took a look at that. And I mean, network latencies, in some cases with hypervisors were as high as 200 or 300 milliseconds. And it was a number of real challenges. And so we knew that we would have to change the way that virtualization works and get into hardware. And so in 2010, 2011, we started to look at how could I offload my network processing, my IO processing to additional hardware. And that's what we delivered our first Nitro card in 2012 and 2013. We actually offloaded all of the processing of network to a Nitro card. And that Nitro card actually had a Annapurna arm chip on it. Our Nitro 1 chip. >> For the offload? >> The offload card, yeah. And so that's when my team started to code for Arm. We started to work on our Linux works for Arm. We actually had to write our own operating system initially 'cause there weren't any operating systems available we could use. And so that's what we started this journey. And over the years, when we saw how well it worked for networking, we said, let's do it for storage as well. And then we said, Hey, we could actually improve security significantly. And by 2017, we'd actually offloaded 100% of everything we did on that server to our offload cards Leaving a 100% of the server available for customers. And we're still actually the only cloud provider that does that today. >> Just to interject, in the data center today, probably 30% of the general purpose cores are used for offloads. You're saying 0% in the cloud. >> On our nitro instances, so every instance we've launched since 2017, our C5. We use 0% of that central core. And you can actually see that in our instance types. If you look at our largest instance type, you can see that we're giving you 96 cores and we're giving you, and our largest instance, 24 terabytes of memory. We're not giving you 23.6 terabytes 'cause we need some. It's all given to you as the customer. >> So much more efficient, >> Much, much more efficient, much better, better price performance as well. But then ultimately those Nitro chips, we went through Nitro 1, Nitro 2, Nitro 3, Nitro 4. We said, Hey, could we build a general purpose server chip? Could we actually bring Arm into the cloud? And in 2018, we launched the A1 instance, which was our Graviton1 instance. And what we didn't tell people at the time is that it was actually the same chip we were using on our network card. So essentially, it was a network card that we were giving to you as a server. But what it did is it sparked the ecosystem. That's why we put it out there. And I remember before launch, some was saying, is this just going to be a university project? Are we going to see people from big universities using Arm in the cloud? Was it really going to take off? And the response was amazing. The ecosystem just grew. We had customers move to it and immediately begin to see improvements. And we knew that a year later, Graviton2 was going to come out. And Graviton2 was just an amazing chip. It continues to see incredible adoption, 40% price performance improvement over other instances. >> So this is worth calling out because I think that example of the network card, I mean, innovation can come from anywhere. This is what Jassy always would say is do the experiments. Think about the impact of what's going on here. 
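For readers who want to pin down what a "40% price performance improvement" means arithmetically, it is the ratio of throughput-per-dollar on the new instance to throughput-per-dollar on the old one, minus one. A minimal sketch follows; every number in it is made up for illustration and is not AWS benchmark or pricing data.

```python
# Illustrative numbers only -- not measured AWS benchmarks or real prices.
x86_price_per_hour = 0.17       # hypothetical on-demand $/hr for an x86 instance
x86_requests_per_sec = 10_000   # hypothetical benchmark throughput

grv_price_per_hour = 0.136      # hypothetical Graviton $/hr
grv_requests_per_sec = 11_700   # hypothetical throughput on the same benchmark

def perf_per_dollar(throughput, price_per_hour):
    # work delivered per dollar of instance time
    return throughput / price_per_hour

improvement = (perf_per_dollar(grv_requests_per_sec, grv_price_per_hour)
               / perf_per_dollar(x86_requests_per_sec, x86_price_per_hour)) - 1
print(f"price-performance improvement: {improvement:.0%}")  # about 46% with these inputs
```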
You're focused on a mission. Let's get that processing of the lowest cost, pick up some workloads. So you're constantly tinkering with tuning the engine. New discovery comes in. Nitro is born. The chip comes in. But I think the fundamental thing, and I want to get your reaction to this 'cause we've put this out there on our post on Sunday. And I said, in every inflection point, I'm old enough, my birthday was yesterday. I'm old enough to know that. >> David: I saw that. >> I'm old enough to know that in the eighties, the client server shifts. Every inflection point where development changed, the methodology, the mindset or platforms change, all the apps went to the better platform. Who wants to run their application on a slower platform? And so, and those inflects. So now that's happening now, I believe. So you got better performance and I'm imagining that the app developers are coding for it. Take us through how you see that because okay, you're offering up great performance for workloads. Now it's cloud workloads. That's almost all apps. Can you comment on that? >> Well, it has been really interesting to see. I mean, as I said, we were unsure who was going to use it when we initially launched and the adoption has been amazing. Initially, obviously it's always, a lot of the startups, a lot of the more agile companies that can move a lot faster, typically a little bit smaller. They started experimenting, but the data got out there. That 40% price performance was a reality. And not only for specific workloads, it was broadly successful across a number of workloads. And so we actually just had SAP who obviously is an enormous enterprise, supporting enterprises all over the world, announced that they are going to be moving the S/4 HANA Cloud to run on Graviton2. It's just phenomenal. And we've seen enterprises of that scale and game developers, every single vertical looking to move to Graviton2 and get that 40% price performance. >> Now we have to, as analysts, we have to say, okay, how did you get to that 40%? And you have to make some assumptions obviously. And it feels like you still have some dry powder when you looked at Graviton2. I think you were running, I don't know, it's speculated anyway. I don't know if you guys, it's your data, two and a half, 2.5 gigahertz. >> David: Yeah. >> I don't know if we can share what's going on with Graviton3, but my point is you had some dry powder and now with Graviton3, quite a range of performance, 'cause it really depends on the workload. >> David: That's right. >> Maybe you could give some insight as to that. What can you share about how you tuned Graviton3? >> When we look at benchmarking, we don't want to be trying to find that benchmark that's highly tuned and then put out something that is, Hey, this is the absolute best we can get it to and that's 40%. So that 40% is actually just on average. So we just went and ran real world workloads. And we saw some that were 55%. We saw some that were 25. It depends on what it was, but on average, it was around the 35, 45%, and we said 40%. And the great thing about that is customers come back and say, Hey, we saw 40% in this workload. It wasn't that I had to tune it. And so with Graviton3, launching this week. Available in our C7g instance, we said 25%. And that is just a very standard benchmark in what we're seeing. And as we start to see more customer workloads, I think it's going to be incredible to see what that range looks like. 
Graviton2 for single-threaded applications, it didn't give you that much of a performance. That's what we meant by cloud applications, generally, multi-threaded. In Graviton3, that's no longer the case. So we've had some customers report up to 80% performance improvements of Graviton2 to Graviton3 when the application was more of a single-threaded application. So we started to see. (group chattering) >> You have to keep going, the time to market is compressing. So you have that, go ahead, sorry. >> No, no, I always want to add one thing on the difference between single and multi-threaded applications. A lot of legacy, you're single threaded. So this is kind of an interesting thing. So the mainframe, migration stuff, you start to see that. Is that where that comes in the whole? >> Well, a lot of the legacy apps, but also even some of the new apps, like single threading like video transcoding, for example, is all done on a single core. It's very difficult. I mean, almost impossible to do that multi-threaded way. A lot of the crypto algorithms as well, encryption and cryptography is often single core. So with Graviton3, we've seen a significant performance boost for video encoding, cryptographic algorithms, that sort of thing, which really impacts even the most modern applications. >> So that's an interesting point because now single threaded is where the vertical use cases come in. It's not like more general purpose OS kind of things. >> Yeah, and Graviton has already been very broad. I think we're just knocking down the last few verticals where maybe it didn't support it and now it absolutely does. >> And if an ISV then ports, like an SAP's ports to Graviton, then the customer doesn't see any, I mean, they're going to see the performance difference, but they don't have to think about it. >> David: Yeah. >> They just say, I choose that instance and I'm going to get better price performance. >> Exactly, so we've seen that from our ISVs. We've also been doing that with our AWS services. So services like EMR, RDS, Elastic Cache, it will be moving and making Graviton2 available for customers, which means the customer doesn't have to do the migration at all. It's all done for them. They just pick the instance and get the price performance benefits, and so yeah. >> I think, oh, no, that was serverless. Sorry. >> Well, Lambda actually just did launch on Graviton2. And I think they were talking about a 35% price performance improvement. >> Who was that? >> Lambda, a couple of months ago. >> So what does an ISV have to do to port to Graviton. >> It's relatively straightforward, and this is actually one of the things that has slowed customers down is the, wow, that must be a big migration. And that ecosystem that I spoke about is the important part. And today, with all the Linux operating systems being available for Arm running on Graviton2, with all of the container runtimes being available, and then slowly open source applications in ISV is being available. It's actually really, really easy. And we just ran the Graviton2 four-day challenge. And we did that because we actually had an enterprise migrate one of the largest production applications in just four days. Now, I probably wouldn't recommend that to most enterprises that we see is a little too fast, but they could actually do that. >> But just from a numbers standpoint, that's insanely amazing. I mean, when you think about four days. >> Yeah. >> And when we talked on virtually last year, this year, I can't remember now. 
You said, we'll just try it. >> David: That's right. >> And see what happens, so I presume a lot of people have tried it. >> Well, that's my advice. It's the unknown, it's the what will it take? So take a single engineer, tell them and give them a time. Say you have one week, get this running on Graviton2, and I think the results are pretty amazing, very surprised. >> We were one of the first, if not the first to say that Arm is going to be dominant in the enterprise. We know it's dominant in the Edge. And when you look at the performance curves and the time to tape out, it's just astounding. And I don't know if people appreciate that relative to the traditional Moore's law curve. I mean, it's a style. And then when you combine the power of the CPU, the GPU, the NPU, kind of what Apple does in the iPhone, it blows away the historical performance curves. And you're on that curve. >> That's right. >> I wonder if you could sort of explain that. >> So with Graviton, we're optimizing just across every single part of AWS. So one of the nice things is we actually own that end-to-end. So when it starts with the early design of Graviton2 and Graviton3, and we obviously working on other chips right now. We're actually using the cloud to do all of the electronic design automation. So we're able to test with AWS how that Graviton3 chip is going to work long before we've even started taping it out. And so those workloads are running on high-frequency CPU's on Graviton. Actually we're using Graviton to build Graviton now in the cloud. The other thing we're doing is we're making sure that the Annapurna team that's building those CPUs is deeply engaged with my team and we're going to ultimately go and build those instances so that when that chip arrives from tapeout. I'm not waiting nine months or two years, like would normally be the case, but I actually had an instance up and running within a week or two on somebody's desk studying to do the integration. And that's something we've optimized significantly to get done. And so it allows us to get that iteration time. It also allows us to be very, very accurate with our tapeouts. We're not having to go back with Graviton. They're all A1 chips. We're not having to go back and do multiple runs of these things because we can do so much validation and performance testing in the cloud ahead of time. >> This is the epiphany of the Arm model. >> It really is. >> It's a standard. When you send it to the fab, they know what's going to work. You hit volume and it's just no fab. >> Well, this is a great thread. We'll stay on this 'cause Adam told us when we met with them for re:Invent that they're seeing a lot more visibility into use cases at the scale. So the scale gives you an advantage on what instances might work. >> And makes the economics works. >> Makes the economics work, hence the timing, the shrinking time to market, not there, but also for the apps. Talk about the scale advantage you guys have. >> Absolutely. I mean, the scale advantage of AWS plays out in a number of ways for our customers. The first thing is being able to deliver highly optimized hardware. So we don't just look at the Graviton3 CPU, you were speaking about the core count and the frequency and Peter spoke about a lot of that in his keynote yesterday. But we look at how does the Graviton3 CPU work with the rest of the instance. What is the right balance between the CPU and memory? The CPU and the Hydro. What's the performance and the drive? 
We just launched the Nitro SSD, so now we're actually building our own custom SSDs for Nitro, getting better performance, being able to do updates, better security, making it more cloudy. We'd been challenged with the SSDs and the parts we could get. The other place that scale is really helping is in capacity. Being able to make sure that we can absorb things like the COVID spike, or the stuff you see in the financial industry with just enormous demand for compute. We can do that because of our scale. We are able to scale. And the final area is actually in quality, because I have such an enormous fleet, I'm actually able to drive down AFR. So annual failure rates are well below what the mathematical, theoretical entitlement or possibility is. So if you look at what's put on that actual sticker on the box that says you should be able to get a full percent AFR, at scale and with focus, we're actually able to get that down to significantly below what the mathematical entitlement would actually be. >> Yeah, it's incredible. And this is the advantage, and that's why I believe anyone who's writing applications that include a database, data transfer, any kind of execution of code will use the stack. >> Why would they? Really, why? We've seen this, like you said before, whether it was the PC, then the fastest Pentium or somebody. >> Why would you want your app to run slower? >> Unix box, right? ISVs want it to run as fast and as cheaply as possible. Now power plays into it as well. >> Yeah, well, I agree with what you're saying. We do have a number of customers that are still looking to run on x86, but obviously customers that want Windows, Windows isn't available for Arm, and so that's a challenge. They'll continue to do that. And you know, the way we do look at it is, Moore's law kind of died out on us in 2002, 2003. And what I'm hoping is, not necessarily bringing Moore's law back, but that we say, let's not accept the 10%, 15% improvement year-over-year. There's absolutely more we can all be doing. And so I'm excited to see where the x86 world's going, and they're doing a lot of great stuff. Intel Ice Lake's looking amazing. Milan is really great to have in AWS as well. >> Well, I think that's a fair point, 'cause we certainly look at what Pat's doing at Intel and he's remaking the company. I've said he's going to follow the Arm playbook in my mind a little bit, which is the right thing to do. So competition is a good thing. >> David: Absolutely. >> We're excited for you, and great to see Graviton and you guys have this kind of inflection point. We've been tracking it for a while, but now the world's starting to see it. So congratulations to your team. >> David: Thank you. >> Just a couple of things. You guys have some news on instances. Talk about the deprecation issue and how you guys are keeping instances alive real quick. >> Yeah, we're super customer obsessed at Amazon. And so that really drives us. And one of the worst things for us to do is to have to tell a customer that we're no longer supporting a service. We recently actually just deprecated the EC2-Classic network. I'm not sure if you saw that, and that's after 10 years of continuing to support it. And the only reason we did it is we have a tiny percentage of customers still using that from back in 2012. But one of the challenges is obviously instance hardware eventually will ultimately time out and fail and have hardware issues as it gets older and older.
And so we didn't want to be in a place, in EC2, where we would have to constantly go to customers and say that M1 small, that C3, whatever you were running, is no longer supported, please move. That's just a tax that customers shouldn't have to pay. And if they're still getting value out of an older instance, let them keep using it. So we actually just announced at re:Invent, in my keynote on Tuesday, longevity support for EC2 instances, which means we will never come back to you again and ask you to please get off an instance, because we can actually emulate all those instances on our Nitro system. And so all of these instances are starting to migrate to Nitro. You're getting all the benefits of Nitro now for some of our older Xen instances, but also you don't have to worry about that work. That's just not something you need to do, to get off all those instances. >> That's great. That's a great service. Stay on as long as you want. When you're ready to move, move. Okay, final question for you. I know we've got time, I want to get this in. The global network, you guys are known for. AWS Cloud WAN, give us updates on what's going on with that. >> So Werner just announced that in his keynote, and over the last two to three years or so, we've seen a lot of customers starting to use the AWS backbone, which is extensive. I mean, you've seen the slides in Werner's keynote. It really does span the world. I think it's probably one of the largest networks out there. Customers are starting to use that for their branch office communication. So instead of going and provisioning their own international MPLS networks and that sort of thing, they say, let me onboard to AWS with VPN or Direct Connect, and I can actually run on the AWS backbone around the world. Now doing that actually has some complexity. You've got to think about transit gateways. You've got to think about inter-region peering. And AWS Cloud WAN takes all of that complexity away. You essentially create a cloud WAN, connect to it with VPN or Direct Connect, and you can even go and actually set up network segments, so essentially VLANs for different parts of the organization. So super excited to get that out there. >> So the ease of use is the key there. >> Massively easy to use, and we have 26 SD-WAN partners. We're even partnering with folks like Verizon, and Swisscom in Switzerland, the telco, to actually allow them to use it for their customers as well. >> We'll probably use your service someday when we have a global rollout. >> Let's do that, CUBE Global. And then the other was the M1 EC2 instance, which got a lot of applause. >> David: Absolutely. >> M1, I think it was based on A15. >> Yeah, that's for Mac. We've got to be careful 'cause M1 is our first instance as well. >> Yeah right, a little confusion there. >> So it's a Mac. The EC2 Mac is with M1 silicon from Apple, which we're super excited to put out there. >> Awesome. >> David Brown, great to see you in person. Congratulations to you and the team and all the work you guys have done over the years. And now people are starting to realize the cloud platform, the compute just gets better and better. It's a key part of the system. >> Thanks John, it's great to be here. >> Thanks for sharing. >> SiliconANGLE is here. We're talking about custom silicon here on AWS. I'm John Furrier with Dave Vellante. You're watching theCUBE, the global leader in tech coverage. We'll be right back with more coverage from re:Invent after this break. (bright music)
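For teams that want to see which Graviton (arm64) instance types are available before trying a port like the four-day challenge described above, a short boto3 sketch can enumerate them. This assumes working AWS credentials; the region and the filter name are assumptions to verify against the current DescribeInstanceTypes documentation.

```python
import boto3

# Assumes AWS credentials are configured; region and filter name are assumptions to verify.
ec2 = boto3.client("ec2", region_name="us-east-1")
paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(
    Filters=[{"Name": "processor-info.supported-architecture", "Values": ["arm64"]}]
)
for page in pages:
    for itype in page["InstanceTypes"]:
        name = itype["InstanceType"]
        vcpus = itype["VCpuInfo"]["DefaultVCpus"]
        mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
        print(f"{name}: {vcpus} vCPUs, {mem_gib:.0f} GiB")
```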
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
David Brown | PERSON | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
Peter | PERSON | 0.99+ |
Werner | PERSON | 0.99+ |
Swisscom | ORGANIZATION | 0.99+ |
Matt Garman | PERSON | 0.99+ |
John | PERSON | 0.99+ |
2008 | DATE | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
Adam | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Switzerland | LOCATION | 0.99+ |
Dave Brown | PERSON | 0.99+ |
Sunday | DATE | 0.99+ |
40% | QUANTITY | 0.99+ |
30% | QUANTITY | 0.99+ |
2010 | DATE | 0.99+ |
14 years | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
2011 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
15 years | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
2002 | DATE | 0.99+ |
2012 | DATE | 0.99+ |
15% | QUANTITY | 0.99+ |
25 | QUANTITY | 0.99+ |
23.6 terabytes | QUANTITY | 0.99+ |
nine months | QUANTITY | 0.99+ |
Tuesday | DATE | 0.99+ |
10 years | QUANTITY | 0.99+ |
10% | QUANTITY | 0.99+ |
96 cores | QUANTITY | 0.99+ |
two years | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
four days | QUANTITY | 0.99+ |
2018 | DATE | 0.99+ |
55% | QUANTITY | 0.99+ |
2013 | DATE | 0.99+ |
2017 | DATE | 0.99+ |
200 | QUANTITY | 0.99+ |
2003 | DATE | 0.99+ |
24 terabytes | QUANTITY | 0.99+ |
Pat | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
one week | QUANTITY | 0.99+ |
four-day | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
25% | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
two and a half | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
yesterday | DATE | 0.99+ |
a year later | DATE | 0.99+ |
first | QUANTITY | 0.99+ |
Elastic Compute Cloud | ORGANIZATION | 0.99+ |
500 | QUANTITY | 0.99+ |
William Choe & Shane Corban | Aruba & Pensando Announce New Innovations
(intro music playing) >> Hello everyone, and welcome to the power of n, where HPE Aruba and Pensando are changing the game, the way customers scale with the cloud, and what's next in the evolution in switching. Hey everyone, I'm John Furrier with theCUBE, and I'm here with Shane Corban, director of technical product management at Pensando, and William Choe, vice president of product management, Aruba HPE. Gentlemen, thank you for coming on and doing a deep dive and going into the big news. So the first question I want to ask you guys is, what do you guys see from a market and customer perspective that kicked this project off, and the amazing results over the past year or so? Where did it all come from? >> No, it's a great question, John. So when we were doing our homework, there were actually three very clear customer challenges. First, security threats were largely spawned within the perimeter. In fact, Forrester highlighted 80% of threats originate within the internal network. Secondly, workloads are largely distributed, creating a ton of east-west traffic. And then lastly, network services such as firewalls, load balancers, VPN aggregators are expensive, they're centralized, and they ultimately result in service chaining complexity. >> John: So, so, >> John: Go ahead, Shane. >> Yeah. Additionally, when we spoke to our customers after launching the distributed services platform initially, these compliance challenges clearly became apparent to us, and while they saw the architectural value of adopting what the largest public cloud providers have done by putting a smart NIC in each compute node to provide these stateful services, enterprise customers were still struggling with the need to upgrade fleets and brownfield servers and the associated per-node cost of adding a smart NIC to every compute node. Typically the traffic volumes on a per-node basis within an enterprise data center are significantly lower than cloud. Thus, we saw an opportunity here to, in conjunction with Aruba, develop a new category of switching product to share the processing capabilities of our unique intellectual property around our DPU across a rack of servers that, net net, delivers the same set of services through a new category of platform, enabling a distributed services architecture, and ultimately addressing the compliance challenges and generating huge TCO and ROI for customers. >> You know, one of the things that we've been reporting on with you guys, as well as the cloud scale, is the volume of data and just the performance and scale. I think the timing of this partnership and the product development is right on point. And you've got the edge right around the corner, a more distributed nature of cloud operations, a huge change in the marketplace. So great timing on the origination story there. Great stuff. Tell me more about the platform itself, the details, what's under the hood, the hardware, OS, what are the specs? >> Yeah, so we started with a very familiar premise. Aruba customers are already leveraging CX with an edge-to-cloud common operating model in deploying leaf and spine networks. Plus we're excited to introduce the industry's first distributed services switch, where the first configuration has 48 25-gig ports with hundred-gig uplinks, running the Aruba CX cloud native operating system with the Pensando ASIC and software inside, enabling layer four through seven stateful services. Shane, do you want to elaborate?
Yeah, let me elaborate on that a little bit further. You know, as we spoke to customers, existing platforms and how customers were seeking to address these challenges are inherently limited by the ASIC die size, and that does limit their scale, performance, and ability to deliver truly stateful functions in a traditional switching platform. Architecturally, from the ground up, when we developed our DPU, first and second generation, we built it with stateful services in mind from the get-go. We leveraged a clean-slate design with our P4-programmable DPU. We've evolved to our seven nanometer-based DPU right now, which is essentially enabling software in silicon. And this has generated a new level of performance, scale, flexibility, and capability in terms of services. This serves as the foundation for our 200 gig card, which we're taking the largest cloud providers into production with. And the DPU itself is designed inherently to process and track stateful connections and stateful flows at very, very large scale without impacting performance. And in fact, two of these DPUs serve as the services foundation of the CX 10K, and this is how we enable stateful functions in a switching platform: functions like stateful network firewalling, stateful segmentation, enhanced programmable telemetry, which we believe will bring a whole lot of value to our customers. And this is a platform that's inherently programmable from the ground up. We can build on and leverage this platform for new use cases around encryption, enabling stateful load balancing, stateful NAT, to name a few. But the key message here is, this is a platform built with the next generation of architectures in mind, programmable at all layers of the stack, and that's what makes it fundamentally different than anything else. >> I want to just double click on that if you don't mind, before we get to the competitive question, because I think you brought up the state thing. I think this is worth calling out, if you guys don't mind commenting more on this state issue, because this is big. Cloud native developers right now want speed, they're shifting left at the CICD pipeline with programmability. So going down and having the programmability, and having state, is a really big deal. Can you guys just expand on that a little bit more, why it's important and how hard it really is to pull off? >> I can start, I guess. It's very hard to pull off because of the sheer amount of connections you need to track. When you're developing something like a stateful firewall or a stateful load balancer, a key component of that is managing the connections at very, very large scale and understanding what's happening with those connections at scale, without impacting application performance. And this is fundamentally different for a traditional switching platform, regardless of how it's deployed today; ASICs don't typically process and manage state like this. Memory resources within the chip aren't sufficient, and the policy scale that you can implement on a platform isn't sufficient to fundamentally enable deployable firewalling, or load balancing, or other stateful services. >> That's exactly right.
And so the other kind of key point here is that, if you think about the sophistication of different security threats, it does really require you to be able to look at the entire packet, and more so be able to look at the entire flow and be able to log that history, so that you can get much better heuristics around different anomalies and security threats that are emerging today. >> That's a great, great point. Thanks for bringing that extra point out. I would just add to this, we're reporting this all the time on SiliconANGLE and theCUBE: the automation wave that's coming around data, you know, it's a center of data, not a data center, as we heard earlier on in the presentation. Data drives automation, and having that enabled with state is a real big deal. So, I think that's really worth calling out. Now, I've got to ask the competition question, how is this different? I mean, this is an evolution. I would say, it's a revolution. You guys are being humble, but how is this different from what customers can deploy today? >> Architecturally, if you take a look at it, we've spoken about the technology and what's fundamentally unique in the platform and the architecture. But foundationally, when customers deploy stateful services, they're typically deployed leveraging traditional big-box appliances for east-west, or workload-based agents, which seek to implement stateful security for east-west traffic. Architecturally, what we're enabling is stateful services like firewalling and segmentation that can scale with the fabric and are delivered at the optimal point for east-west, which is the leaf or access layer of the network. And we do this for any type of workload. Be it deployed on a virtualized compute node, be it deployed on a containerized worker node, be it deployed on bare metal, agnostic of topology, it can be in the access layer of a three tier design in a data center. It can be in the leaf layer of a VXLAN EVPN-based fabric, but the goal is it's all centrally managed through a single point of orchestration and control, which William will talk about shortly. The goal of this is to drive down the TCO of your data center as a whole, by allowing you to retire legacy appliances that are deployed in an east-west role, and not utilize host-based agents, and thus save a whole lot of money. We've modeled on the order of 60 to 70% savings for a traditional data center pod design of a thousand compute nodes, which we'll be publishing. And as we go forward, additional services, as we mentioned, like encryption: this platform has the capability to terminate up to 800 gigs of line-rate encryption, IPsec VPN, per platform, plus stateful NAT and load balancing, and this is all functionality we'll be adding to this existing platform because it's programmable, as we've mentioned, from the ground up. >> What are some of the use cases? What are the top use cases, what's the low hanging fruit, and where does this go? You've got service providers, enterprises. What are the types of customers you guys see implementing? >> Yeah, that's what's really exciting about the CX 10,000. We actually see customer interest from all types of different markets, whether it be higher education, service providers to financial services, basically all enterprise verticals with private cloud or edge data centers.
For example, it could be a hospital, a big box retailer, or a colo such as Equinix. So it's really the CX 10,000 that creates a new switching category, enabling stateful services in that leaf node right at the workload, unifying network and security automation and policy management. Second, the CX 10,000 greatly improves security posture and eliminates the need for hair-pinning east-west traffic all the way back to the centralized deployments. Lastly, as Shane highlighted, there's a 70% TCO savings by eliminating that appliance sprawl and ultimately collapsing network and security operations. >> I love the category creation vibe here. Love it. And also the technical and the cloud alignment's great. But how do the customers manage all this? Okay, I got a new category. I just put the box in, throw away some other ones? I mean, how does this all get done? And how do the customers manage all this? >> Yeah, so we're looking to build on top of the Aruba Fabric Composer. It's already familiar to our customers, and it already provides compute, storage, and network automation, with broad ecosystem integrations such as VMware vSphere and vCenter, as well as Nutanix Prism. So aligned with the CX 10,000 launch, now you have a fabric composer with unified security and policy orchestration and management, with the ability to define firewall policies efficiently and provide that telemetry to a collector such as Splunk. >> John: So the customer environments right now involve a lot of multi-vendor and new frameworks, obviously, cloud native. How does this fit into the customer's existing environment with the ecosystem? How do they get going here? >> Yeah, great question. Our customers can get going easily; we've built a flexible platform that can be deployed in either greenfield or brownfield. Obviously it's a best-of-breed architecture for distributed services we're building in conjunction with Aruba, but if customers want to gradually integrate this into their existing environments and they're using other vendors' spines or cores, this can be inserted seamlessly as a leaf or access-tier switch to deliver the exact same set of services within that architecture. So it plugs in seamlessly because it supports all the standard control plane protocols, VXLAN EVPN, and traditional three-tier designs easily. Now, for any enterprise solution deployment, it's critical that you build a holistic ecosystem around it. It's clear that this will gate customer deployments, and the ecosystem being diverse and rich is very, very important. And as part of our integrations with the controller, we're building a broad suite of integrations across threat detection, application dependency mapping, SIEM and SOAR, and DevOps infrastructure-as-code tools. (inaudible) And it's clear if you look at these categories of integrations, you know, XDR or threat detection requires full telemetry from within the data center. That's been hard to accomplish to date because you typically need agents on your compute nodes, or firewalls, to give you the visibility into what's going on for east-west flows. Now, our platform can natively provide full visibility into all flows east-west in the data center. And this can become the source of telemetry truth that these ML and XDR engines require to work.
The other aspect of ecosystem is around application dependency mapping. The single core challenge with deploying segmentation east-west is understanding the rules to put in, right? First is, how do you insert the service device in such a way that it won't add more complexity? We don't add any complexity because we're inline natively. Then, how do you understand the environment well enough to build the rules that are necessary to do segmentation? We integrate with tools like Guardicore; we provide our flow logs as a source of data, and they can provide rule recommendations and policy recommendations for customers. We're building integrations around SIEM and SOAR with tools like Splunk and Elasticsearch that will allow NetOps and SecOps teams to visualize, trend, and manage the services delivered by the CX 10K. And the other aspect of ecosystem, from a security standpoint, is clearly how do I get policy from these traditional appliances and enforce it on this next generation architecture that you've built, that can enable stateful services. So we're building integrations with tools like Tufin and AlgoSec, third-party sources of policy that we can ingest and enforce on the infrastructure, allowing you to gradually migrate to this new architecture over time. >> John: It's really a cloud native switch. I mean, you solve people's problems, pain points, but yet positioned for growth. I mean, that's my takeaway, but I got to ask you guys both, what's the takeaway for the customers? Because it's not that simple for them, I mean, they have complicated environments. (all giggling) >> Yeah, I think it's really simple. You know, every 10 years or so, we see major evolutions in the data center and the switching environment, and we do believe we've created a new category with the distributed services switch, delivering cloud-scale distributed services where the workloads reside, greatly simplifying network and security provisioning and operations with the Aruba Fabric Composer, while improving security posture and the TCO. But that's not all folks, it's a journey, right Shane? >> Yeah, it's absolutely a journey. And this is the first step in a long journey with a great partner like Aruba. There's other platforms, hundred or 400 gig hardware platforms, that we're looking at, and then there's additional services that we can enable over time, allowing customers to drive even more TCO value out of the platform and the architecture: services like encryption for securing the cloud on-ramp, services like stateful load balancing to deploy east-west in the data center. And, you know, holistically that's the goal, deliver value for customers. And we believe we have an architecture and a platform, and this is a first step in a long journey. >> It's a great way of putting it. I'll just ask one final question for both of you as product leaders: you've got to be excited having a category creation product here in this market, this big wave, but what's your thoughts? >> Yeah, exactly right, it doesn't happen that often, and so we're all in. It's exciting to be able to work with a great team like Pensando and Shane here. So we're really, really excited about this launch. >> Yeah, it's awesome. The team is great. It's a great partnership between Pensando and Aruba. You know, we look forward to delivering value for our joint customers.
>> John: Thank you both for sharing under the hood and more details on the product. Thanks for coming on. >> [William And Shane] Thank you. >> Okay. The next evolution in switching. I'm John Furrier here with the power of n, HPE Aruba and Pensando changing the game, the way customers scale up in the cloud and networking. Thanks for watching. (music playing)
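Shane's point about why stateful services are hard, tracking per-connection state for huge numbers of flows without slowing the data path, can be illustrated with a toy, in-memory connection tracker. The sketch below is purely illustrative Python; the real CX 10000 does this in DPU hardware at far larger scale, and none of the names, states, or thresholds here come from the product.

```python
import time
from dataclasses import dataclass, field

# A flow is keyed by its 5-tuple: (src_ip, dst_ip, src_port, dst_port, protocol).
FlowKey = tuple

@dataclass
class FlowState:
    state: str = "NEW"              # NEW -> SYN_SEEN -> ESTABLISHED -> CLOSED
    packets: int = 0
    bytes: int = 0
    last_seen: float = field(default_factory=time.time)

class ConnTracker:
    """Toy connection table; a DPU keeps millions of these entries in hardware."""

    def __init__(self, idle_timeout: float = 300.0):
        self.table: dict = {}
        self.idle_timeout = idle_timeout

    def observe(self, key: FlowKey, length: int, tcp_flags: set) -> FlowState:
        flow = self.table.setdefault(key, FlowState())
        flow.packets += 1
        flow.bytes += length
        flow.last_seen = time.time()
        if "SYN" in tcp_flags and flow.state == "NEW":
            flow.state = "SYN_SEEN"
        elif "ACK" in tcp_flags and flow.state == "SYN_SEEN":
            flow.state = "ESTABLISHED"
        if "FIN" in tcp_flags or "RST" in tcp_flags:
            flow.state = "CLOSED"
        return flow

    def expire_idle(self) -> int:
        # Aging out idle flows is what keeps the table bounded.
        now = time.time()
        stale = [k for k, f in self.table.items() if now - f.last_seen > self.idle_timeout]
        for k in stale:
            del self.table[k]
        return len(stale)

# Example: one TCP handshake observed by the tracker.
tracker = ConnTracker()
key = ("10.0.0.5", "10.0.1.9", 49152, 443, "tcp")
tracker.observe(key, 60, {"SYN"})
tracker.observe(key, 60, {"ACK"})
print(tracker.table[key].state)  # ESTABLISHED
```

The hard part at scale is not the state machine itself but doing these lookups and updates at line rate for millions of concurrent flows, which is why it moves into the DPU rather than a general-purpose CPU.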