
Search Results for Zen 3:

Dilip Ramachandran, AMD & Juergen Zimmermann, Dell


 

(bright upbeat music) >> Welcome to theCUBE's continuing coverage of AMD's fourth generation EPYC launch, along with the way that Dell has integrated this technology into its PowerEdge server lines. We're in for an interesting conversation today. Today, I'm joined by Dilip Ramachandran, Senior Director of Marketing at AMD, and Juergen Zimmermann. Juergen is Principal SAP Solutions Performance Benchmarking Engineer at Dell. Welcome, gentlemen. >> Welcome. >> Thank you David, nice to be here. >> Nice to meet you too, welcome to theCUBE. You will officially be CUBE alumni after this. Dilip, let's start with you. What's this all about? Tell us about AMD's recent launch and the importance of it. >> Thanks, David. I'm excited to actually talk to you today, AMD, at our fourth generation EPYC launch last month in November. And as part of that fourth generation EPYC launch, we announced industry-leading performance based on 96 cores, based on Zen 4 architecture. And new interfaces, PCIe Gen 5, as well as DDR5. Incredible amount of memory bandwidth, memory capacity supported, and a whole lot of other features as well. So we announced this product, we launched it in November last month. And we've been closely working with Dell on a number of benchmarks that we'd love to talk to you more about today. >> So just for some context, when was the last release of this scale? So when was the third generation released? How long ago? >> The third generation EPYC was launched in Q1 of 2021. So it was almost 18 to 24 months ago. And since then we've made a tremendous jump, the fourth generation EPYC, in terms of number of cores. So third generation EPYC supported 64 cores, fourth generation EPYC supports 96 cores. And these are new cores, the Zen 4 cores, the fourth generation of Zen cores. So very high performance, new interfaces, and really world-class performance. >> Excellent. Well, we'll go into greater detail in a moment, but let's go to Juergen. Tell us about the testing that you've been involved with to kind of prove out the benefits of this new AMD architecture. >> Yeah, well, the testing is SAP Standard Performance benchmark, the SAP SD two tier. And this is more or less a industry standard benchmark that is used to size your service for the needs of SAP. Actually, SAP customers always ask the vendors about the SAP benchmark and the SAPS values of their service. >> And I should have asked you before, but give us a little bit of your background working with SAP. Have you been doing this for longer than a week? >> Yeah, yeah, definitely, I do this for about 20 years now. Started with Sun Microsystems, and interestingly in the year 2003, 2004, I started working with AMD service on SAP with Linux, and afterwards parted the SAP application to Solaris AMD, also with AMD. So I have a lot of tradition with SAP and AMD benchmarks, and doing this ever since then. >> So give us some more detail on the results of the recent testing, and if you can, tell us why we should care? >> (laughs) Okay, the recent results actually also surprised myself, they were so good. So I initially installed the benchmark kit, and couldn't believe that the server is just getting, or hitting idle by the numbers I saw. So I cranked up the numbers and reached results that are most likely double the last generation, so Zen 3 generation, and that even passed almost all 8-socket systems out there. So if you want to have the same SAP performance, you can just use 2-socket AMD server instead of any four or 8-socket servers out there. 
And this is a tremendous saving in energy. >> So you just mentioned savings in terms of power consumption, which is a huge consideration. What are the sort of end user results that this delivers in terms of real world performance? How is a human being at the end of a computer going to notice something like this? >> So actually the results are like that you get almost 150,000 users concurrently accessing the system, and get their results back from SAP within one second response time. >> 150,000 users, you said? >> 150,000 users in parallel. >> (laughs) Okay, that's amazing. And I think it's interesting to note that, and I'll probably say this a a couple of times. You just referenced third generation EPYC architecture, and there are a lot of folks out there who are two generations back. Not everyone is religiously updating every 18 months, and so for a fair number of SAP environments, this is an even more dramatic increase. Is that a fair thing to say? >> Yeah, I just looked up yesterday the numbers from generation one of EPYC, and this was at about 28,000 users. So we are five times the performance now, within four years. Yeah, great. >> So Dilip, let's dig a little more into the EPYC architecture, and I'm specifically also curious about... You mentioned PCIe Gen five, or 5.0 and all of the components that plug into that. You mentioned I think faster DDR. Talk about that. Talk about how all of the components work together to make when Dell comes out with a PowerEdge server, to make it so much more powerful. >> Absolutely. So just to spend a little bit more time on this particular benchmark, the SAP Sales and Distribution benchmark. It's a widely used benchmark in the industry to basically look at how do I get the most performance out of my system for a variety of SAP business suite applications. And we touched upon it earlier, right, we are able to beat a performance of 4-socket and 8-socket servers out there. And you know, it saves energy, it saves cost, better TCO for the data center. So we're really excited to be able to support more users in a single server and meeting all the other dual socket and 4-socket combinations out there. Now, how did we get there, right, is more the important question. So as part of our fourth generation EPYC, we obviously upgraded our CPU core to provide much better single third performance per core. And at the socket level, you know, when you're packing 96 cores, you need to be able to feed these cores, you know, from a memory standpoint. So what we did was we went to 12 channels of memory, and these are DDR5 memory channels. So obviously you get much better bandwidth, higher speed of the memory with DDR5, you know, starting at 4,800 megahertz. And you're also now able to have more channels to be able to send the data from the memory into the CPU subsystem, which is very critical to keep the CPUs busy and active, and get the performance out. So that's on the memory side. On the data side, you know, we do have PCIe Gen five, and any data oriented applications that take data either from the PCIe drives or the network cards that utilize Gen five that are available in the industry today, you can actually really get data into the system through the PCIe I/O, either again, through the disk, or through the net card as well. So those are other ways to actually also feed the CPU subsystem with data to be processed by the CPU complex. So we are, again, very excited to see all of this coming together, and as they say, proof's in the pudding. 
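For context on the memory discussion above, here is a rough sketch of the peak bandwidth that 12 channels of DDR5-4800 implies, assuming the standard 8-byte channel width. This is a theoretical ceiling, not a measured figure.

```python
# Back-of-the-envelope peak memory bandwidth for a 12-channel DDR5-4800 socket.
# Theoretical ceiling only; sustained bandwidth in practice is lower.

channels = 12                # DDR5 memory channels per socket, as discussed above
transfers_per_sec = 4_800e6  # DDR5-4800 -> 4,800 mega-transfers per second
bytes_per_transfer = 8       # 64-bit data bus per channel = 8 bytes

peak_bytes_per_sec = channels * transfers_per_sec * bytes_per_transfer
print(f"Peak theoretical bandwidth: {peak_bytes_per_sec / 1e9:.0f} GB/s per socket")
# -> roughly 460 GB/s per socket
```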
You know, Juergen talked about it. How over generation after generation we've increased the performance, and now with our fourth generation EPYC, we are absolutely leading world-class performance on the SAP Sales and Distribution benchmark. >> Dilip, I have another question for you, and this may be a bit of a PowerEdge and beyond question. What are you seeing, or what are you anticipating in terms of end user perception when they go to buy a new server? Obviously server is a very loose term, and they can be configured in a bunch of different ways. But is there a discussion about ROI and TCO that's particularly critical? Because people are going to ask, "Well, wait a minute. If it's more expensive than the last one that I bought, am I getting enough bang for my buck?" Is that going to be part of the conversation, especially around power and cooling and things like that? >> Yeah, absolutely. You know, every data center decision maker has to ask the question, "Why should I upgrade? Should I stay with legacy hardware, or should I go into the latest and greatest that AMD offers?" And the advantages that the new generation products bring are much better performance at much better energy consumption levels, as well as much better performance per dollar. So when you do the upgrade, you are actually getting, you know, savings in terms of performance per dollar, as well as savings in space, because you can consolidate your work into fewer servers 'cause you have more cores. As we talked about, typically you might do it on a four or 8-socket server, which is really expensive. You can consolidate down to a 2-socket server, which is much cheaper. And maintenance costs are much lower as well. All of this, performance, power, maintenance costs, all of that translates into better TCO, right. So high performance, lower power, and lower maintenance costs translate to much better TCO for the end user. And that's an important equation that all customers pay attention to. And you know, we love to work with them and demonstrate those TCO benefits to them. >> Juergen, talk to us more in general about what Dell does from a PowerEdge perspective to make sure that Dell is delivering the best infrastructure possible for SAP. In general, I mean, I assume that this is a big responsibility of yours, making sure that the stuff runs properly and, if not, fixing it. So tell us about that relationship between Dell and SAP. >> Yeah, for Dell and SAP actually, we're more or less partners with SAP. We have people sitting in SAP's Linux lab, working cooperatively with SAP, also with Linux partners like SUSE and Red Hat. And we are in constant exchange about what's new in Linux, what's new on our side. We're all a big family here. >> So when the new architecture comes out and they send it to Juergen, the boys back at the plant as they say, or the factory to use Formula One terms, are waiting with bated breath to hear what Juergen says about the results. So just to kind of recap again, you know, the specific benchmarks that you were running. Tell us about that again. >> Yeah, the specific benchmark is the SAP Sales and Distribution benchmark. And for SAP, this is the benchmark that needs to be tested, and it shows the performance of the whole system.
So in contrast to benchmarks that only check if the CPU is running, this tests the whole system, from the network stack and the storage stack to the memory subsystem and the OS running on the CPUs. >> Okay, which makes perfect sense, since Dell is delivering an integrated system and not just CPU technology. You know, on that subject, Dilip, do you have any insights into performance numbers that you're hearing about with fourth generation EPYC for other database environments? >> Yeah, we have actually worked together with Dell on a variety of benchmarks, both on the latest fourth generation EPYC processors as well as the preceding one, the third generation EPYC processors. And published a bunch of world records on database, particularly I would say TPC-H, TPCx-V, as well as TPCx-HS and TPCx-IoT. So a number of TPC related benchmarks that really showcase performance for database and related applications. And we've collaborated very closely with Dell on these benchmarks and published a number of them already, and you know, a number of them are world records as well. So again, we're very excited to collaborate with Dell on the SAP Sales and Distribution benchmark, as well as other benchmarks that are related to database. >> Well, speaking of other benchmarks, here at theCUBE we're going to be talking to actually quite a few people, looking at this fourth generation EPYC launch from a whole bunch of different angles. You two gentlemen have shed light on some really good pieces of that puzzle. I want to thank you for being on theCUBE today. With that, I'd like to thank all of you for joining us here on theCUBE. Stay tuned for continuing CUBE coverage of AMD's fourth generation EPYC launch, and Dell PowerEdge strategy to leverage it.
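To put the numbers from this conversation side by side, here is a small sanity check of the generation-over-generation claims. The benchmark figures are the speakers' own; the arithmetic below simply restates them.

```python
# Restating the SAP SD numbers quoted in the conversation above.
gen1_users = 28_000    # "about 28,000 users" on first-generation EPYC, per Juergen
gen4_users = 150_000   # "almost 150,000 users" on fourth-generation EPYC

speedup = gen4_users / gen1_users
print(f"Roughly {speedup:.1f}x the SD benchmark users in about four years")  # ~5.4x

# Consolidation angle: if a 2-socket Zen 4 box matches a legacy 8-socket box,
# the same SD load needs 4x fewer sockets, plus the power that goes with them.
sockets_before, sockets_after = 8, 2
print(f"Socket consolidation: {sockets_before // sockets_after}x fewer sockets")
```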

Published Date : Dec 8 2022



Brad Smith, AMD & Rahul Subramaniam, Aurea CloudFix | AWS re:Invent 2022


 

(calming music) >> Hello and welcome back to fabulous Las Vegas, Nevada. We're here at AWS re:Invent day three of our scintillating coverage here on theCUBE. I'm Savannah Peterson, joined by John Furrier. John Day three energy's high. How you feeling? >> I dunno, it's day two, day three, day four. It feels like day four, but again, we're back. >> Who's counting? >> Three pandemic levels in terms of 50,000 plus people? Hallways are packed. I got pictures. People don't believe it. It's actually happening. Then people are back. So, you know, and then the economy is a big question too and it's still, people are here, they're still building on the cloud and cost is a big thing. This next segment's going to be really important. I'm looking forward to this next segment. >> Yeah, me too. Without further ado let's welcome our guests for this segment. We have Brad from AMD and we have Rahul from you are, well you do a variety of different things. We'll start with CloudFix for this segment, but we could we could talk about your multiple hats all day long. Welcome to the show, gentlemen. How you doing? Brad how does it feel? We love seeing your logo above our stage here. >> Oh look, we love this. And talking about re:Invent last year, the energy this year compared to last year is so much bigger. We love it. We're excited to be here. >> Yeah, that's awesome. Rahul, how are you feeling? >> Excellent, I mean, I think this is my eighth or ninth re:Invent at this point and it's been fabulous. I think the, the crowd, the engagement, it's awesome. >> You wouldn't know there's a looming recession if you look at the activity but yet still the reality is here we had an analyst on yesterday, we were talking about spend more in the cloud, save more. So that you can still use the cloud and there's a lot of right sizing, I call you got to turn the lights off before you go to bed. Kind of be more efficient with your infrastructure as a theme. This re:Invent is a lot more about that now. Before it's about the glory days. Oh yeah, keep building, now with a little bit of pressure. This is the conversation. >> Exactly and I think most companies are looking to figure out how to innovate their way out of this uncertainty that's kind of on everyone's head. And the only way to do it is to be able to be more efficient with whatever your existing spend is, take those savings and then apply them to innovating on new stuff. And that's the way to go about it at this point. >> I think it's such a hot topic, for everyone that we're talking about. I mean, total cost optimization figuring out ways to be more efficient. I know that that's a big part of your mission at CloudFix. So just in case the audience isn't versed, give us the pitch. >> Okay, so a little bit of background on this. So the other hat I wear is CTO of ESW Capital. We have over 150 enterprise software companies within the portfolio. And one of my jobs is also to manage and run about 40 to 45,000 AWS accounts of our own. >> Casual number, just a few, just a couple pocket change, no big deal. >> And like everyone else here in the audience, yeah we had a problem with our costs, just going out of control and as we were looking at a lot of the tools to help us kind of get more efficient one of the biggest issues was that while people give you a lot of recommendations recommendations are way too far from realized savings. And we were running through the challenge of how do you take recommendation and turn them into real savings and multiple different hurdles. 
The short story being, we had to create CloudFix to actually realize those savings. So we took AWS recommendations around cost, filtered them down to the ones that are completely non-disruptive in nature, implemented those as simple automations that everyone could just run and realize those savings right away. We then took those savings and then started applying them to innovating and doing new interesting things with that money. >> Is there a best practice in your mind that you see merging in this time? People start more focused on it. Is there a method or a purpose kind of best practice of how to approach cost optimization? >> I think one of the things that most people don't realize is that cost optimization is not a one and done thing. It is literally nonstop. Which means that, on one hand AWS is constantly creating new services. There are over a hundred thousand API at this point of time How to use them right, how to use them efficiently You also have a problem of choice. Developers are constantly discovering new services discovering new ways to utilize them. And they are behaving in ways that you had not anticipated before. So you have to stay on top of things all the time. And really the only way to kind of stay on top is to have automation that helps you stay on top of all of these things. So yeah, finding efficiencies, standardizing your practices about how you leverage these AWS services and then automating the governance and hygiene around how you utilize them is really the key >> Brad tell me what this means for AMD and what working with CloudFix and Rahul does for your customers. >> Well, the idea of efficiency and cost optimization is near and dear to our heart. We have the leading. >> It's near and dear to everyone's heart, right now. (group laughs) >> But we are the leaders in x86 price performance and density and power efficiency. So this is something that's actually part of our core culture. We've been doing this a long time and what's interesting is most companies don't understand how much more efficiency they can get out of their applications aside from just the choices they make in cloud. but that's the one thing, the message we're giving to everybody is choice matters very much when it comes to your cloud solutions and just deciding what type of instance types you choose can have a massive impact on your bottom line. And so we are excited to partner with CloudFix, they've got a great model for this and they make it very easier for our customers to help identify those areas. And then AMD can come in as well and then help provide additional insight into those applications what else they can squeeze out of it. So it's a great relationship. >> If I hear you correctly, then there's more choice for the customers, faster selection, so no bad choices means bad performance if they have a workload or an app that needs to run, is that where you you kind of get into the, is that where it is or more? >> Well, I mean from the AMD side right now, one of the things they do very quickly is they identify where the low hanging fruit is. So it's the thing about x86 compatibility, you can shift instance types instantly in most cases without any change to your environment at all. And CloudFix has an automated tool to do that. And that's one thing you can immediately have an impact on your cost without having to do any work at all. And customers love that. 
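As an aside on the x86-compatibility point: below is a minimal sketch of the kind of non-disruptive check being described, flagging running instances whose family has a like-for-like AMD variant. The family mapping is a small illustrative subset, and this is an assumption-laden sketch, not CloudFix's actual tooling.

```python
# Sketch: flag running EC2 instances whose Intel-based family has a drop-in AMD
# ("a"-suffixed) equivalent. Illustrative only -- the mapping is a small subset
# and this is not CloudFix's implementation.
import boto3

AMD_EQUIVALENT = {          # x86-compatible swaps; the size suffix carries over
    "m5": "m5a", "m6i": "m6a",
    "r5": "r5a", "r6i": "r6a",
    "c5": "c5a", "c6i": "c6a",
    "t3": "t3a",
}

def amd_candidates(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                family, _, size = inst["InstanceType"].partition(".")
                if family in AMD_EQUIVALENT:
                    yield (inst["InstanceId"], inst["InstanceType"],
                           f"{AMD_EQUIVALENT[family]}.{size}")

if __name__ == "__main__":
    for instance_id, current, suggested in amd_candidates():
        print(f"{instance_id}: {current} -> consider {suggested}")
```

Any real migration would still gate on workload validation; the point of the sketch is just that the candidate list can be produced mechanically rather than by hand.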
>> What's the alternative if this doesn't exist they have to go manually figure it out or it gets them in the face or they see the numbers don't work or what's the, if you don't have the tool to automate what's the customer's experience >> The alternative is that you actually have people look at every single instance of usage of resources and try and figure out how to do this. At cloud scale, that just doesn't make sense. You just can't. >> It's too many different options. >> Correct The reality is that your resources your human resources are literally your most expensive part of your budget. You want to leverage all the amazing people you have to do the amazing work. This is not amazing work. This is mundane. >> So you free up all the people time. >> Correct, you free up wasting their time and resources on doing something that's mundane, simple and should be automated, because that's the only way you scale. >> I think of you is like a little helper in the background helping me save money while I'm not thinking about it. It's like a good financial planner making you money since we're talking about the economy >> Pretty much, the other analogy that I give to all the technologists is this is like garbage collection. Like for most languages when you are coding, you have these new languages that do garbage collection for you. You don't do memory management and stuff where developers back in the day used to do that. Why do that when you can have technology do that in an automated manner for you in an optimal way. So just kind of freeing up your developer's time from doing this stuff that's mundane and it's a standard best practice. One of the things that we leverage AMD for, is they've helped us define the process of seamlessly migrating folks over to AMD based instances without any major disruptions or trying to minimize every aspect of disruption. So all the best practices are kind of borrowed from them, borrowed from AWS in most other cases. And we basically put them in the automation so that you don't ever have to worry about that stuff. >> Well you're getting so much data you have the opportunity to really streamline, I mean I love this, because you can look across industry, across verticals and behavior of what other folks are doing. Learn from that and apply that in the background to all your different customers. >> So how big is the company? How big is the team? >> So we have people in about 130 different countries. So we've completely been remote and global and actually the cloud has been one of the big enablers of that. >> That's awesome, 130 countries. >> And that's the best part of it. I was just telling Brad a short while ago that's allowed us to hire the best talent from across the world and they spend their time building new amazing products and new solutions instead of doing all this other mundane stuff. So we are big believers in automation not only for our world. And once our customers started asking us about or telling us about the same problem that they were having that's when we actually took what we had internally for our own purpose. We packaged it up as CloudFix and launched it last year at re:Invent. >> If the customers aren't thinking about automation then they're going to probably have struggle. They're going to probably struggle. I mean with more data coming in you see the data story here more data's coming in, more automation. 
And this year Brad price performance, I've heard the word price performance more this year at re:Invent than any other year I've heard it before, but this year, price performance not performance, price performance. So you're starting to hear that dialogue of squeeze, understand the use cases use the right specialized processor instance starting to see that evolve. >> Yeah and and there's so much to it. I mean, AMD right out of the box is any instance is 10% less expensive than the equivalent in the market right now on AWS. They do a great job of maximizing those products. We've got our Zen four core general processor family just released in November and it's going to be a beast. Yeah, we're very excited about it and AWS announced support for it so we're excited to see what they deliver there too. But price performance is so critical and again it's going back to the complexity of these environments. Giving some of these enterprises some help, to help them understand where they can get additional value. It goes well beyond the retail price. There's a lot more money to be shaved off the top just by spending time thinking about those applications. >> Yeah, absolutely. I love that you talked about collaboration we've been talking about community. I want to acknowledge the AWS super fans here, standing behind the stage. Rahul, I know that you are an AWS super fan. Can you tell us about that community and the program? >> Yeah, so I have been involved with AWS and building products with AWS since 2007. So it's kind of 15 years back when literally there were just a handful of API for launching EC2 instances and S3. >> Not the a hundred thousand that you mentioned earlier, my goodness, the scale. >> So I think I feel very privileged and honored that I have been part of that journey and have had to learn or have had the opportunity to learn both from successes and failures. And it's just my way of contributing back to that community. So we are part of the FinOps foundation as well, contributing through that. I run a podcast called AWS Insiders and a livestream called AWS Made Easy. So we are trying to make sure that people out there are able to understand how to leverage AWS in the best possible way. And yeah, we are there to help and hold their hand through it. >> Talk about the community, take a minute to explain to the audience watching the community around this cost optimization area. It's evolving, you mentioned FinOps. There's a whole large community developing, of practitioners and technologists coming together to look at this. What does this all mean? Talk about this community. >> So cost management within organizations is has evolved so drastically that organizations haven't really coped with it. Historically, you've had finance teams basically buy a lot of infrastructure, which is CapEx and the engineering teams had kind of an upper bound on what they would spend and where they would spend. Suddenly with cloud, that's kind of enabled so much innovation all of a sudden, everyone's realized it, five years was spent figuring out whether people should be on the cloud or not. That's no longer a question, right. Everyone needs to be in the cloud and I think that's a no-brainer. The problem there is that suddenly your operating model has moved from CapEx to OpEx. And organizations haven't really figured out how to deal with it. Finance now no longer has the controls to control and manage and forecast costs. 
Engineering has never had to deal with it in the past and suddenly now they have to figure out how to do all this finance stuff. And procurement finds itself in a very awkward way position because they are no longer doing these negotiations like they were doing in the past where it was okay right up front before you engage, you do these negotiations. Now it's kind of an ongoing thing and it's constantly changing. Like every day is different. >> And you got marketplace >> And you got marketplace. So it's a very complex situation and I think what we are trying to do with the FinOps foundation is try and take a lot of the best practices across organizations that have been doing this at least for the last 10, 15 years. Take all the learnings and failures and turn them into hopefully opinionated approaches that people can take organizations can take to navigate through this faster rather than kind of falter and then decide that oh, this is not for us. >> Yeah. It's a great model, it's a great model. >> I know it's time John, go ahead. >> All right so, we got a little bumper sticker exercise we used to say what's the bumper sticker for the show? We used to say that, now we're modernizing, we're saying if you had to do an Instagram reel right now, short hot take of what's going on at re:Invent this year with AMD or CloudFix or just in general what would be the sizzle reel, that would be on Instagram or TikTok, go. >> Look, I think when you're at re:Invent right now and number one the energy is fantastic. 23 is going to be a building year. We've got a lot of difficult times ahead financially but it's the time, the ones that come out of 23 stronger and more efficient, and cost optimize are going to survive the long run. So now's the time to build. >> Well done, Rahul let's go for it. >> Yeah, so like Brad said, cost and efficiencies at the top of everyone's mind. Stuff that's the low hanging fruit, easy, use automation. Apply your sources to do most of the innovation. Take the easiest part to realizing savings and operate as efficiently as you possibly can. I think that's got to be key. >> I think they nailed it. They both nailed it. Wow, well it was really good. >> I put you on our talent list of >> And alright, so we repeat them. Are you part of our host team? I love this, I absolutely love this Rahul we wish you the best at CloudFix and your 17 other jobs. And I am genuinely impressed. Do you sleep actually? Last question. >> I do, I do. I have an amazing team that really helps me with all of this. So yeah, thanks to them and thank you for having us here. >> It's been fantastic. >> It's our pleasure. And Brad, I'm delighted we get you both now and again on our next segment. Thank you for being here with us. >> Thank you very much. >> And thank you all for tuning in to our live coverage here at AWS re:Invent, in fabulous Sin City with John Furrier, my name's Savannah Peterson. You're watching theCUBE, the leader in high tech coverage. (calm music)
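For a rough sense of scale on the roughly 10% out-of-the-box price difference Brad mentioned earlier, here is a purely hypothetical estimate. The fleet size and hourly rate are made up for illustration and are not AWS list prices.

```python
# Hypothetical illustration of the "10% less expensive out of the box" point.
fleet_size = 200            # hypothetical always-on instances
hourly_rate = 0.40          # hypothetical on-demand $/hour for the current type
price_delta = 0.10          # ~10% lower price for an AMD-based equivalent

hours_per_year = 24 * 365
annual_spend = fleet_size * hourly_rate * hours_per_year
annual_savings = annual_spend * price_delta
print(f"Annual spend ~${annual_spend:,.0f}; switching saves ~${annual_savings:,.0f}/year")
```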

Published Date : Nov 30 2022



Next Gen Servers Ready to Hit the Market


 

(upbeat music) >> The market for enterprise servers is large and it generates well north of $100 billion in annual revenue, and it's growing consistently in the mid to high single digit range. Right now, like many segments, the market for servers is, it's like slingshotting, right? Organizations, they've been replenishing their install bases and upgrading, especially at HQs coming out of the isolation economy. But the macro headwinds, as we've reported, are impacting all segments of the market. CIOs, you know, they're tapping the brakes a little bit, sometimes quite a bit and being cautious with both capital expenditures and discretionary opex, particularly in the cloud. They're dialing it down and just being a little bit more, you know, cautious. The market for enterprise servers, it's dominated as you know, by x86 based systems with an increasingly large contribution coming from alternatives like ARM and NVIDIA. Intel, of course, is the largest supplier, but AMD has been incredibly successful competing with Intel because of its focus, it's got an outsourced manufacturing model and its innovation and very solid execution. Intel's frequent delays with its next generation Sapphire Rapid CPUs, now slated for January 2023 have created an opportunity for AMD, specifically AMD's next generation EPYC CPUs codenamed Genoa will offer as many as 96 Zen 4 cores per CPU when it launches later on this month. Observers can expect really three classes of Genoa. There's a standard Zen 4 compute platform for general purpose workloads, there's a compute density optimized Zen 4 package and then a cache optimized version for data intensive workloads. Indeed, the makers of enterprise servers are responding to customer requirements for more diversity and server platforms to handle different workloads, especially those high performance data-oriented workloads that are being driven by AI and machine learning and high performance computing, HPC needs. OEMs like Dell, they're going to be tapping these innovations and try to get to the market early. Dell, in particular, will be using these systems as the basis for its next generation Gen 16 servers, which are going to bring new capabilities to the market. Now, of course, Dell is not alone, there's got other OEM, you've got HPE, Lenovo, you've got ODMs, you've got the cloud players, they're all going to be looking to keep pace with the market. Now, the other big trend that we've seen in the market is the way customers are thinking about or should be thinking about performance. No longer is the clock speed of the CPU the soul and most indicative performance metric. There's much more emphasis in innovation around all those supporting components in a system, specifically the parts of the system that take advantage, for example, of faster bus speeds. We're talking about things like network interface cards and RAID controllers and memories and other peripheral devices that in combination with microprocessors, determine how well systems can perform and those kind of things around compute operations, IO and other critical tasks. Now, the combinatorial factors ultimately determine the overall performance of the system and how well suited a particular server is to handling different workloads. So we're seeing OEMs like Dell, they're building flexibility into their offerings and putting out products in their portfolios that can meet the changing needs of their customers. Welcome to our ongoing series where we investigate the critical question, does hardware matter? 
My name is Dave Vellante, and with me today to discuss these trends and the things that you should know about for the next generation of server architectures is former CTO from Oracle and EMC and adjunct faculty and Wharton CTO Academy, David Nicholson. Dave, always great to have you on "theCUBE." Thanks for making some time with me. >> Yeah, of course, Dave, great to be here. >> All right, so you heard my little spiel in the intro, that summary, >> Yeah. >> Was it accurate? What would you add? What do people need to know? >> Yeah, no, no, no, 100% accurate, but you know, I'm a resident nerd, so just, you know, some kind of clarification. If we think of things like microprocessor release cycles, it's always going to be characterized as rolling thunder. I think 2023 in particular is going to be this constant release cycle that we're going to see. You mentioned the, (clears throat) excuse me, general processors with 96 cores, shortly after the 96 core release, we'll see that 128 core release that you referenced in terms of compute density. And then, we can talk about what it means in terms of, you know, nanometers and performance per core and everything else. But yeah, no, that's the main thing I would say, is just people shouldn't look at this like a new car's being released on Saturday. This is going to happen over the next 18 months, really. >> All right, so to that point, you think about Dell's next generation systems, they're going to be featuring these new AMD processes, but to your point, when you think about performance claims, in this industry, it's a moving target. It's that, you call it a rolling thunder. So what does that game of hopscotch, if you will, look like? How do you see it unfolding over the next 12 to 18 months? >> So out of the gate, you know, slated as of right now for a November 10th release, AMD's going to be first to market with, you know, everyone will argue, but first to market with five nanometer technology in production systems, 96 cores. What's important though is, those microprocessors are going to be resident on motherboards from Dell that feature things like PCIe 5.0 technology. So everything surrounding the microprocessor complex is faster. Again, going back to this idea of rolling thunder, we expect the Gen 16 PowerEdge servers from Dell to similarly be rolled out in stages with initial releases that will address certain specific kinds of workloads and follow on releases with a variety of systems configured in a variety of ways. >> So I appreciate you painting a picture. Let's kind of stay inside under the hood, if we can, >> Sure. >> And share with us what we should know about these kind of next generation CPUs. How are companies like Dell going to be configuring them? How important are clock speeds and core counts in these new systems? And what about, you mentioned motherboards, what about next gen motherboards? You mentioned PCIe Gen 5, where does that fit in? So take us inside deeper into the system, please. >> Yeah, so if you will, you know, if you will join me for a moment, let's crack open the box and look inside. It's not just microprocessors. Like I said, they're plugged into a bus architecture that interconnect. How quickly that interconnect performs is critical. Now, I'm going to give you a statistic that doesn't require a PhD to understand. When we go from PCIe Gen 4 to Gen 5, which is going to be featured in all of these systems, we double the performance. So just, you can write that down, two, 2X. 
The performance is doubled, but the numbers are pretty staggering in terms of giga transactions per second, 128 gigabytes per second of aggregate bandwidth on the motherboard. Again, doubling when going from 4th Gen to 5th Gen. But the reality is, most users of these systems are still on PCIe Gen 3 based systems. So for them, just from a bus architecture perspective, you're doing a 4X or 8X leap in performance, and then all of the peripherals that plug into that faster bus are faster, whether it's RAID control cards from RAID controllers or storage controllers or network interface cards. Companies like Broadcom come to mind. All of their components are leapfrogging their prior generation to fit into this ecosystem. >> So I wonder if we could stay with PCIe for a moment and, you know, just understand what Gen 5 brings. You said, you know, 2X, I think we're talking bandwidth here. Is there a latency impact? You know, why does this matter? And just, you know, this premise that these other components increasingly matter more, Which components of the system are we talking about that can actually take advantage of PCIe Gen 5? >> Pretty much all of them, Dave. So whether it's memory plugged in or network interface cards, so communication to the outside world, which computer servers tend to want to do in 2022, controllers that are attached to internal and external storage devices. All of them benefit from this enhancement and performance. And it's, you know, PCI express performance is measured in essentially bandwidth and throughput in the sense of the numbers of transactions per second that you can do. It's mind numbing, I want to say it's 32 giga transfers per second. And then in terms of bandwidth, again, across the lanes that are available, 128 gigabytes per second. I'm going to have to check if it's gigabits or gigabytes. It's a massive number. And again, it's double what PCIe 4 is before. So what does that mean? Just like the advances in microprocessor technology, you can consolidate massive amounts of work into a much smaller footprint. That's critical because everything in that server is consuming power. So when you look at next generation hardware that's driven by things like AMD Genoa or you know, the EPYC processors, the Zen with the Z4 microprocessors, for every dollar that you're spending on power and equipment and everything else, you're getting far greater return on your investment. Now, I need to say that we anticipate that these individual servers, if you're out shopping for a server, and that's a very nebulous term because they come in all sorts of shapes and sizes, I think there's going to be a little bit of sticker shock at first until you run the numbers. People will look at an individual server and they'll say, wow, this is expensive and the peripherals, the things that are going into those slots are more expensive, but you're getting more bang for your buck. You're getting much more consolidation, lower power usage and for every dollar, you're getting a greater amount of performance and transactions, which translates up the stack through the application layer and, you know, out to the end user's desire to get work done. >> So I want to come back to that, but let me stay on performance for a minute. You know, we all used to be, when you'd go buy a new PC, you'd be like, what's the clock speed of that? And so, when you think about performance of a system today and how measurements are changing, how should customers think about performance in these next gen systems? 
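As a quick sanity check on those figures, here is the arithmetic assuming a full x16 link at PCIe 5.0's 32 GT/s per lane with 128b/130b encoding. These are theoretical link rates, not measured throughput.

```python
# PCIe 5.0 x16 back-of-the-envelope, using the figures quoted above.
gt_per_sec = 32e9                 # 32 GT/s per lane (PCIe 5.0)
lanes = 16
encoding_efficiency = 128 / 130   # 128b/130b line encoding

per_direction = gt_per_sec * lanes * encoding_efficiency / 8   # bits -> bytes
print(f"~{per_direction / 1e9:.0f} GB/s per direction")                     # ~63 GB/s
print(f"~{2 * per_direction / 1e9:.0f} GB/s aggregate (both directions)")   # ~126 GB/s
# The "128 gigabytes per second" aggregate figure quoted above is the raw rate
# before encoding overhead; PCIe 4.0 runs at 16 GT/s per lane, so Gen 5 doubles it.
```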
And where does that, again, where does that supporting ecosystem play? >> So if you are really into the speeds and feeds and what's under the covers, from an academic perspective, you can go in and you can look at the die size that was used to create the microprocessors, the clock speeds, how many cores there are, but really, the answer is look at the benchmarks that are created through testing, especially from third party organizations that test these things for workloads that you intend to use these servers for. So if you are looking to support something like a high performance environment for artificial intelligence or machine learning, look at the benchmarks as they're recorded, as they're delivered by the entire system. So it's not just about the core. So yeah, it's interesting to look at clock speeds to kind of compare where we are with regards to Moore's Law. Have we been able to continue to track along that path? We know there are physical limitations to Moore's Law from an individual microprocessor perspective, but none of that really matters. What really matters is what can this system that I'm buying deliver in terms of application performance and user requirement performance? So that's what I'd say you want to look for. >> So I presume we're going to see these benchmarks at some point, I'm hoping we can, I'm hoping we can have you back on to talk about them. Is that something that we can expect in the future? >> Yeah, 100%, 100%. Dell, and I'm sure other companies, are furiously working away to demonstrate the advantages of this next gen architecture. If I had to guess, I would say that we are going to see quite a few world records set because of the combination of things, like faster network interface cards, faster storage cards, faster memory, more memory, faster cache, more cache, along with the enhanced microprocessors that are going to be delivered. And you mentioned this is, you know, AMD is sort of starting off this season of rolling thunder and in a few months, we'll start getting the initial entries from Intel also, and we'll be able to compare where they fit in with what AMD is offering. I'd expect OEMs like Dell to have, you know, a portfolio of products that highlight the advantages of each processor's set. >> Yeah, I talked in my open Dave about the diversity of workloads. What are some of those emerging workloads and how will companies like Dell address them in your view? >> So a lot of the applications that are going to be supported are what we think of as legacy application environments. A lot of Oracle databases, workloads associated with ERP, all of those things are just going to get better bang for their buck from a compute perspective. But what we're going to be hearing a lot about and what the future really holds for us that's exciting is this arena of artificial intelligence and machine learning. These next gen platforms offer performance that allows us to do things in areas like natural language processing that we just couldn't do before cost effectively. So I think the next few years are going to see a lot of advances in AI and ML that will be debated in the larger culture and that will excite a lot of computer scientists. So that's it, AI/ML are going to be the big buzzwords moving forward. >> So Dave, you talked earlier about this, some people might have sticker shocks. So some of the infrastructure pros that are watching this might be, oh, okay, I'm going to have to pitch this, especially in this, you know, tough macro environment. 
I'm going to have to sell this to my CIO, my CFO. So what does this all mean? You know, if they're going to have to pay more, how is it going to affect TCO? How would you pitch that to your management? >> As long as you stay away from per unit cost, you're fine. And again, we don't have necessarily, or I don't have necessarily insider access to street pricing on next gen servers yet, but what I do know from examining what the component suppliers tell us is that, these systems are going to be significantly more expensive on a per unit basis. But what does that mean? If the server that you're used to buying for five bucks is now 10 bucks, but it's doing five times as much work, it's a great deal, and anyone who looks at it and says, 10 bucks? It used to only be five bucks, well, the ROI and the TCO, that's where all of this really needs to be measured and a huge part of that is going to be power consumption. And along with the performance tests that we expect to see coming out imminently, we should also be expecting to see some of those ROI metrics, especially around power consumption. So I don't think it's going to be a problem moving forward, but there will be some sticker shock. I imagine you're going to be able to go in and configure a very, very expensive, fully loaded system on some of these configurators online over the next year. >> So it's consolidation, which means you could do more with less. It's going to be, or more with the same, it's going to be lower power, less cooling, less floor space and lower management overhead, which is kind of now you get into staff, so you're going to have to sort of identify how the staff can be productive in other areas. You're probably not going to fire people hopefully. But yeah, it sounds like it's going to be a really consolidation play. I talked at the open about Intel and AMD and Intel coming out with Sapphire Rapids, you know, of course it's been well documented, it's late but they're now scheduled for January. Pat Gelsinger's talked about this, and of course they're going to try to leapfrog AMD and then AMD is going to respond, you talked about this earlier, so that game is going to continue. How long do you think this cycle will last? >> Forever. (laughs) It's just that, there will be periods of excitement like we're going to experience over at least the next year and then there will be a lull and then there will be a period of excitement. But along the way, we've got lurkers who are trying to disrupt this market completely. You know, specifically you think about ARM where the original design point was, okay, you're powered by a battery, you have to fit in someone's pocket. You can't catch on fire and burn their leg. That's sort of the requirement, as opposed to the, you know, the x86 model, which is okay, you have a data center with a raised floor and you have a nuclear power plant down the street. So don't worry about it. As long as an 18-wheeler can get it to where it needs to be, we'll be okay. And so, you would think that over time, ARM is going to creep up as all destructive technologies do, and we've seen that, we've definitely seen that. But I would argue that we haven't seen it happen as quickly as maybe some of us expected. And then you've got NVIDIA kind of off to the side starting out, you know, heavy in the GPU space saying, hey, you know what, you can use the stuff we build for a whole lot of really cool new stuff. So they're running in a different direction, sort of gnawing at the traditional x86 vendors certainly. 
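David's five-bucks-versus-ten-bucks example above boils down to cost per unit of work. Here is that arithmetic, using his illustrative prices rather than real server pricing.

```python
# Cost per unit of work, using the illustrative "5 bucks vs 10 bucks" example above.
old_price, old_work = 5.0, 1.0    # legacy server: $5, 1 unit of work
new_price, new_work = 10.0, 5.0   # next-gen server: $10, 5x the work

old_cost_per_work = old_price / old_work   # $5.00 per unit of work
new_cost_per_work = new_price / new_work   # $2.00 per unit of work
savings = 1 - new_cost_per_work / old_cost_per_work
print(f"Cost per unit of work drops {savings:.0%}")   # 60% lower, despite 2x sticker price
```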
>> Yes, so I'm glad- >> That's going to be forever. >> I'm glad you brought up ARM and NVIDIA, I think, but you know, maybe it hasn't happened as quickly as many thought, although there's clearly pockets and examples where it is taking shape. But this to me, Dave, talks to the supporting cast. It's not just about the microprocessor unit anymore, specifically, you know, generally, but specifically the x86. It's the supporting, it's the CPU, the NPU, the XPU, if you will, but also all those surrounding components that, to your earlier point, are taking advantage of the faster bus speeds. >> Yeah, no, 100%. You know, look at it this way. A server used to be measured, well, they still are, you know, how many U of rack space does it take up? You had pizza box servers with a physical enclosure. Increasingly, you have the concept of a server in quotes being the aggregation of components that are all plugged together that share maybe a bus architecture. But those things are all connected internally and externally, especially externally, whether it's external storage, certainly networks. You talk about HPC, it's just not one server. It's hundreds or thousands of servers. So you could argue that we are in the era of connectivity and the real critical changes that we're going to see with these next generation server platforms are really centered on the bus architecture, PCIe 5, and the things that get plugged into those slots. So if you're looking at 25 gig or 100 gig NICs and what that means from a performance and/or consolidation perspective, or things like RDMA over Converged Ethernet, what that means for connecting systems, those factors will be at least as important as the microprocessor complexes. I imagine IT professionals going out and making the decision, okay, we're going to buy these systems with these microprocessors, with this number of cores in memory. Okay, great. But the real work starts when you start talking about connecting all of them together. What does that look like? So yeah, the definition of what constitutes a server and what's critically important I think has definitely changed. >> Dave, let's wrap. What can our audience expect in the future? You talked earlier about you're going to be able to get benchmarks, so that we can quantify these innovations that we've been talking about, bring us home. >> Yeah, I'm looking forward to taking a solid look at some of the performance benchmarking that's going to come out, these legitimate attempts to set world records and those questions about ROI and TCO. I want solid information about what my dollar is getting me. I think it helps the server vendors to be able to express that in a concrete way because our understanding is these things on a per unit basis are going to be more expensive and you're going to have to justify them. So that's really what, it's the details that are going to come the day of the launch and in subsequent weeks. So I think we're going to be busy for the next year focusing on a lot of hardware that, yes, does matter. So, you know, hang on, it's going to be a fun ride. >> All right, Dave, we're going to leave it there. Thanks you so much, my friend. Appreciate you coming on. >> Thanks, Dave. >> Okay, and don't forget to check out the special website that we've set up for this ongoing series. Go to doeshardwarematter.com and you'll see commentary from industry leaders, we got analysts on there, technical experts from all over the world. Thanks for watching, and we'll see you next time. (upbeat music)

Published Date : Nov 10 2022



AMD & Oracle Partner to Power Exadata X9M


 

(upbeat jingle) >> The history of Exadata in the platform is really unique. And from my vantage point, it started earlier this century as a skunkworks inside of Oracle called Project Sage back when grid computing was the next big thing. Oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve. Last April, for example, Oracle announced the availability of Exadata X9M in OCI, Oracle Cloud Infrastructure. One thing that hasn't been as well publicized is that Exadata on OCI is using AMD's EPYC processors in the database service. EPYC is not Eastern Pacific Yacht Club for all you sailing buffs, rather it stands for Extreme Performance Yield Computing, the enterprise grade version of AMD's Zen architecture which has been a linchpin of AMD's success in terms of penetrating enterprise markets. And to focus on the innovations that AMD and Oracle are bringing to market, we have with us today, Juan Loaiza, who's executive vice president of mission critical technologies at Oracle, and Mark Papermaster, who's the CTO and EVP of technology and engineering at AMD. Juan, welcome back to the show. Mark, great to have you on The Cube in your first appearance, thanks for coming on. Juan, let's start with you. You've been on The Cube a number of times, as I said, and you've talked about how Exadata is a top platform for Oracle database. We've covered that extensively. What's different and unique from your point of view about Exadata Cloud Infrastructure X9M on OCI? >> So as you know, Exadata, it's designed top down to be the best possible platform for database. It has a lot of unique capabilities, like we make extensive use of RDMA, smart storage. We take advantage of everything we can in the leading hardware platforms. X9M is our next generation platform and it does exactly that. We're always wanting to get all the best that we can from the available hardware that our partners like AMD produce. And so that's what X9M is: it's faster, more capacity, lower latency, more IOs, pushing the limits of the hardware technology. So we don't want to be the limit, the database software should not be the limit, it should be the actual physical limits of the hardware. That's what X9M is all about. >> Why, Juan, AMD chips in X9M? >> We're introducing AMD chips. We think they provide outstanding performance, both for OLTP and for analytic workloads. And it's really that simple, we just think the performance is outstanding in the product. >> Mark, your career is quite amazing. I could riff on history for hours but let's focus on the Oracle relationship. Mark, what are the relevant capabilities and key specs of the AMD chips that are used in Exadata X9M on Oracle's cloud? >> Well, thanks. It's really the basis of the great partnership that we have with Oracle on Exadata X9M, and that is that the AMD technology uses our third generation of Zen processors. Zen was architected to really bring high performance back to X86, a very strong roadmap that we've executed on schedule to our commitments. And this third generation does all of that: it uses a seven nanometer CPU core that was designed to really bring throughput, bring really high efficiency to computing and just deliver raw capabilities. And so for Exadata X9M, it's really leveraging all of that. It's really a balanced processor and it's implemented in a way to really optimize high performance. That is the whole focus of AMD. It's where we reset the company's focus years ago.
And again, great to see the super smart database team at Oracle really partner with us, understand those capabilities, and it's been just great to partner with them to enable Oracle to really leverage the capabilities of the Zen processor. >> Yeah. It's been a pretty amazing 10 or 11 years for both companies. But Mark, how specifically are you working with Oracle at the engineering and product level and what does that mean for your joint customers in terms of what they can expect from the collaboration? >> Well, here's where the collaboration really comes to play. You think about a processor and I'll say, when Juan's team first looked at it, there's general benchmarks and the benchmarks are impressive but they're general benchmarks. And they showed the base processing capability, but the partnership comes to bear when it means optimizing for the workloads that Exadata X9M is really delivering to the end customers. And that's where we dive down and, as we learn from the Oracle team, we learn to understand where bottlenecks could be, where is there tuning that we could in fact really boost the performance above that baseline that you get in the generic benchmarks. And that's what the teams have done, so for instance, you look at optimizing latency to RDMA, you look at optimizing throughput on OLTP and database processing. When you go through the workloads and you take the traces and you break it down and you find the areas that are bottlenecking and then you can adjust, we have thousands of parameters that can be adjusted for a given workload. And that's the beauty of the partnership. So we have the expertise on the CPU engineering, Oracle Exadata team knows innately what the customers need to get the most out of their platform. And when the teams came together, we actually achieved anywhere from 20% to 50% gains on specific workloads, it is really exciting to see. >> Mark, last question for you is how do you see this relationship evolving in the future? Can you share a little roadmap for the audience? >> You bet. First off, given the deep partnership that we've had on Exadata X9M, it's really allowed us to inform our future design. So it's our current third generation EPYC, that is really what we call our EPYC server offerings, and it's a 7003, third gen, in Exadata X9M. So what about fourth gen? Well, fourth gen is well underway, ready for the future, but it incorporates learning that we've done in partnership with Oracle. It's going to have even more throughput capabilities, it's going to have expanded memory capabilities because there's CXL, Compute Express Link, that'll expand even more memory opportunities. And I could go on. So that's the beauty of a deep partnership as it enables us to really take that learning going forward. It pays forward and we're very excited to fold all of that into our future generations and provide even better capabilities to Juan and his team moving forward. >> Yeah, you guys have been obviously very forthcoming. You have to be with Zen and EPYC. Juan, anything you'd like to add as closing comments? >> Yeah. I would say that in the processor market there's been a real acceleration in innovation in the last few years, there was a big move 10, 15 years ago when multicore processors came out.
And then we were on that for a while and then things started stagnating, but in the last two or three years, AMD has been leading this, there's been a dramatic acceleration in innovation so it's very exciting to be part of this and customers are getting a big benefit from this. >> All right. Hey, thanks for coming back on The Cube today. Really appreciate your time. >> Thanks. Glad to be here. >> All right and thank you for watching this exclusive Cube conversation. This is Dave Vellante from The Cube and we'll see you next time. (upbeat jingle)
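Mark's description of taking workload traces, finding the bottlenecks, and adjusting parameters is, underneath, a measure-and-tune loop. The sketch below shows only the shape of that loop; the two knob names and the workload function are hypothetical stand-ins, not Oracle's or AMD's actual tooling, and nothing like the thousands of real tunables he mentions.

```python
import itertools
import time

# Hypothetical tuning knobs standing in for a much larger real parameter space.
PARAM_GRID = {
    "prefetch_depth": [2, 4, 8],
    "io_queue_depth": [32, 64, 128],
}

def run_workload(cfg: dict) -> float:
    """Placeholder: replay a captured OLTP/analytics trace and return throughput (ops/s)."""
    time.sleep(0.01)  # stand-in for actually running the trace
    return 1000.0 / (cfg["prefetch_depth"] + 64.0 / cfg["io_queue_depth"])

baseline = run_workload({"prefetch_depth": 2, "io_queue_depth": 32})
best_cfg, best_tput = None, 0.0
for values in itertools.product(*PARAM_GRID.values()):
    cfg = dict(zip(PARAM_GRID, values))
    tput = run_workload(cfg)
    if tput > best_tput:
        best_cfg, best_tput = cfg, tput

gain_pct = (best_tput / baseline - 1) * 100
print(f"baseline {baseline:.0f} ops/s -> tuned {best_tput:.0f} ops/s "
      f"(+{gain_pct:.0f}%) with {best_cfg}")
```

The 20% to 50% gains quoted in the conversation come from this kind of loop run against real traces and a far richer parameter space, but the workflow of profile, adjust, re-measure is the same.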

Published Date : Sep 22 2022

Digging into HeatWave ML Performance


 

(upbeat music) >> Hello everyone. This is Dave Vellante. We're diving into the deep end with AMD and Oracle on the topic of mySQL HeatWave performance. And we want to explore the important issues around machine learning. As applications become more data intensive and machine intelligence continues to evolve, workloads increasingly are seeing a major shift where data and AI are being infused into applications. And having a database that simplifies the convergence of transaction and analytics data, without the need to context switch and move data out of and into different data stores, and eliminating the need to perform extensive ETL operations, is becoming an industry trend that customers are demanding. At the same time, workloads are becoming more automated and intelligent. And to explore these issues further, we're happy to have back in theCUBE Nipun Agarwal, who's the Senior Vice President of mySQL HeatWave and Kumaran Siva, who's the Corporate Vice President Strategic Business Development at AMD. Gents, hello again. Welcome back. >> Hello. Hi Dave. >> Thank you, Dave. >> Okay. Nipun, obviously machine learning has become a must have for analytics offerings. It's integrated into mySQL HeatWave. Why did you take this approach and not the specialized database approach as many competitors do, right tool for the right job? >> Right? So, there are a lot of customers of mySQL who have the need to run machine learning on the data which is stored in the mySQL database. So in the past, customers would need to extract the data out of mySQL and they would take it to a specialized service for running machine learning. Now, the reason we decided to incorporate machine learning inside the database, there are multiple reasons. One, customers don't need to move the data. And if they don't need to move the data, it is more secure because it's protected by the same access control mechanisms as the rest of the data. There is no need for customers to manage multiple services. But in addition to that, when we run the machine learning inside the database, customers are able to leverage the same service, the same hardware, which has been provisioned for OLTP and analytics, and use machine learning capabilities at no additional charge. So from a customer's perspective, they get the benefits that it is a single database. They don't need to manage multiple services. And it is offered at no additional charge. And then there is another aspect, which is based on the IP, the work we have done: it is also significantly faster than what customers would get by having a separate service. >> Just to follow up on that. How are you seeing customers use HeatWave's machine learning capabilities today? How is that evolving? >> Right. So one of the things which, you know, customers very often want to do is to train their models based on the data. Now, one of the things is that data in a database or in a transaction database changes quite rapidly. So we have introduced support for auto machine learning as a part of HeatWave ML. And what it does is that it fully automates the process of training. And this is something which is very important to database users, very important to mySQL users, that they don't really want to hire data scientists or specialists for doing training. So that's the first part, that training in HeatWave ML is fully automated. Doesn't require the user to provide any like specific parameters, just the source data and the task which they want to train. The second aspect is the training is really fast.
So the training is really fast. The benefit is that customers can retrain quite often. They can make sure that the model is up to date with any changes which have been made to their transaction database. And as a result of the models being up to date, the accuracy of the prediction is high. Right? So that's the first aspect, which is training. The second aspect is inference, which customers run once they have the models trained. And the third thing, which has perhaps been the most sought after request from the mySQL customers, is the ability to provide explanations. So, HeatWave ML provides explanations for any model which has been generated or trained by HeatWave ML. So these are the three capabilities- training, inference and explanations. And this whole process is completely automated, doesn't require a specialist or a data scientist. >> Yeah, that's nice. I mean, training obviously very popular today. I've said inference I think is going to explode in the coming decade. And then of course, explainable AI is a very important issue. Kumaran, what are the relevant capabilities of the AMD chips that are used in OCI to support HeatWave ML? Are they different from say the specs for HeatWave in general? >> So, actually they aren't. And this is one of the key features of this architecture or this implementation that is really exciting. With HeatWave ML, you're using the same CPU. And by the way, it's not a GPU, it's a CPU, for all three of the functions that Nipun just talked about- inference, training and explanation, all done on CPU. You know, bigger picture, with the capabilities we bring here we're really providing a balance, you know, between the CPU cores, memory and the networking. And what that allows you to do here is be able to feed the CPU cores appropriately. And within the cores, we have these AVX instruction extensions with the Zen 2 and Zen 3 cores. We had AVX2, and then with the Zen 4 core coming out we're going to have AVX-512. But with that balance, being able to bring in the data, utilize the high memory bandwidth, and then use the computation to its maximum, we're able to provide enough AI processing that we are able to get the job done. And then we're able to fit into that larger pipeline that we build out here with HeatWave. >> Got it. Nipun, you know, you and I every time we have a conversation we've got to talk benchmarks. So you've done machine learning benchmarks with HeatWave. You might even be the first in the industry to publish, you know, transparent, open ML benchmarks on GitHub. I mean, I wouldn't know for sure but I've not seen that as common. Can you describe the benchmarks and the data sets that you used here? >> Sure. So what we did was we took a bunch of open data sets for two categories of tasks- classification and regression. So we took about a dozen data sets for classification and about six for regression. So to give an example, the kind of data sets we used for classification are like the airlines data set, hex sensors, bank, right? So these are open data sets. And what we did was, on these data sets, we did a comparison of what would it take to train using HeatWave ML. And then the other service we compared with is Redshift ML. So, there were two observations. One is that with HeatWave ML, the user does not need to provide any tuning parameters, right? HeatWave ML, using AutoML, fully generates a trained model, figures out what are the right algorithms,
what are the right features, what are the right hyperparameters, and sets them, right? So no need for any manual intervention; not so the case with Redshift ML. The second thing is the performance, right? So the performance of HeatWave ML, aggregated on these 12 data sets for classification and the six data sets on regression: on average, it is 25 times faster than Redshift ML. And note that Redshift ML in turn involves SageMaker, right? So on average, HeatWave ML provides 25 times better performance for training. And the other point to note is that there is no need for any human intervention. That's fully automated. But in the case of Redshift ML, many of these data sets did not even complete in the set duration. If you look at price performance, one of the things again I want to highlight is, because of the fact that AMD does pretty well in all kinds of workloads, users are able to use the same cluster for analytics, for OLTP, or for machine learning. So there is no additional cost for customers to run HeatWave ML if they have provisioned HeatWave. But assuming a user is provisioning a HeatWave cluster only to run HeatWave ML, right? Even in that case, the price performance advantage of HeatWave ML over Redshift ML is 97 times, right? So 25 times faster at 1% of the cost compared to Redshift ML. And all these scripts and all this information is available on GitHub for customers to try, to modify, and, like, see what are the advantages they would get on their workloads. >> Every time I hear these numbers, I shake my head. I mean, they're just so overwhelming. And so we'll see how the competition responds when, and if, they respond. So, but thank you for sharing those results. Kumaran, can you elaborate on how the specs that you talked about earlier contribute to HeatWave ML's, you know, benchmark results? I'm particularly interested in scalability, you know. Typically things degrade as you push the system harder. What are you seeing? >> No, I think, I think it's good. Look, yeah. Those numbers just blow my mind too. That's crazy good performance. So look, from an AMD perspective, we have really built an architecture. Like if you think about the chiplet architecture to begin with, it is fundamentally, you know, it's kind of scaling by design, right? And one of the things that we've done here is been able to work with the HeatWave team and HeatWave ML team, and then been able to, within the CPU package itself, scale up to make very efficient use of all of the cores. And then of course, work with them on how you go between nodes. So you can have these very large systems that can run ML very, very efficiently. So it's really, you know, building on the building blocks of the chiplet architecture and how scaling happens there. >> Yeah. So you're saying it's near linear scaling, essentially. >> So, let Nipun comment on that. >> Yeah. >> Is it... So, how about as cluster sizes grow, Nipun? >> Right. >> What happens there? >> So one of the design points for HeatWave is scale out architecture, right? So as you said, as we add more data, or increase the size of the data, or we add the number of nodes to the cluster, we want the performance to scale. So we show that we have a near linear scale factor, or near linear scalability, for SQL workloads, and in the case of HeatWave ML as well.
As users add more nodes to the cluster, as the size of the cluster grows, the performance of HeatWave ML improves. So I was giving you this example that HeatWave ML is 25 times faster compared to Redshift ML. Well, that was on a cluster size of two. If you increase the cluster size of HeatWave ML to a larger number, I think the number is 16, the performance advantage over Redshift ML increases from 25 times faster to 45 times faster. So what that means is that on a cluster size of 16 nodes, HeatWave ML is 45 times faster for training these, again, dozen data sets. So this shows that HeatWave ML scales better than the competition. >> So you're saying adding nodes offsets any management complexity that you would think of as getting in the way. Is that right? >> Right. So one is the management complexity, and which is why, with features like this, customers can scale up or scale down, you know, very easily. The second aspect is, okay, what gives us this advantage, right, of scalability? Or how are we able to scale? Now, the techniques which we use for HeatWave ML scalability are a bit different from what we use for SQL processing. So in the case of HeatWave ML, there are really, like, you know, two or three trade offs which we have to be careful about. One is the accuracy. Because we want to provide better performance for machine learning without compromising on the accuracy. So accuracy would require like more synchronization if you have multiple threads. But if you have too much synchronization, that can slow down the degree of parallelism that we get. Right? So we have to strike a fine balance. So what we do is that in HeatWave ML, there are different phases of training, like algorithm selection, feature selection, hyperparameter tuning. Each of these phases is analyzed. And for instance, one of the techniques we use is that if you're trying to figure out what's the optimal hyperparameter to be used, we start with the search space. And then each of the VMs gets a part of the search space. And then we synchronize only when needed, right? So these are some of the techniques which we have developed over the years. And there are actually papers filed, research publications, on this. And this is what we do to achieve good scalability. And what that means for the customer is that if they have some amount of training time and they want to make it better, they can just provision a larger cluster and they will get better performance. >> Got it. Thank you. Kumaran, when I think of machine learning, machine intelligence, AI, I think GPU, but you're not using GPUs. So how are you able to get this type of performance or price performance without using GPUs? >> Yeah, definitely. So yeah, that's a good point. And you think about what is going on here and you consider the whole pipeline that Nipun has just described, in terms of how you get, you know, your training, your algorithms, and using the mySQL pieces of it to get to the point where the AI can be effective. In that process, what happens is you have a lot of memory transactions. A lot of memory bandwidth comes into play. And then bringing all that data together, feeding the actual compute complex that does the AI calculations, that in itself could be the bottleneck, right? And you can have multiple bottlenecks along the way. And I think what you see in the AMD architecture for EPYC, for this use case, is the balance.
And the fact that you are able to do the pre-processing, the AI, and then the post-processing all kind of seamlessly together, that has a huge value. And that goes back to what Nipun was saying about using the same infrastructure: it gets you the better TCO, but it also gets you better performance. And that's because of the fact that you're bringing the data to the computation. So the computation in this case is not strictly the bottleneck. It's really about how you pull together what you need and do the AI computation. And that is, that's probably a more, you know, it's a common case. And so, you know, you're going to start, I think, to at least start to see this, especially for inference applications. But in this case we're doing both inference, explanation and training, all using the CPU in the same OCI infrastructure. >> Interesting. Now Nipun, is the secret sauce for HeatWave ML performance different than what we've discussed before, you and I, with HeatWave generally? Is there some, you know, additive engine, additive that you're putting in? >> Right? Yes. The secret sauce is indeed different, right? Just the way I was saying that for SQL processing, the reason we get very good performance and price performance is because we have come up with new algorithms which help the SQL processing scale out. Similarly for HeatWave ML, we have come up with new IP, new, like, algorithms. One example is that we use meta-learn proxy models, right? That's the technique we use for automating the training process, right? So think of these meta-learn proxy models to be like, you know, using machine learning for machine learning training. And this is an IP which we developed. And again, we have published the results and the techniques. But having such kind of techniques is what gives us a better performance. Similarly, another thing which we use is adaptive sampling: you can have a large data set, but we intelligently sample to figure out how we can train on a small subset without compromising on the accuracy. So, yes, there are many techniques that we have developed specifically for machine learning, which is what gives us the better performance, better price performance, and also better scalability. >> What about mySQL Autopilot? Is there anything that differs from HeatWave ML that is relevant? >> Okay. Interesting you should ask. So mySQL Autopilot, think of it to be an application using machine learning. So mySQL Autopilot uses machine learning to automate various aspects of the database service. So for instance, if you want to figure out what's the right partitioning scheme to partition the data in memory, we use machine learning techniques to figure out what's the right, the best column, based on the user's workload, to partition the data in memory. Or given a workload, if you want to figure out what is the right cluster size to provision, that's something we use mySQL Autopilot for. And I want to highlight that we aren't aware of any other database service which provides this level of machine learning based automation which customers get with mySQL Autopilot. >> Hmm. Interesting. Okay. Last question for both of you. What are you guys working on next? What can customers expect from this collaboration specifically in this space? Maybe Nipun, you can start and then Kumaran can bring us home. >> Sure. So there are two things we are working on.
One is based on the feedback we have gotten from customers, we are going to keep making the machine learning capabilities richer in HeatWave ML. That's one dimension. And the second thing is, which Kumaran was alluding to earlier, we are looking at the next generation of, like, processors coming from AMD. And we will be seeing as to how we can benefit more from these processors, whether it's the size of the L3 cache, the memory bandwidth, the network bandwidth, and such, or the newer features. And make sure that we leverage all the greatness which the new generation of processors will offer. >> It's like an engineering playground. Kumaran, let's give you the final word. >> No, that's great. Now look, with the Zen 4 CPU cores, we're also bringing in AVX-512 instruction capability. Now our implementation is a little different. It was in Rome and Milan, too, where we use a double pump implementation. What that means is, you know, we take two cycles to do these instructions. But the key thing there is we don't lower our speed of the CPU. So there's no noisy neighbor effects. And it's something that OCI and HeatWave have taken full advantage of. And so, like, as we go out in time and we see the Zen 4 core, we can... we see up to 96 CPU cores, that's going to work really well. So we're collaborating closely with OCI and with the HeatWave team here to make sure that we can take advantage of that. And we're also going to upgrade the memory subsystem to get to 12 channels of DDR5. So there should be a fairly significant boost in absolute performance, but more important, or just as importantly, in TCO value for the customers, the end customers who are going to adopt this great service. >> I love their relentless innovation, guys. Thanks so much for your time. We're going to have to leave it there. Appreciate it. >> Thank you, David. >> Thank you, David. >> Okay. Thank you for watching this special presentation on theCUBE. Your leader in enterprise and emerging tech coverage.
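To make the fully automated train, predict, explain flow described above a bit more concrete, here is a minimal Python sketch driving it over a normal MySQL connection. The schema, table, and column names are invented, and the sys.ML_TRAIN, sys.ML_MODEL_LOAD, and sys.ML_PREDICT_TABLE calls are written from memory of the HeatWave ML documentation, so treat the exact signatures and option keys as approximate rather than authoritative.

```python
import mysql.connector  # pip install mysql-connector-python

# Placeholder connection details for a MySQL HeatWave DB system.
conn = mysql.connector.connect(host="heatwave-host", user="admin",
                               password="...", database="ml_data")
cur = conn.cursor()

# Train: point HeatWave ML at a source table and a target column; algorithm,
# feature, and hyperparameter selection happen automatically inside the service.
cur.execute("""
    CALL sys.ML_TRAIN('ml_data.orders_train', 'churned',
                      JSON_OBJECT('task', 'classification'), @model)
""")

# Inference: load the model and score a whole table; the data never leaves MySQL.
cur.execute("CALL sys.ML_MODEL_LOAD(@model, NULL)")
cur.execute("""
    CALL sys.ML_PREDICT_TABLE('ml_data.orders_new', @model,
                              'ml_data.orders_scored')
""")

conn.commit()
cur.close()
conn.close()
```

Explanations follow the same pattern through an ML_EXPLAIN-style routine against a row or table, which is what lets the whole cycle run without a data scientist in the loop.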

Published Date : Sep 14 2022

AMD Oracle Partnership Elevates MySQL HeatWave


 

(upbeat music) >> For those of you who've been following the cloud database space, you know that MySQL HeatWave has been on a technology tear over the last 24 months, with Oracle claiming record breaking benchmarks relative to other database platforms. So far, those benchmarks remain industry leading as competitors have chosen not to respond, perhaps because they don't feel the need to, or maybe they don't feel that doing so would serve their interest. Regardless, the HeatWave team at Oracle has been very aggressive about its performance claims, making lots of noise, challenging the competition to respond, publishing their scripts to GitHub. But so far, there are no takers, but customers seem to be picking up on these moves by Oracle and it's likely the performance numbers resonate with them. Now, the other area we want to explore, which we haven't thus far, is the engine behind HeatWave and that is AMD. AMD's EPYC processors have been the powerhouse on OCI, running MySQL HeatWave since day one. And today we're going to explore how these two technology companies are working together to deliver these performance gains and some compelling TCO metrics. In fact, a recent Wikibon analysis from senior analyst Marc Staimer made some TCO comparisons in OLAP workloads relative to AWS, Snowflake, GCP, and Azure databases; you can find that research on wikibon.com. And with that, let me introduce today's guests, Nipun Agarwal, senior vice president of MySQL HeatWave, and Kumaran Siva, who's the corporate vice president for strategic business development at AMD. Welcome to theCUBE gentlemen. >> Welcome. Thank you. >> Thank you, Dave. >> Hey Nipun, you and I have talked a lot about this. You've been on theCUBE a number of times talking about MySQL HeatWave. But for viewers who may not have seen those episodes, maybe you could give us an overview of HeatWave and how it's different from competitive cloud database offerings. >> Sure. So MySQL HeatWave is a fully managed MySQL database service offering from Oracle. It's a single database, which can be used to run transactional processing, analytics and machine learning workloads. So, in the past, MySQL has been designed and optimized for transaction processing. So customers of MySQL, when they had to run analytics or machine learning, would need to extract the data out of MySQL, into some other database or service, to run analytics or machine learning. MySQL HeatWave offers a single database for running all kinds of workloads, so customers don't need to extract data into some other database. In addition to having a single database, MySQL HeatWave is also very performant compared to OLAP databases and also it is very price competitive. So the advantages are: single database, very performant, and very good price performance. >> Yes. And you've published some pretty impressive price performance numbers against competitors. Maybe you could describe those benchmarks and highlight some of the results, please. >> Sure. So one thing to note is that the performance of any database is going to, like, vary; the performance advantage is going to vary based on the size of the data and the specific workloads, so the mileage varies, that's the first thing to know. So what we have done is, we have published multiple benchmarks. So we have benchmarks on TPC-H and TPC-DS, and we have benchmarks on different data sizes, because based on the customer's workload, the mileage is going to vary, so we want to give customers a broad range of comparisons so that they can decide for themselves.
So in a specific case, where we are running on a 30 terabyte TPC-H workload, HeatWave is about 18 times better price performance compared to Redshift. 18 times better compared to Redshift, about 33 times better price performance compared to Snowflake, and 42 times better price performance compared to Google BigQuery. So, this is on 30 terabyte TPC-H. Now, if the data size is different, or the workload is different, the characteristics may vary slightly, but this is just to give a flavor of the kind of performance advantage MySQL HeatWave offers. >> And then my last question before we bring in Kumaran. We've talked about the secret sauce being the tight integration between hardware and software, but would you add anything to that? What is that secret sauce in HeatWave that enables you to achieve these performance results and what does it mean for customers? >> So there are three parts to this. One is HeatWave has been designed with a scale out architecture in mind. So we have invented and implemented new algorithms for scale out query processing for analytics. The second aspect is that HeatWave has been really optimized for cloud, commodity cloud, and that's where AMD comes in. So for instance, many of the partitioning schemes we have for processing HeatWave, we optimize them for the L3 cache of the AMD processor. The thing which is very important to our customers is not just the sheer performance but the price performance, and that's where we have had a very good partnership with AMD, because not only does AMD help us provide very good performance, but the price performance, right? And all these numbers which I was showing, a big part of it is because we are running on AMD, which provides very good price performance. So that's the second aspect. And the third aspect is MySQL Autopilot, which provides machine learning based automation. So it's really these three things, a combination of new algorithms designed for scale out query processing, optimized for commodity cloud hardware, specifically AMD processors, and third, MySQL Autopilot, which gives us this performance advantage. >> Great, thank you. So that's a good segue for AMD and Kumaran. So Kumaran, what is AMD bringing to the table? What are, like, for instance, the relevant specs of the chips that are used in Oracle cloud infrastructure and what makes them unique? >> Yeah, thanks Dave. That's a good question. So, OCI is a great customer of ours. They use what we call the top of stack devices, meaning that they have the highest core count and they also are very, very fast cores. So these are currently Zen 3 cores. I think the HeatWave product is right now deployed on Zen 2 but will shortly be also on the Zen 3 core as well. But we provide, in the case of OCI, 64 cores. So that's the largest devices that we build. What actually happens is, because of these large number of CPUs in a single package, and therefore increasing the density of the node, you end up with this fantastic TCO equation, and the cost per performance for deployed services like HeatWave actually ends up being extraordinarily competitive, and that's a big part of the contribution that we're bringing in here. >> So Zen 3 is the AMD micro architecture which you introduced, I think in 2017, and it's the basis for EPYC, which is sort of the enterprise grade that you really attacked the enterprise with. Maybe you could elaborate a little bit, double click on how your chips contribute specifically to HeatWave's price performance results. >> Yeah, absolutely.
So in the case of HeatWave, as Nipun alluded to, we have very large L3 caches, right? So in our very, very top end parts, just like the Milan X devices, we can go all the way up to like 768 megabytes of L3 cache. And that gives you just enormous performance and performance gains. And that's part of what we're seeing with HeatWave today, and note that they're currently on the second generation Rome based product, 'cause it's a 7002 based product line running with the 64 cores. But as time goes on, they'll be adopting the next generation Milan as well. And the other part of it too is, as our chiplet architecture has evolved, you know, from the first generation Naples way back in 2017, we went from having multiple memory domains and a sort of NUMA architecture at the time; today we've really optimized that architecture. We use a common I/O die that has all of the memory channels attached to it. And what that means is that these scale out applications like HeatWave are able to really scale very efficiently as they go from a small domain of CPUs to, for example, the entire chip, all 64 cores. That scaling has been a key focus for AMD, and being able to design and build architectures that can take advantage of that, and then have applications like HeatWave that scale so well on it, has been a key aim of ours. >> And Gen 3 moving up the Italian countryside. Nipun, you've taken the somewhat unusual step of posting the benchmark parameters, making them public on GitHub. Now, HeatWave is relatively new. So people felt that when Oracle gained ownership of MySQL it would let it wilt on the vine in favor of Oracle database, so you lost some ground, and now you're getting very aggressive with HeatWave. What's the reason for publishing those benchmark parameters on GitHub? >> So, the main reason for us to publish price performance numbers for HeatWave is to communicate to our customers a sense of what are the benefits they're going to get when they use HeatWave. But we want to be very transparent, because as I said, the performance advantages for the customers may vary based on the data size, based on the specific workloads. So one of the reasons for us to publish all these scripts on GitHub is for transparency. So we want customers to take a look at the scripts, know what we have done, and be confident that we stand by the numbers which we are publishing, and they're very welcome to try these numbers themselves. In fact, we have had customers who have downloaded the scripts from GitHub and run them on our service to kind of validate. The second aspect is, in some cases, there may be some deviations from what we are publishing versus what the customer would like to run in their production deployments, so it provides an easy way for customers to take the scripts, modify them in some ways which may suit their real world scenario, and run to see what the performance advantages are. So that's the main reason: first is transparency, so the customers can see what we are doing and do the comparison, and B, if they want to modify it to suit their needs and then see what is the performance of HeatWave, they're very welcome to do so. >> So have customers done that? Have they taken the benchmarks? And I mean, if I were a competitor, honestly, I wouldn't get into that food fight because of the impressive performance, but unless I had to, I mean, have customers picked up on that, Nipun? >> Absolutely.
In fact, we have had many customers who have benchmarked the performance of MySQL HeatWave with other services. And the fact that the scripts are available gives them a very good starting point, and then they've also tweaked those queries in some cases to see what the delta would be. And in some cases, customers got back to us saying, hey, the performance advantage of HeatWave is actually slightly higher than what was published, and what is the reason? And the reason was, when the customers were trying, they were trying on the latest version of the service, and our benchmark results were posted, let's say, two months back. So the service had improved in those two to three months and customers actually saw better performance. So yes, absolutely. We have seen customers download the scripts, try them and also modify them to some extent, and then do the comparison of HeatWave with other services. >> Interesting. Maybe a question for both of you: how is the competition responding to this? They haven't said, "Hey, we're going to come up with our own benchmarks." Which is very common, you oftentimes see that. Although, for instance, Snowflake hasn't responded to Databricks, so that's not their game, but if the customers are actually putting a lot of faith in the benchmarks and actually using that for buying decisions, then it's inevitable. But how have you seen the competition respond to the MySQL HeatWave and AMD combo? >> So maybe I can take the first crack from the database service standpoint. When customers have more choice, it is invariably advantageous for the customer, because then the competition is going to react, right? So the way we have seen the reaction is that we do believe that the other database services are going to take a closer eye to the price performance, right? Because if you're offering such good price performance, the vendors are already looking at it. And, you know, there are instances where they have offered, let's say, discounts to the customers, to kind of at least like close the gap to some extent. And the second thing would be in terms of the capability. So like one of the things which I should have mentioned even early on, is that not only does MySQL HeatWave on AMD provide very good price performance, say on like a small cluster, but it's all the way up to a cluster size of 64 nodes, which has about 1000 cores. So the point is that HeatWave performs very well, both on a small system, as well as a huge scale out. And this is again one of those things which is a differentiation compared to other services, so we expect that even other database services will have to improve their offerings to provide the same good scale factor, which customers are now starting to expect with MySQL HeatWave. >> Kumaran, anything you'd add to that? I mean, you guys are an arms dealer, you love all your OEMs, but at the same time, you've got chip competitors, Silicon competitors. How do you see the competitive-- >> I'd say the broader answer and the big picture for AMD: we're very maniacally focused on our customers, right? And OCI and Oracle are huge and important customers for us, and this particular use case is extremely interesting, both in that it takes advantage very well of our architecture and it pulls out some of the value that AMD brings. I think from a big picture standpoint, our aim is to execute, to build, to bring out generations of CPUs, kind of, you know, do what we say and say, sorry, say what we do and do what we say.
And from that point of view, we're hitting the schedules that we say, and being able to bring out the latest technology and bring it in a TCO value proposition that generationally keeps OCI and HeatWave ahead. That's the crux of our partnership here. >> Yeah, the execution's been obvious for the last several years. Kumaran, staying with you, how would you characterize the collaboration between the AMD engineers and the HeatWave engineering team? How do you guys work together? >> No, I'd say we're in a very, very deep collaboration. So, there's a few aspects where we've actually been working together very closely on the code, and being able to optimize for both the large L3 cache that AMD has, and so to be able to take advantage of that. And then also, to be able to take advantage of the scaling. So going between, you know, our architecture is chiplet based, so we have the CPU cores on, we call 'em, CCDs, and the inter CCD communication, there's opportunities to optimize at an application level, and that's something we've been engaged with. In the broader engagement, we are going back now for multiple generations with OCI, and there's a lot of input that now kind of resonates in the product line itself. And so we value this very close collaboration with HeatWave and OCI. >> Yeah, and the cadence, Nip, and you and I have talked about this quite a bit. The cadence has been quite rapid. It's like this constant cycle; every couple of months I turn around, there's something new on HeatWave. But a question again, for both of you: what new things do you think that organizations, customers, are going to be able to do with MySQL HeatWave if you could look out next 12 to 18 months? Is there anything you can share at this time about future collaborations? >> Right, look, 12 to 18 months is a long time. There's going to be a lot of innovation, a lot of new capabilities coming out in MySQL HeatWave. But even based on what we are currently offering, the trend we are seeing is that customers are bringing more classes of workloads. So we started off with OLTP for MySQL, then it went to analytics. Then we increased it to mixed workloads, and now we offer machine learning as well. So one is, we are seeing more and more classes of workloads come to MySQL HeatWave. And the second is scale: the kind of data volumes people are using HeatWave for, to process these mixed workloads, analytics, machine learning, OLTP, that's increasing. Now, along the way we are making it simpler to use, we are making it more cost effective to use. So for instance, last time when we talked, we had introduced this real time elasticity, and that's something which is a very, very popular feature, because customers want the ability to be able to scale out, or scale down, very efficiently. That's something we provided. We provided support for compression. So all of these capabilities are making it more efficient for customers to run a larger part of their workloads on MySQL HeatWave, and we will continue to make it richer in the next 12 to 18 months. >> Thank you. Kumaran, anything you'd add to that? We'll give you the last word as we've got to wrap it. >> No, absolutely. So, you know, in the next 12 to 18 months we will have our Zen 4 CPUs out. So this could potentially go into the next generation of the OCI infrastructure. This would be with the Genoa and then Bergamo CPUs, taking us to 96 and 128 cores, with 12 channels of DDR5.
This capability, you know, when applied to an application like HeatWave, you can see that it'll open up another order of magnitude, potentially, of use cases, right? And we're excited to see what customers can do with that. It certainly will make this service, and the cloud in general, this cloud migration, I think, even more attractive. So we're pretty excited to see how things evolve in this period of time. >> Yeah, the innovations are coming together. Guys, thanks so much, we've got to leave it there. Really appreciate your time. >> Thank you. >> All right, and thank you for watching this special Cube conversation. This is Dave Vellante, and we'll see you next time. (soft calm music)
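Kumaran's closing point about moving to 12 channels of DDR5 is easy to put in rough numbers. Assuming DDR4-3200 on an 8 channel socket today and DDR5-4800 on the 12 channel generation he describes (both nominal peak figures, not sustained measurements), the per-socket ceiling more than doubles:

```python
def peak_gb_per_s(channels: int, megatransfers: int, bus_bytes: int = 8) -> float:
    """Nominal peak memory bandwidth per socket in GB/s."""
    return channels * megatransfers * bus_bytes / 1000

current  = peak_gb_per_s(channels=8,  megatransfers=3200)   # ~205 GB/s
next_gen = peak_gb_per_s(channels=12, megatransfers=4800)   # ~461 GB/s

print(f"8  x DDR4-3200: ~{current:.0f} GB/s per socket")
print(f"12 x DDR5-4800: ~{next_gen:.0f} GB/s per socket ({next_gen / current:.2f}x)")
```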
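And since the conversation leans heavily on price performance multiples (18x, 33x, 42x on the 30 terabyte TPC-H comparison), it is worth spelling out what that ratio folds together. The runtimes and hourly prices below are made-up placeholders used only to show the arithmetic; the measured numbers behind Oracle's claims live in the published benchmark scripts, not here.

```python
# Price performance = work completed per dollar spent (higher is better).
runs = {
    # system: (elapsed_hours_for_the_query_set, cluster_cost_per_hour_usd)
    "HeatWave":  (1.0, 40.0),   # hypothetical
    "Service B": (4.0, 90.0),   # hypothetical
}

def price_performance(elapsed_h: float, cost_per_h: float) -> float:
    """Benchmark runs completed per dollar."""
    return 1.0 / (elapsed_h * cost_per_h)

base = price_performance(*runs["HeatWave"])
for name, (hours, cost) in runs.items():
    pp = price_performance(hours, cost)
    print(f"{name:10s}: {pp:.5f} runs per dollar ({base / pp:.1f}x relative to HeatWave)")
```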

Published Date : Sep 14 2022

Oracle & AMD Partner to Power Exadata X9M


 

[Music] the history of exadata in the platform is really unique and from my vantage point it started earlier this century as a skunk works inside of oracle called project sage back when grid computing was the next big thing oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve and i remember the oracle hp database machine which was announced at oracle open world almost 15 years ago and then exadata kept evolving after the sun acquisition it became a platform that had tightly integrated hardware and software and today exadata it keeps evolving almost like a chameleon to address more workloads and reach new performance levels last april for example oracle announced the availability of exadata x9m in oci oracle cloud infrastructure and introduced the ability to run the autonomous database service or the exa data database service you know oracle often talks about they call it stock exchange performance level kind of no description needed and sort of related capabilities the company as we know is fond of putting out benchmarks and comparisons with previous generations of product and sometimes competitive products that underscore the progress that's being made with exadata such as 87 percent more iops with metrics for latency measured in microseconds mics instead of milliseconds and many other numbers that are industry-leading and compelling especially for mission-critical workloads one thing that hasn't been as well publicized is that exadata on oci is using amd's epyc processors in the database service epyc is not eastern pacific yacht club for all your sailing buffs rather it stands for extreme performance yield computing the enterprise grade version of amd's zen architecture which has been a linchpin of amd's success in terms of penetrating enterprise markets and to focus on the innovations that amd and oracle are bringing to market we have with us today juan loyza who's executive vice president of mission critical technologies at oracle and mark papermaster who's the cto and evp of technology and engineering at amd juan welcome back to the show mark great to have you on thecube and your first appearance thanks for coming on yep happy to be here thank you all right juan let's start with you you've been on thecube a number of times as i said and you've talked about how exadata is a top platform for oracle database we've covered that extensively what's different and unique from your point of view about exadata cloud infrastructure x9m on oci yeah so as you know exadata it's designed top down to be the best possible platform for database uh it has a lot of unique capabilities like we make extensive use of rdma smart storage we take advantage of you know everything we can in the leading uh hardware platforms and x9m is our next generation platform and it does exactly that we're always wanting to be to get all the best that we can from the available hardware that our partners like amd produce and so that's what x9 in it is it's faster more capacity lower latency more ios pushing the limits of the hardware technology so we don't want to be the limit the software the database software should not be the limit it should be uh the actual physical limits of the hardware and that that's what x9m is all about why won amd chips in x9m uh yeah so we're we're uh introducing uh amd chips we think they provide outstanding performance uh both for oltp and for analytic workloads and it's really that simple we just think that performance is outstanding in the 
product yeah mark your career is quite amazing i've been around long enough to remember the transition to cmos from emitter coupled logic in the mainframe era back when you were at ibm that was an epic technology call at the time i was of course steeped as an analyst at idc in the pc era and like like many witnessed the tectonic shift that apple's ipod and iphone caused and the timing of you joining amd is quite important in my view because it coincided with the year that pc volumes peaked and marked the beginning of what i call a stagflation period for x86 i could riff on history for hours but let's focus on the oracle relationship mark what are the relevant capabilities and key specs of the amd chips that are used in exadata x9m on oracle's cloud well thanks and and uh it's really uh the basis of i think the great partnership that we have with oracle on exadata x9m and that is that the amd technology uses our third generation of zen processors zen was you know architected to really bring high performance you know back to x86 a very very strong road map that we've executed you know on schedule to our commitments and this third generation does all of that it uses a seven nanometer cpu that is a you know core that was designed to really bring uh throughput uh bring you know really high uh efficiency uh to computing uh and just deliver raw capabilities and so uh for uh exadata x9m uh it's really leveraging all of that it's it's a uh implemented in up to 64 cores per socket it's got uh you know really anywhere from 128 to 168 pcie gen 4 io connectivity so you can you can really attach uh you know all of the uh the necessary uh infrastructure and and uh storage uh that's needed uh for exadata performance and also memory you have to feed the beast for those analytics and for the oltp that juan was talking about and so it does have eight lanes of memory for high performance ddr4 so it's really as a balanced processor and it's implemented in a way to really optimize uh high performance that that is our whole focus of uh amd it's where we've you know reset the company focus on years ago and uh again uh you know great to see uh you know the the super smart uh you know database team at oracle really a partner with us understand those capabilities and it's been just great to partner with them to uh you know to you know enable oracle to really leverage the capabilities of the zen processor yeah it's been a pretty amazing 10 or 11 years for both companies but mark how specifically are you working with oracle at the engineering and product level you know and what does that mean for your joint customers in terms of what they can expect from the collaboration well here's where the collaboration really comes to play you think about a processor and you know i'll say you know when one's team first looked at it there's general benchmarks and the benchmarks are impressive but they're general benchmarks and you know and they showed you know the i'll say the you know the base processing capability but the partnership comes to bear uh when it when it means optimizing for the workloads that exadata x9m is really delivering to the end customers and that's where we dive down and and as we uh learn from the oracle team we learned to understand where bottlenecks could be uh where is there tuning that we could in fact in fact really boost the performance above i'll say that baseline that you get in the generic benchmarks and that's what the teams have done so for instance you look at you know optimizing latency to rdma 
>> Yeah, it's been a pretty amazing 10 or 11 years for both companies. But Mark, how specifically are you working with Oracle at the engineering and product level, and what does that mean for your joint customers in terms of what they can expect from the collaboration? >> Well, here's where the collaboration really comes to play. You think about a processor, and I'll say when Juan's team first looked at it, there are general benchmarks, and the benchmarks are impressive, but they're general benchmarks. They showed, I'll say, the base processing capability. But the partnership comes to bear when it means optimizing for the workloads that Exadata X9M is really delivering to the end customers. That's where we dive down, and as we learn from the Oracle team, we learn to understand where bottlenecks could be, and where there is tuning that we could do to really boost the performance above, I'll say, that baseline that you get in the generic benchmarks. And that's what the teams have done. So for instance, you look at optimizing latency to RDMA, you look at just throughput, optimizing throughput on OLTP and database processing. When you go through the workloads, and you take the traces and break them down and find the areas that are bottlenecking, then you can adjust. We have thousands of parameters that can be adjusted for a given workload, and that's again the beauty of the partnership. We have the expertise on the CPU engineering; the Oracle Exadata team knows innately what the customers need to get the most out of their platform. And when the teams came together, we actually achieved anywhere from 20% to 50% gains on specific workloads. It's really exciting to see. >> Okay, so I want to follow up on that. Is that different from the competition? How are you driving customer value? You mentioned some percentage improvements; are you measuring primarily with latency? How do you look at that? >> Well, we are differentiated in a number of factors. We bring a higher core density, the highest core density certainly in x86, and moreover, where we've led the industry is how to scale those cores. We have a very high performance fabric that connects those together, so as a customer needs more cores, we scale anywhere from 8 to 64 cores. But the trick is, as you add more cores, you want the scaling to be as close to linear as possible, and so that's a differentiation we have, and we enable that again with that balanced compute of CPU, I/O, and memory that we design. But the key is, we pride ourselves at AMD on being able to partner in a very deep fashion with our customers. We listen very well, and I think that's what we've had the opportunity to do with Juan and his team. We appreciate that, and that is how we got the kind of performance benefits that I described earlier. It's working together almost like one team, and bringing that best possible capability to the end customers. >> Great, thank you for that. Juan, I want to come back to you. Can both the Exadata Database Service and the Autonomous Database Service take advantage of the Exadata Cloud X9M capabilities that are in that platform? >> Yeah, absolutely. Autonomous is basically our self-driving version of the Oracle Database, but fundamentally it is the same database, of course. So both of them will take advantage of the tremendous performance that we're getting. Now, when Mark talks about 64 cores, that's per chip. We have two chips, it's a two-socket server, so it's a 128-way processor. And then from our point of view there are two threads, so from the database point of view it's a 256-way processor. So there's a lot of raw performance there, and we've done a lot of work with the AMD team to make sure that we deliver that to our customers for all the different kinds of workloads, including OLTP and analytics, but also including our Autonomous Database. So yes, absolutely, it all takes advantage of it.
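A quick sketch of the processor counting Juan just walked through, using only the figures from the conversation:

```python
# Counting the ways a two-socket Exadata X9M database server presents itself.
cores_per_socket = 64       # per-chip core count quoted above
sockets_per_server = 2      # two-socket server
threads_per_core = 2        # simultaneous multithreading

physical_cores = cores_per_socket * sockets_per_server    # the 128-way figure
db_visible_cpus = physical_cores * threads_per_core       # the 256-way figure seen by the database
print(physical_cores, db_visible_cpus)                    # 128 256
```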
>> Now, Juan, I can't let you go without asking about the competition. I've written extensively about the big four hyperscale clouds, specifically AWS, Azure, Google, and Alibaba, and I know, don't hate me, sometimes it angers some of my friends at Oracle, IBM too, that I don't include you in that list. But I see Oracle as different, and really the cloud for the most demanding applications and top performance databases, not the commodity cloud, which of course angers all my friends at those four companies. So I'm ticking everybody off. So how does Exadata Cloud Infrastructure X9M compare to the likes of AWS, Azure, Google, and other database cloud services in terms of OLTP and analytics, value, performance, cost, however you want to frame it? >> Yeah, so our architecture is fundamentally different. We've architected our database for the scale-out environment. So for example, we've moved intelligence into the storage, we've put remote direct memory access and persistent memory into our product. So we've done a lot of architectural changes that they haven't, and you're starting to see a little bit of that. If you look at some of the things that Amazon and Google are doing, they're starting to realize that, hey, if you're going to achieve good results, you really need to push some database processing into the storage. So they're taking baby steps toward that, roughly 15 years after we've had a product. And again, at some point they're going to realize you really need RDMA, you really need more direct access to those capabilities. So they're slowly getting there, but we're well ahead. And the way this is delivered is better availability, better performance, lower latency, higher IOPS. This is why our customers love our product. If you look at the global Fortune 100, over 90 percent of them are running Exadata today, and even in our cloud, over 60 of the global 100 are running Exadata in the Oracle cloud, because of all the differentiated benefits that they get from the product. So yeah, we're well ahead in the database space. >> Mark, last question for you: how do you see this relationship evolving in the future? Can you share a little roadmap for the audience? >> You bet. Well, first off, given the deep partnership that we've had on Exadata X9M, it's really allowed us to inform our future design. Our current third generation EPYC, which is really what we call our EPYC server offerings, is the 7003 Series third gen that's in Exadata X9M. So what about fourth gen? Well, fourth gen is well underway and ready for the future, and it incorporates learning that we've done in partnership with Oracle. It's going to have even more throughput capabilities, it's going to have expanded memory capabilities, because there's CXL, Compute Express Link, that'll expand even more memory opportunities, and I could go on. So that's the beauty of a deep partnership: it enables us to really take that learning going forward. It pays forward, and we're very excited to fold all of that into our future generations and provide even better capabilities to Juan and his team moving forward. >> Yeah, you guys have been obviously very forthcoming, you have to be, with Zen and EPYC. Juan, anything you'd like to add as closing comments? >> Yeah, I would say that in the processor market there's been a real acceleration in innovation in the last few years. There was a big move 10, 15 years ago when multi-core processors came out, and then we were on that for a while, and then things started stagnating. But in the last two or three years, and AMD has been leading this, there's been a dramatic acceleration in innovation in this space. So it's very exciting to be part of this, and customers are getting a big benefit from it. >> All right, gents, hey, thanks for coming back in theCUBE today, really appreciate your time. >> Thanks, glad to be here.
>> All right, thank you for watching this exclusive CUBE Conversation. This is Dave Vellante from theCUBE, and we'll see you next time. (upbeat music)

Published Date : Sep 13 2022

**Summary and sentiment analysis are not shown because of an improper transcript.**

ENTITIES

| Entity | Category | Confidence |
| --- | --- | --- |
| 20 percent | QUANTITY | 0.99+ |
| juan loyza | PERSON | 0.99+ |
| amd | ORGANIZATION | 0.99+ |
| amazon | ORGANIZATION | 0.99+ |
| 8 | QUANTITY | 0.99+ |
| 256-way | QUANTITY | 0.99+ |
| 10 | QUANTITY | 0.99+ |
| Oracle | ORGANIZATION | 0.99+ |
| alibaba | ORGANIZATION | 0.99+ |
| 87 percent | QUANTITY | 0.99+ |
| 128 | QUANTITY | 0.99+ |
| oracle | ORGANIZATION | 0.99+ |
| two threads | QUANTITY | 0.99+ |
| google | ORGANIZATION | 0.99+ |
| 11 years | QUANTITY | 0.99+ |
| today | DATE | 0.99+ |
| 50 | QUANTITY | 0.99+ |
| 200 | QUANTITY | 0.99+ |
| ipod | COMMERCIAL_ITEM | 0.99+ |
| both | QUANTITY | 0.99+ |
| two chips | QUANTITY | 0.99+ |
| both companies | QUANTITY | 0.99+ |
| 10 | DATE | 0.98+ |
| iphone | COMMERCIAL_ITEM | 0.98+ |
| earlier this century | DATE | 0.98+ |
| last april | DATE | 0.98+ |
| third generation | QUANTITY | 0.98+ |
| juan | PERSON | 0.98+ |
| 64 cores | QUANTITY | 0.98+ |
| 128-way | QUANTITY | 0.98+ |
| two socket | QUANTITY | 0.98+ |
| eight lanes | QUANTITY | 0.98+ |
| aws | ORGANIZATION | 0.97+ |
| AMD | ORGANIZATION | 0.97+ |
| ios | TITLE | 0.97+ |
| fourth gen | QUANTITY | 0.96+ |
| 168 pcie | QUANTITY | 0.96+ |
| dave vellante | PERSON | 0.95+ |
| third gen | QUANTITY | 0.94+ |
| aws azure | ORGANIZATION | 0.94+ |
| apple | ORGANIZATION | 0.94+ |
| thousands of parameters | QUANTITY | 0.92+ |
| years | DATE | 0.91+ |
| 15 years | QUANTITY | 0.9+ |
| Power Exadata | ORGANIZATION | 0.9+ |
| over 90 percent | QUANTITY | 0.89+ |
| four companies | QUANTITY | 0.89+ |
| first | QUANTITY | 0.88+ |
| oci | ORGANIZATION | 0.87+ |
| first appearance | QUANTITY | 0.85+ |
| one team | QUANTITY | 0.84+ |
| almost 15 years ago | DATE | 0.83+ |
| seven nanometer | QUANTITY | 0.83+ |
| last few years | DATE | 0.82+ |
| one thing | QUANTITY | 0.82+ |
| 15 years ago | DATE | 0.82+ |
| epyc | TITLE | 0.8+ |
| over 60 | QUANTITY | 0.79+ |
| amd produce | ORGANIZATION | 0.79+ |

Breaking Analysis: Broadcom, Taming the VMware Beast


 

>> From theCUBE studios in Palo Alto and Boston, bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> In the words of my colleague, CTO David Nicholson, Broadcom buys old cars, not to restore them to their original luster and beauty. Nope. They buy classic cars to extract the platinum that's inside the catalytic converter and monetize that. Broadcom's planned $61 billion acquisition of VMware will mark yet another new era and chapter for the virtualization pioneer, a mere seven months after finally getting spun out as an independent company by Dell. For VMware, this means a dramatically different operating model with financial performance and shareholder value creation as the dominant and perhaps the sole agenda item. For customers, it will mean a more focused portfolio, less aspirational vision pitches, and most certainly higher prices. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we'll share data, opinions and customer insights about this blockbuster deal and forecast the future of VMware, Broadcom and the broader ecosystem. Let's first look at the key deal points; it's been well covered in the press, but just for the record: $61 billion in a 50/50 cash and stock deal, resulting in a blended price of $138 per share, which is a 44% premium to the unaffected price, i.e. prior to the news breaking. Broadcom will assume 8 billion of VMware debt and promises that the acquisition will be immediately accretive and will generate 8.5 billion in EBITDA by year three. That's nearly 4 billion more in EBITDA than VMware's current performance today. In a classic Broadcom M&A approach, the company promises to delever debt and maintain investment grade ratings. They will rebrand their software business as VMware, which will now comprise about 50% of revenues. There's a 40-day go-shop and importantly, Broadcom promises to continue to return 60% of its free cash flow to shareholders in the form of dividends and buybacks. Okay, with that out of the way, we're going to get to the money slide that Broadcom shared on its investor call literally in a moment. Broadcom has more than 20 business units. Its CEO Hock Tan makes it really easy for his business unit managers to understand. Rule number one: you agreed to an operating plan with targets for revenue, growth, EBITDA, et cetera; hit your numbers consistently and we're good. You'll be very well compensated and life will be wonderful for you and your family. Miss the number, and we're going to have a frank and uncomfortable bottom line discussion. You'll have four, perhaps five quarters to turn your business around; if you don't, we'll kill it or sell it if we can. Rule number two: refer to rule number one. Hello, VMware, here's the money slide. I'll interpret the bullet points on the left for clarity. Your fiscal year 2022 EBITDA was 4.7 billion. By year three, it will be 8.5 billion. And we Broadcom have four knobs to turn with you, VMware, to help you get there. First knob, if it ain't recurring revenue with rubber stamp renewals, we're going to convert that revenue or kill it. Knob number two, we're going to focus R&D in the most profitable areas of the business, AKA expect the R&D budget to be cut. Number three, we're going to spend less on sales and marketing by focusing on existing customers. We're not going to lose money today and try to make it up many years down the road. And number four, we run Broadcom with 1% G&A. You will too. Any questions? Good.
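As a quick check on the deal arithmetic above, here is a minimal sketch. The unaffected share price is implied from the quoted 44% premium rather than taken from a filing, and the EBITDA figures are the ones cited above:

```python
# Illustrative arithmetic for the deal points above. The unaffected share price
# is implied from the quoted premium, not quoted directly in the discussion.
blended_price = 138.0          # $ per share
premium = 0.44                 # 44% premium to the unaffected price
implied_unaffected = blended_price / (1 + premium)
print(f"Implied unaffected price: ${implied_unaffected:.2f}")       # ~$95.83

ebitda_fy2022 = 4.7            # $B, VMware fiscal 2022 EBITDA
ebitda_target_y3 = 8.5         # $B, Broadcom's year-three target
print(f"Targeted EBITDA uplift: ${ebitda_target_y3 - ebitda_fy2022:.1f}B")  # ~$3.8B
```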
Now, just to give you a little sense of how Broadcom runs its business and how well run a company it is, let's do a little simple comparison with this financial snapshot. All we're doing here is taking the most recent quarterly earnings reports from Broadcom and VMware respectively. We take the quarterly revenue and multiply by 4x to get the revenue run rate, and then we calculate the ratios off of the most recent quarter's revenue. It's worth spending some time on this to get a sense of how profitable the Broadcom business actually is and what the spreadsheet gurus at Broadcom are seeing with respect to the possibilities for VMware. So combined, we're talking about a 40 plus billion dollar company. Broadcom is growing at more than 20% per year, whereas VMware's latest quarter showed a very disappointing 3% growth. Broadcom is mostly a hardware company, but its gross margin is in the high seventies. As a software company, of course VMware has higher gross margins, but FYI, Broadcom's software business, the remains of Symantec and the CA business they purchased, has 90% gross margin. But the eye-popper is operating margin. This is all non-GAAP, so it excludes things like stock based compensation, but Broadcom had 61% operating margin last quarter. This is insanely off the charts compared to VMware's 25%. Oracle's non-GAAP operating margin is 47%, and Oracle is an incredibly profitable company. Now the red box is where the cuts are going to take place. Broadcom doesn't spend much on marketing. It doesn't have to. Its SG&A is 3% of revenue versus 18% for VMware, and R&D spend is almost certainly going to get cut. The other eye-popper is free cash flow as a percentage of revenue, at 51% for Broadcom and 29% for VMware. 51%. That's incredible. And that, my dear friends, is why Broadcom, a company with just under 30 billion in revenue, has a market cap of 230 billion. Let's dig into the VMware portfolio a bit more and identify the possible areas that will be placed under the microscope by Hock Tan and his managers. The data from ETR's latest survey shows the net score or spending momentum across VMware's portfolio in this chart; net score essentially measures the net percent of customers that are spending more on a specific product or vendor. The yellow bar is the most recent survey and compares the April 22 survey data to April 21 and January of 22. Everything is down in the yellow from January, not surprising given the economic outlook and the change in spending patterns that we've reported. VMware Cloud on AWS remains the product in the ETR survey with the most momentum. It's the only offering in the portfolio with spending momentum above the 40% line, a level that we consider highly elevated. Unified Endpoint Management looks more than respectable, but that business is a rock fight with Microsoft. VMware Cloud is things like VMware Cloud Foundation, VCF, and VMware's cross cloud offerings. NSX came from the Nicira acquisition. Tanzu is not yet pervasive, and one wonders if VMware is making any money there. Server is ESX and vSphere and is the bread and butter. That is where Broadcom is going to focus. It's going to look at vSAN and NSX, which are software, probably profitable, and of course the other products, and see if the investments are paying off. If they are, Broadcom will keep them; if they are not, you can bet your socks they will be sold off or killed.
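Here is a minimal sketch of the back-of-the-envelope method used for that financial snapshot: annualize the latest quarter and compute the ratios against quarterly revenue. The inputs below are illustrative placeholders, not figures from either company's filings:

```python
# A minimal sketch of the run-rate and ratio arithmetic described above. The
# quarterly inputs are made-up illustrations, not reported results.
def run_rate_ratios(quarterly_revenue, gross_profit, operating_income, free_cash_flow):
    return {
        "revenue_run_rate": quarterly_revenue * 4,                    # annualized
        "gross_margin": gross_profit / quarterly_revenue,
        "operating_margin": operating_income / quarterly_revenue,
        "fcf_pct_of_revenue": free_cash_flow / quarterly_revenue,
    }

# Example with made-up numbers (in $B) just to show the mechanics.
print(run_rate_ratios(quarterly_revenue=8.0, gross_profit=6.0,
                      operating_income=4.9, free_cash_flow=4.1))
```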
Carbon Black is at the far right. VMware paid $2.1 billion for Carbon Black, and it's the lowest performer on this list in terms of net score or spending momentum. And that doesn't mean it's not profitable. It just doesn't have the momentum you'd like to see, so you can bet that is going to get scrutiny. Remember, VMware's growth has been under pressure for the last several years, so it's been buying companies, dozens of them. It bought AirWatch, bought Heptio, Carbon Black, Nicira, SaltStack, Datrium, Versedo, Bitnami, and on and on and on. Many of these were to pick up engineering teams. Some of them were to drive new revenue. Now this is definitely going to be scrutinized by Broadcom. So that helps explain why Michael Dell would sell VMware. And where does VMware go from here? It's got great core product. It's an iconic name. It's got an awesome ecosystem, fantastic distribution channel, but its growth is slowing. It's got limited developer chops in a world where developers and cloud native are all the rage. It's got a far flung R&D agenda going at war with a lot of different places. And it's increasingly fighting this multi front war with cloud companies, companies like Cisco, IBM Red Hat, et cetera. VMware's kind of becoming a heavy lift. It's a perfect acquisition target for Broadcom, and why the street loves this deal. And we titled this Breaking Analysis Taming the VMware Beast because VMware is a beast. It's ubiquitous. It's an epic software platform. EMC couldn't control it. Dell used it as a piggy bank, but really didn't change its operating model. Broadcom 100% will. Now one of the things that we get excited about is the future of systems architectures. We published a Breaking Analysis about a year ago talking about AWS's secret weapon with Nitro and its Annapurna custom silicon efforts. Remember, it acquired Annapurna for a measly $350 million. And we talked about how there's a new architecture and a new price performance curve emerging in the enterprise, driven by AWS and being followed by Microsoft, Google, Alibaba: a trend toward custom silicon with the Arm-based Nitro, which is AWS's hypervisor and NIC strategy, enabling processor diversity with things like Graviton and Trainium and other diverse processors, really diversifying away from x86, and how this leads to much faster product cycles, faster tape out, lower costs. And our premise was that everyone in the data center who is going to compete is going to need a Nitro to be competitive long term. And customers are going to gravitate toward the most economically favorable platform. And as we describe the landscape with this chart, we've updated this for this Breaking Analysis, and we'll come back to Nitro in a moment. This is a two dimensional graphic with net score or spending momentum on the vertical axis, and overlap, formerly known as market share, or presence within the survey, pervasiveness, on the horizontal axis. And we plot various companies and products, and we've inserted VMware's net score breakdown, the granularity in those colored bars on the bottom right. Net score is essentially the green minus the red, and a couple points on that. VMware in the latest survey has 6% new adoption. That's that lime green. It's interesting. The question Broadcom is going to ask is, how much does it cost you to acquire that 6% new? 32% of VMware customers in the survey are increasing spending, meaning they're increasing spending by 6% or more. That's the forest green. And the question Broadcom will dig into is what percent of that increased spend (chuckles) you're capturing is profitable spend? Whatever isn't profitable is going to be cut. Now, that 52% gray area, flat spending, that is ripe for the Broadcom picking. That is the fat middle, and those customers are locked and loaded for future rent extraction via perpetual renewals and price increases. Only 8% of customers are spending less, that's the pinkish color, and only 3% are defecting, that's the bright red. So very, very sticky profile. Perfect for Broadcom.
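Pulling together the breakdown just described, a minimal sketch of the net score arithmetic (green minus red), using the VMware percentages quoted above:

```python
# Net score: percentage of customers adding or increasing spend minus the
# percentage decreasing or defecting. Figures are the VMware breakdown quoted above.
new_adoption, increasing, flat, decreasing, defecting = 6, 32, 52, 8, 3

net_score = (new_adoption + increasing) - (decreasing + defecting)
print(f"Net score: {net_score}%")                 # 27%
print(f"Flat, renewal-ready base: {flat}%")       # the "fat middle"
```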
Now the rest of the chart lays out some of the other competitor names, and we've plotted many of the VMware products so you can see where they fit. They're all pretty respectable on the vertical axis, that's spending momentum. But what Broadcom wants is that core ESX vSphere base where we've superimposed the Broadcom logo. Broadcom doesn't care so much about spending momentum. It cares about profitability potential and then momentum. AWS and Azure, they're setting the pace in this business, in the upper right corner. Cisco, very huge presence in the data center, as does Intel; they're not in the ETR survey, but we've superimposed them. Now, Intel of course is in a dog fight with Nvidia, the Arm ecosystem, AMD, and don't forget China. You see Google Cloud Platform is in there. Oracle is also on the chart as well, somewhat lower on the vertical axis; it doesn't have that spending momentum, but it has a big presence. And it owns a cloud, as we've talked about many times, and it's highly differentiated. It's got a strategy that allows it to differentiate from the pack. It's very financially driven. It knows how to extract lifetime value. Safra Catz operates in many ways similar to what we're seeing from Hock Tan and company, different from a portfolio standpoint. Oracle's got the full stack, et cetera. So it's a different strategy, but very, very financially savvy. You can see IBM and IBM Red Hat in the mix, and then Dell and HP. I want to come back to that momentarily to talk about where value is flowing. And then we plotted Nutanix, which with Acropolis could suck up some vTax avoidance business. Now notice Symantec and CA: relatively speaking, in the ETR survey they have horrible spending momentum. As we said, Broadcom doesn't care. Hock Tan is not going for growth at the expense of profitability. So we fully expect VMware to come down on the vertical axis over time and go up on the profit scale. Of course, ETR doesn't measure the profitability here. Now back to Nitro. VMware has this thing called Project Monterey. It's essentially their version of Nitro and will serve as their future architecture, diversifying off x86 and accommodating alternative processors, and a much more efficient performance, price, and energy consumption curve. Now, one of the things that we've advocated for, we said this about Dell and others, including VMware, is to take a page out of AWS and start developing custom silicon to better integrate hardware and software and accelerate multi-cloud, or what we call supercloud, that layer above the cloud, not just running on individual clouds. So this is all about efficiency and simplicity to own this space. And we've challenged organizations to do that, because otherwise we feel like the cloud guys are just going to have consistently better costs, not necessarily price, but better cost structures. But it begs the question: what happens to Project Monterey? Hock Tan and Broadcom, they don't invest in something that is unproven and doesn't throw off free cash flow. If it's not going to pay off for years to come, they're probably not going to invest in it.
And yet Project Monterey could help secure VMware's future in not only the data center, but at the edge, and compete more effectively with cloud economics. So we think either Project Monterey is toast, or the VMware team will knock on the door of one of Broadcom's 20 plus business units and say, guys, what if we work together with you to develop a version of Monterey that we can use and sell to everyone? It'd be the arms dealer to everyone and be competitive with the cloud and other players out there, and create the de facto standard for data center performance and supercloud. I mean, it's not outrageously expensive to develop custom silicon. Tesla is doing it, for example. And Broadcom obviously is capable of doing it. It's got good relationships with semiconductor fabs. But I think this is going to be a tough sell to Broadcom, unless VMware can hide this in plain sight and make it profitable fast, like AWS most likely has with Nitro and Graviton. Then Project Monterey and our pipe dream of alternatives to Nitro in the data center could happen, but if it can't, it's going to be toast. Or maybe Intel or Nvidia will take it over, or maybe the Monterey team will spin out of VMware and do a Pensando-like deal and demonstrate the viability of this concept, and then Broadcom will buy it back in 10 years. Here's a double click on that previous data that we put in tabular form. It's how the data on that previous slide was plotted. I just want to give you the background data here. So net score, or spending momentum, is sorted on the left: it's sorted by net score in the left-hand chart, and that was the y-axis in the previous data set. And then Shared N, or presence in the data set, is the right-hand chart. In other words, it's sorted on the right-hand table. That rightmost column is Shared N, and you can see it's sorted top to bottom, and that was the x-axis on the previous chart. The point is, not many on the left-hand side are above the 40% line. VMware Cloud on AWS is; it's expensive, so it's probably profitable, and it's probably a keeper. We'll see about the rest of VMware's portfolio, like what happens to Tanzu, for example. On the right, we drew a red line, just arbitrarily, at those companies and products with more than a hundred mentions in the survey; everything but Tanzu from VMware makes that cut. Again, this is no indication of profitability here, and that's what's going to matter to Broadcom. Now let's take a moment to address the question of Broadcom as a software company. What the heck do they know about software, right? Well, they're not dumb over there and they know how to run a business, but there is a strategic rationale to this move beyond just managing portfolios and extracting rents and cutting R&D, et cetera, et cetera. Why, for example, isn't Broadcom going after, coming back to, Dell or HPE? It could pick them up for a lot less than VMware, and they've got way more revenue than VMware. Well, it's obvious, software's more profitable of course, and Broadcom wants to move up the stack, but there's a trend going on which Broadcom is very much in touch with. First, it sells to Dell and HPE and Cisco and all the OEMs, so it's not going to disrupt that. But this chart shows that the value is flowing away from traditional servers and storage and networking to two places: merchant silicon, which itself is morphing, Broadcom... we focus on the left-hand side of this chart.
Broadcom correctly believes that the world is shifting from a CPU centric center of gravity to a connectivity centric world. We've talked about this on theCUBE a lot. You should listen to Broadcom COO Charlie Kawwas speak about this. It's all that supporting infrastructure around the CPU where value is flowing, including of course alternative GPUs and XPUs and NPUs, et cetera, that are sucking the value out of the traditional x86 architecture, offloading some of the security and networking and storage functions that traditionally have been done in x86, which are part of the waste right now in the data center. This is that shifting dynamic of Moore's Law. Moore's Law is not keeping pace. It's slowing down. It's slower relative to some of the combinatorial factors, when you add up all the CPU and GPU and NPU and accelerators, et cetera. So we've talked about this a lot in Breaking Analysis episodes. So the value is shifting left within that middle circle, and it's shifting left within that left circle toward components other than CPU, many of which Broadcom supplies. And then you go back to the middle: value is shifting from that middle section, that traditional data center, up into hyperscale clouds, and then to the right toward infrastructure software to manage all that equipment in the data center and across clouds. And look, Broadcom is an arms dealer. They simply sell to everyone, locking up key vectors of the value chain, cutting costs and raising prices. It's a pretty straightforward strategy, but not for the faint of heart. And Broadcom has become pretty good at it. Let's close with the customer feedback. I spoke with ETR's Eric Bradley this morning. He and I both reached out to VMware customers that we know and got their input. And here's a little snapshot of what they said. I'll just read this: Broadcom will be looking to invest in the core and divest of any underperforming assets. Right on. It's just what we were saying. This doesn't bode well for future innovation. This is a CTO at a large travel company. Next comment: we're a Carbon Black customer. VMware didn't seem to interfere with Carbon Black, but now we're concerned about short term disruption to their tech roadmap, and long term, are they going to split and be sold off like Symantec was? This is a CISO at a large hospitality organization. Third comment, I got directly from a VMware practitioner, an IT director at a manufacturing firm. This individual said, moving off VMware would be very difficult for us. We have over 500 applications running on VMware, and it's really easy to manage. We're not going to move those into the cloud, and we're worried Broadcom will raise prices and just extract rents. The last comment we'll share is: Broadcom sees the cloud, data center and IoT as their next revenue source. The VMware acquisition provides them immediate virtualization capabilities to support a lightweight IoT offering. Big concern for customers is what technology they will invest in and innovate, and which will be stripped off and sold. Interesting. I asked David Floyer to give me a back of napkin estimate for the following question. I said, David, if you're running mission critical applications on VMware, how much would it increase your operating cost moving those applications into the cloud? Or how much would it save? And he said, Dave, VMware's really easy to run. It can run any application pretty much anywhere, and you don't need an army of people to manage it. All your processes are tied to VMware; you're locked and loaded. Move that into the cloud, and your operating cost would double, by his estimates.
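That back-of-napkin multiplier is exactly the kind of input a per-application cost comparison would use. A minimal sketch, with made-up app costs and per-app multipliers purely for illustration (only the roughly 2x figure comes from the estimate above):

```python
# Toy application portfolio assessment. Cost figures and per-app multipliers are
# illustrative assumptions; the ~2x multiplier for the VMware-heavy app reflects
# the back-of-napkin estimate above.
portfolio = {
    # app: (annual on-prem cost in $, estimated cloud cost multiplier)
    "core ERP on VMware": (1_000_000, 2.0),
    "reporting cluster":  (150_000, 1.6),
    "greenfield web app": (200_000, 0.7),
}

for app, (on_prem, multiplier) in portfolio.items():
    cloud = on_prem * multiplier
    verdict = "candidate to move" if cloud < on_prem else "keep on VMware"
    print(f"{app}: on-prem ${on_prem:,.0f}, est. cloud ${cloud:,.0f} -> {verdict}")
```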
Well, there you have it. Broadcom will pinpoint the optimal profit maximization strategy and raise prices to the point where customers say, you know what, we're still better off staying with VMware. And sadly, for many practitioners there aren't a lot of choices. You could move to the cloud and increase your cost for a lot of your applications. You could do it yourself with, say, Xen or OpenStack. Good luck with that. You could tap Nutanix. That will definitely work for some applications, but are you going to move your entire estate, your application portfolio, to Nutanix? It's not likely. So you're going to pay more for VMware, and that's the price you're going to pay for two decades of better IT. So our advice is get out ahead of this: do an application portfolio assessment. If you can move apps to the cloud for less, and you haven't yet, do it, start immediately. Definitely give Nutanix a call, but you're going to have to be selective as to what you actually can move. Forget porting to OpenStack or a do-it-yourself hypervisor; don't even go there. And start building new cloud native apps where it makes sense, and let the VMware stuff go into managed decline. Let certain apps just die through attrition, shift your development resources to innovation in the cloud, and build a brick wall around the stable apps with VMware. As Paul Maritz, the former CEO of VMware, said, "We are building the software mainframe." Now the marketing guys got a hold of that and said, Paul, stop saying that, but it's true. And with Broadcom's help, that day will soon be here. That's it for today. Thanks to Stephanie Chan, who helps research our topics for Breaking Analysis. Alex Myerson does the production, and he also manages the Breaking Analysis podcast. Kristen Martin and Cheryl Knight help get the word out on social, and thanks to Rob Hof, who is our editor in chief at siliconangle.com. Remember, these episodes are all available as podcasts; wherever you listen, just search Breaking Analysis podcast. Check out ETR's website at etr.ai for all the survey action. We publish a full report every week on wikibon.com and siliconangle.com. You can email me directly at david.vellante@siliconangle.com. You can DM me @DVellante or comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (upbeat music)

Published Date : May 28 2022

SUMMARY :


SENTIMENT ANALYSIS :

ENTITIES

| Entity | Category | Confidence |
| --- | --- | --- |
| David | PERSON | 0.99+ |
| Stephanie Chan | PERSON | 0.99+ |
| Cisco | ORGANIZATION | 0.99+ |
| Dave Vellante | PERSON | 0.99+ |
| Symantec | ORGANIZATION | 0.99+ |
| Rob Hof | PERSON | 0.99+ |
| Alex Myerson | PERSON | 0.99+ |
| April 22 | DATE | 0.99+ |
| HP | ORGANIZATION | 0.99+ |
| David Floyer | PERSON | 0.99+ |
| AWS | ORGANIZATION | 0.99+ |
| Dell | ORGANIZATION | 0.99+ |
| Oracle | ORGANIZATION | 0.99+ |
| HPE | ORGANIZATION | 0.99+ |
| Paul Maritz | PERSON | 0.99+ |
| Broadcom | ORGANIZATION | 0.99+ |
| VMware | ORGANIZATION | 0.99+ |
| Nvidia | ORGANIZATION | 0.99+ |
| Eric Bradley | PERSON | 0.99+ |
| April 21 | DATE | 0.99+ |
| NSX | ORGANIZATION | 0.99+ |
| IBM | ORGANIZATION | 0.99+ |
| Cheryl Knight | PERSON | 0.99+ |
| Dave | PERSON | 0.99+ |
| January | DATE | 0.99+ |
| $61 billion | QUANTITY | 0.99+ |
| 8.5 billion | QUANTITY | 0.99+ |
| $2.1 billion | QUANTITY | 0.99+ |
| Microsoft | ORGANIZATION | 0.99+ |
| Palo Alto | LOCATION | 0.99+ |
| EMC | ORGANIZATION | 0.99+ |
| Acropolis | ORGANIZATION | 0.99+ |
| Kristen Martin | PERSON | 0.99+ |
| 90% | QUANTITY | 0.99+ |
| 6% | QUANTITY | 0.99+ |
| 4.7 billion | QUANTITY | 0.99+ |
| Google | ORGANIZATION | 0.99+ |
| Hock Tan | ORGANIZATION | 0.99+ |
| 60% | QUANTITY | 0.99+ |
| 44% | QUANTITY | 0.99+ |
| 40 day | QUANTITY | 0.99+ |
| 61% | QUANTITY | 0.99+ |
| 8 billion | QUANTITY | 0.99+ |
| Michael Dell | PERSON | 0.99+ |
| 52% | QUANTITY | 0.99+ |
| 47% | QUANTITY | 0.99+ |

Pete Lumbis, NVIDIA & Alessandro Barbieri, Pluribus Networks


 

(upbeat music) >> Okay, we're back. I'm John Furrier with theCUBE, and we're going to go deeper into a deep dive into the unified cloud networking solution from Pluribus and NVIDIA. And we'll examine some of the use cases with Alessandro Barbieri, VP of product management at Pluribus Networks, and Pete Lumbis, the director of technical marketing at NVIDIA, remotely. Guys, thanks for coming on, appreciate it. >> Yeah, thanks a lot. >> I'm happy to be here. >> So a deep dive, let's get into the what and how. Alessandro, we heard earlier about the Pluribus and NVIDIA partnership and the solution you're working on together. What is it? >> Yeah, first let's talk about the what. What are we really integrating with the NVIDIA BlueField DPU technology? Pluribus has been shipping in volume, in multiple mission critical networks, its Netvisor ONE network operating system. It runs today on merchant silicon switches, and effectively it's a standards-based open network operating system for the data center. And the novelty about this operating system is that it integrates a distributed control plane to automate, in effect, an SDN overlay. This automation is completely open and interoperable and extensible to other types of clouds. It's not closed. And this is actually what we're now porting to the NVIDIA DPU. >> Awesome, so how does it integrate into NVIDIA hardware, and specifically how is Pluribus integrating its software with the NVIDIA hardware? >> Yeah, I think we leverage some of the interesting properties of the BlueField DPU hardware, which actually allows us to integrate our network operating system in a manner which is completely isolated and independent from the guest operating system. So the first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor layer or OS layer running on the host. Even more, we can also independently manage this network node, this switch on a NIC effectively, completely independently from the host. You don't have to go through the network operating system running on x86 to control this network node. So you truly have the experience, effectively, of a top of rack for virtual machines or a top of rack for Kubernetes pods, where, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, now we are connecting a VM virtual interface to a virtual interface on the switch on a NIC. And also as part of this integration, we put a lot of effort, a lot of emphasis, in accelerating the entire data plane for networking and security. So we are taking advantage of the NVIDIA DOCA API to program the accelerators, and you accomplish two things with that. Number one, you have much better performance than running the same network services on an x86 CPU. And second, this gives you the ability to free up, I would say, around 20 to 25% of the server capacity to be devoted either to additional workloads to run your cloud applications, or perhaps you can actually shrink the power footprint and compute footprint of your data center by 20% if you want to run the same number of compute workloads. So great efficiencies in the overall approach. >> And this is completely independent of the server CPU, right? >> Absolutely, there is zero code from Pluribus running on the x86. And this is why we think this enables a very clean demarcation between compute and network.
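To make that freed-up capacity figure concrete, here is a rough fleet-level sketch. The fleet size and per-server core count are assumptions for illustration; only the 20 to 25% range comes from the discussion above:

```python
# Rough fleet-level arithmetic for offloading networking and security from host CPUs
# to DPUs. Fleet size and cores per server are illustrative assumptions; the 20-25%
# reclaim range is the figure quoted above.
servers = 500                    # assumed fleet size
cores_per_server = 64            # assumed host core count

for reclaimed_fraction in (0.20, 0.25):
    freed_cores = servers * cores_per_server * reclaimed_fraction
    equivalent_servers = freed_cores / cores_per_server
    print(f"{reclaimed_fraction:.0%} reclaimed -> {freed_cores:,.0f} cores "
          f"(~{equivalent_servers:.0f} servers' worth of application capacity)")
```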
>> So Pete, I got to get you in here. We heard that the DPU enables cleaner separation of DevOps and NetOps. Can you explain why that's important? Because everyone's talking DevSecOps, right? Now you've got NetSecOps. This separation, why is this clean separation important? >> Yeah, I think it's a pragmatic solution, in my opinion. We wish the world was all kind of rainbows and unicorns, but it's a little messier than that. I think with a lot of the DevOps stuff and that mentality and philosophy, there's a natural fit there. You have applications running on servers, so you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And I think that we in the networking industry have gotten closer together, but there's still a gap, there's still some distance. And I think that distance isn't going to be closed, and so, again, it comes down to pragmatism. And I think one of my favorite phrases is, look, good fences make good neighbors. And that's what this is. >> Yeah, and it's a great point, 'cause DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you're talking about that part of the stack under the covers, under the hood, if you will. This is a super important distinction, and this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. >> Yeah, exactly. And I think that's one place where the policy, the security, the zero trust aspect of this comes in, right? If you get it wrong on that network side, all of a sudden you can totally open up those capabilities. And so security's part of that. But the other part is thinking about this at scale, right? So we're taking one top of rack switch and adding up to 48 servers per rack. And so that ability to automate, orchestrate and manage at scale becomes absolutely critical. >> Alessandro, this is really the why we're talking about here, and this is scale. And again, getting it right. If you don't get it right, you're going to be really kind of up, you know what? So this is a huge deal. Networking matters, security matters, automation matters, DevOps, NetOps, all coming together, clean separation. Help us understand how this joint solution with NVIDIA fits into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now. >> Yeah, absolutely. So I think here with this solution we're attacking two major problems in cloud networking. One is operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about the first: what are we really unifying? If we're unifying something, something must be at least fragmented or disjointed. And what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers. You build your IP Clos fabric, leaf and spine topologies. This is actually a well understood problem, I would say. There are multiple vendors with, let's say, similar technologies, very well standardized, very well understood, and almost a commodity, I would say, building an IP fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint.
Those services are actually now moved into the compute layer, where cloud builders have to instrument a separate network virtualization layer, where they deploy segmentation and security closer to the workloads. And this is where the complications arise. This high value part of the cloud network is where you have a plethora of options that don't talk to each other, and they're very dependent on the kind of hypervisor or compute solution you choose. For example, the networking APIs between an ESXi environment, or Hyper-V, or Xen, are completely disjointed. You have multiple orchestration layers. And then when you throw in also Kubernetes in this type of architecture, you are introducing yet another level of networking. And when Kubernetes runs on top of VMs, which is a prevalent approach, you actually are stuck with multiple networks on the compute layer that eventually run on the physical fabric infrastructure. Those are all ships in the night, effectively, right? They operate as completely disjointed, and we're trying to tackle this problem first with the notion of a unified fabric which is independent from any workload, whether this fabric spans a switch, which can be connected to a bare metal workload, or spans all the way inside the DPU, where you have your multi hypervisor compute environment. It's one API, one common network control plane, and one common set of segmentation services for the network. That's problem number one. >> It's interesting, I hear you talking and I hear one network among different operating models. Reminds me of the old serverless days. There's still servers, but they call it serverless. Is there going to be a term network-less? Because at the end of the day it should be one network, not multiple operating models. This is a problem that you guys are working on, is that right? I'm just joking, serverless and network-less, but the idea is it should be one thing. >> Yeah, effectively what we're trying to do is we're trying to recompose this fragmentation in terms of network operation across physical networking and server networking. Server networking is where the majority of the problems are, because as much as you have standardized the ways of building physical networks and cloud fabrics with IP protocols and the internet, you don't have that sort of operational efficiency at the server layer. And this is what we're trying to attack first with this technology. The second aspect we're trying to attack is how we distribute security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services, or even encryption. Those are all capabilities enabled by the BlueField DPU technology, and we can actually integrate those capabilities directly into the network fabric, limiting dramatically, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical. That is typically the way people today segment and secure the traffic in the cloud. >> Awesome. Pete, all kidding aside about network-less and serverless, kind of a fun play on words there, the network is one thing, it's basically distributed computing, right? So I'd love to get your thoughts about this distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail the depth of why a DPU-based approach is better than alternatives? >> Yeah, I think what's beautiful and kind of what the DPU brings that's new to this model is a completely isolated compute environment inside.
So it's the, yo dog, I heard you like a server, so I put a server inside your server. And so we provide ARM CPUs, memory and network accelerators inside, and that is completely isolated from the host. The actual x86 host just thinks it has a regular NIC in there, but you actually have this full control plane thing. It's just like taking your top of rack switch and shoving it inside of your compute node. And so you have not only this separation within the data plane, but you have this complete control plane separation, so you have this element that the network team can now control and manage. But we're taking all of the functions we used to do at the top of rack switch and we're distributing them now. And as time has gone on, we've struggled to put more and more and more into that network edge. And the reality is the network edge is the compute layer, not the top of rack switch layer. And so that provides this phenomenal enforcement point for security and policy. And I think outside of today's solutions around virtual firewalls, the other option is centralized appliances. And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that NVIDIA's good enough, or we hope that the VXLAN tunnel's good enough. And we can't actually apply more advanced techniques there, because we can't financially afford that appliance to see all of the traffic. And now that we have a distributed model with this accelerator, we could do it. >> So what's in it for the customer, real quick? And I think this is an interesting point, you mentioned policy. Everyone in networking knows policy is just a great thing. And you hear it being talked about up the stack as well, when you start getting to orchestrating microservices and whatnot, all that good stuff going on there, containers and whatnot, and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale, more edge, deployment flexibility relative to security policies and application enablement. What's the customer get out of this architecture? What's the enablement? >> It comes down to taking, again, the capabilities that were in that top of rack switch and distributing them down. So that makes for simplicity, smaller blast radius for failures, smaller failure domains; maintenance on the networks and the systems becomes easier. Your ability to integrate across workloads becomes infinitely easier. And again, we always want to kind of separate each one of those layers, so just as in, say, a VXLAN network, my leaf and spine don't have to be tightly coupled together. I can now do this at a different layer, and so you can run a DPU with any networking in the core there. And so you get this extreme flexibility. You can start small, you can scale large. To me the possibilities are endless. >> It's a great security control plane. Really flexibility is key, and also being situationally aware of any kind of threats or new vectors or whatever's happening in the network. Alessandro, this is huge upside, right? You've already identified some successes with some customers on your early field trials. What are they doing and why are they attracted to the solution? >> Yeah, I think the response from customers has been the most encouraging and exciting for us to sort of continue and work and develop this product. And we have actually learned a lot in the process. We talked to tier two, tier three cloud providers.
We talked to SPs, soft Telco types of networks, as well as large enterprise customers. In one particular case... well, let me call out a couple of examples here just to give you a flavor. There is a cloud provider in Asia who is actually managing a cloud where they're offering services based on multiple hypervisors. They have native services based on Xen, but they also on-ramp into the cloud workloads based on ESXi and KVM, depending on what the customer picks from the menu. And they now have the problem of orchestrating, through their orchestrator, or integrating with Xen Center, with vSphere, with OpenStack, to coordinate these multiple environments. And in the process, to provide security, they actually deploy virtual appliances everywhere, which has a lot of cost and complication and eats into the server CPU. The promise that they saw in this technology, which they actually call game changing, is to remove all this complexity, having a single network and distributing the micro-segmentation service directly into the fabric. And overall they're hoping to get out of it a tremendous OPEX benefit and overall operational simplification for the cloud infrastructure. That's one important use case. Another global enterprise customer is running both ESXi and Hyper-V environments, and they don't have a solution to do micro-segmentation consistently across hypervisors. So again, micro-segmentation is a huge security driver; it looks like it's a recurring theme talking to most of these customers. And in the Telco space, we're working with a few Telco customers on the EFT program, where the main goal is actually to harmonize network operation. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex. It is frankly also slow and inefficient. And then they have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the Telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability offered by the BlueField DPUs. Those are just some examples. >> That was a great use case. A lot more potential I see with the unified cloud networking. Great stuff, Pete, shout out to you, 'cause at NVIDIA we've been following your success for a long time and continuing to innovate as cloud scales, and Pluribus with unified networking kind of bringing it to the next level. Great stuff, great to have you guys on, and again, software keeps driving the innovation, and again, networking is just a part of it, and it's the key solution. So I got to ask both of you to wrap this up. How can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem, they're trying to think about multiple clouds, they're trying to think about unification around the network and giving more security, more flexibility to their teams. How can people learn more? >> Yeah, so Alessandro and I have a talk at the upcoming NVIDIA GTC conference. So it's the week of March 21st through 24th. You can go and register for free at nvidia.com/gtc. You can also watch recorded sessions if you end up watching this on YouTube a little bit after the fact. And we're going to dive a little bit more into the specifics and the details and what we're providing in the solution. >> Alessandro, how can people learn more? >> Yeah, absolutely.
People can go to the Pluribus website, www.pluribusnetworks.com/eft, and they can fill out the form and contact Pluribus to either learn more or actually sign up for the early field trial program, which starts at the end of April. >> Okay, well, we'll leave it there. Thank you both for joining, appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE, thanks for watching. (upbeat music)

Published Date : Mar 16 2022

SUMMARY :


SENTIMENT ANALYSIS :

ENTITIES

| Entity | Category | Confidence |
| --- | --- | --- |
| Alessandro Barbieri | PERSON | 0.99+ |
| Alessandro | PERSON | 0.99+ |
| Asia | LOCATION | 0.99+ |
| NVIDIA | ORGANIZATION | 0.99+ |
| Pluribus | ORGANIZATION | 0.99+ |
| Telco | ORGANIZATION | 0.99+ |
| Pluribus Networks | ORGANIZATION | 0.99+ |
| John Furrier | PERSON | 0.99+ |
| 20% | QUANTITY | 0.99+ |
| Pete Lumbis | PERSON | 0.99+ |
| First | QUANTITY | 0.99+ |
| ESXi | TITLE | 0.99+ |
| March 21st | DATE | 0.99+ |
| ESG | ORGANIZATION | 0.99+ |
| Pete | PERSON | 0.99+ |
| www.pluribusnetworks.com/eft | OTHER | 0.99+ |
| second aspect | QUANTITY | 0.99+ |
| first | QUANTITY | 0.99+ |
| one | QUANTITY | 0.99+ |
| 24th | DATE | 0.99+ |
| both | QUANTITY | 0.99+ |
| One | QUANTITY | 0.99+ |
| two things | QUANTITY | 0.98+ |
| one network | QUANTITY | 0.98+ |
| DevOps | TITLE | 0.98+ |
| end of April | DATE | 0.98+ |
| second | QUANTITY | 0.97+ |
| vSphere | TITLE | 0.97+ |
| Soft Telco | ORGANIZATION | 0.97+ |
| Kubernetes | TITLE | 0.97+ |
| today | DATE | 0.97+ |
| YouTube | ORGANIZATION | 0.97+ |
| tier three | QUANTITY | 0.96+ |
| nvidia.com/gtc | OTHER | 0.96+ |
| two major problems | QUANTITY | 0.95+ |
| Zen | TITLE | 0.94+ |
| around 20, 25% | QUANTITY | 0.93+ |
| zero code | QUANTITY | 0.92+ |
| each one | QUANTITY | 0.92+ |
| X86 | COMMERCIAL_ITEM | 0.92+ |
| OpenStack | TITLE | 0.92+ |
| NetOps | TITLE | 0.92+ |
| single network | QUANTITY | 0.92+ |
| ARM | ORGANIZATION | 0.91+ |
| one common set | QUANTITY | 0.89+ |
| one API | QUANTITY | 0.88+ |
| BlueField | ORGANIZATION | 0.87+ |
| one important use case | QUANTITY | 0.86+ |
| zero trust | QUANTITY | 0.86+ |
| tier two | QUANTITY | 0.85+ |
| Hyper-V | TITLE | 0.85+ |
| one common network control plane | QUANTITY | 0.83+ |
| BlueField | OTHER | 0.82+ |
| Number one | QUANTITY | 0.81+ |
| 48 servers | QUANTITY | 0.8+ |

Changing the Game for Cloud Networking | Pluribus Networks


 

>> Everyone wants a cloud operating model. Since the introduction of the modern cloud last decade, the entire technology landscape has changed. We've learned a lot from the hyperscalers, especially from AWS. Now, one thing is certain in the technology business: it's so competitive that if a faster, better, cheaper idea comes along, the industry will move quickly to adopt it. They'll add their unique value and then they'll bring solutions to the market. And that's precisely what's happening throughout the technology industry because of cloud. And one of the best examples is Amazon's Nitro. That's AWS's custom-built hypervisor that delivers on the promise of more efficiently using resources and expanding things like processor optionality for customers. It's a secret weapon for Amazon. As we wrote last year, every infrastructure company needs something like Nitro to compete. Why do we say this? Well, Wikibon, our research arm, estimates that nearly 30% of CPU cores in the data center are wasted. They're doing work that they weren't designed to do well, specifically offloading networking, storage, and security tasks. So if you can eliminate that waste, you can recapture dollars that drop right to the bottom line. That's why every company needs a Nitro-like solution.
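A rough sketch of the dollars behind that waste estimate. Fleet size, core counts, and cost per core are illustrative assumptions; only the roughly 30% figure comes from the estimate above:

```python
# Back-of-napkin dollars behind the "wasted cores" estimate. Fleet size, cores per
# server, and cost per core-year are illustrative assumptions; the ~30% waste figure
# is the estimate cited above.
servers = 1_000
cores_per_server = 64
cost_per_core_year = 150.0        # $ (assumed fully loaded infrastructure cost)
wasted_fraction = 0.30

wasted_cores = servers * cores_per_server * wasted_fraction
print(f"Wasted cores: {wasted_cores:,.0f}")                                   # 19,200
print(f"Recapturable spend: ${wasted_cores * cost_per_core_year:,.0f} per year")
```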
What challenges do cloud operators have, Mike? Let's get into it.
>> Yeah, the challenges we're looking at are for non-hyperscalers: enterprises, governments, tier-two service providers, cloud service providers. The first mandate for them is to become as agile as a hyperscaler, so they need to be able to deploy services and security policies quickly, and they need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Ultimately, they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. We're seeing a growth in cyberattacks; it's not slowing down, it's only getting worse, and solving for this security problem across clouds is absolutely critical. The way to do it is to move security out to the host.
>> Okay. With that goal in mind, what's the Pluribus vision? How does this tie together?
>> Yeah. So basically what we see is that this demands a new architecture, and that new architecture has four tenets. The first tenet is unified and simplified cloud networks. If you look at cloud networks today, there are discrete, bespoke cloud networks per hypervisor, per private cloud, edge cloud, public cloud, and each of the public clouds has different networks. That needs to be unified. If we want these folks to be able to be agile, they need to be able to issue a single command or instantiate a security policy across all those locations, with one command, and not have to go to each one. The second tenet is, like I mentioned, distributed security: distributed security without compromise, extended out to the host, is absolutely critical. So micro-segmentation and distributed firewalls. But it doesn't stop there. They also need pervasive visibility.
>> It's sort of like with security: you can't protect what you can't see. So you need visibility everywhere. The problem is that visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, and tap aggregation infrastructure, and that really needs to be built into this unified network I'm talking about. And the last thing is automation. All of this needs to be SDN enabled. This is related to my comment about abstraction: abstract the complexity of all of these discrete networks, whatever's down there in the physical layer. I don't want to see it, I want to abstract it, I want to define things in software, but I do want to leverage the power of hardware to accelerate that. So that's the fourth tenet: SDN automation.
>> Mike, we've been talking on theCUBE a lot about this architectural shift, and customers are looking at this. This is a big part of everyone who's looking at cloud operations next gen. How do we get there? How do customers get this vision realized?
>> That's a great question, and I appreciate the tee-up. We're here today for that reason. We're introducing two things today. The first is a unified cloud networking vision, and that is a vision of where Pluribus is headed with our partners like Nvidia long term. It's about deploying a common operating model, SDN enabled, SDN automated, hardware accelerated, across all clouds, and whether that's underlay or overlay, switch or server, any hypervisor infrastructure, containers, any workload, it doesn't matter. That's ultimately where we want to get, and that's what we talked about earlier.
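To make the point about issuing a single command across every location concrete, here is a minimal sketch of what a declarative, fabric-wide segment policy could look like. The SegmentPolicy and FabricClient names are assumptions for illustration only, not Pluribus or Nvidia APIs.

```python
# Illustrative only: a hypothetical declarative policy pushed once and
# rendered everywhere the fabric reaches (switches, DPUs, private and
# public cloud sites). FabricClient and SegmentPolicy are invented names.

from dataclasses import dataclass, field

@dataclass
class SegmentPolicy:
    name: str
    match_tag: str                                  # workloads carrying this tag join the segment
    allow_to: list = field(default_factory=list)    # (tag, port) pairs that may be reached
    default_action: str = "deny"                    # zero-trust default for east-west traffic

class FabricClient:
    """Stand-in for a unified fabric controller spanning all sites."""
    def __init__(self, sites):
        self.sites = sites

    def apply(self, policy: SegmentPolicy):
        # One logical operation; the controller fans it out to every
        # enforcement point (ToR switch or DPU) behind the scenes.
        for site in self.sites:
            print(f"[{site}] installed segment '{policy.name}' "
                  f"(default {policy.default_action}, {len(policy.allow_to)} allow rules)")

fabric = FabricClient(sites=["dc-east", "dc-west", "edge-pop-12", "public-cloud-vpc"])
fabric.apply(SegmentPolicy(
    name="payments",
    match_tag="app=payments",
    allow_to=[("app=frontend", 443), ("app=ledger-db", 5432)],
))
```

The shape of the operation is what matters here: one declarative object and one apply call that a controller fans out to every enforcement point, rather than a separate procedure per cloud and per hypervisor.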
The first step in that vision is what we call the unified cloud fabric, and this is the next generation of our Adaptive Cloud Fabric. What's nice about this is we're not starting from scratch. We have an award-winning Adaptive Cloud Fabric product that is deployed globally, and in particular we're very proud of the fact that it's deployed in over a hundred tier-one mobile operators as the network fabric for their 4G and 5G virtualized cores. We know how to build carrier-grade networking infrastructure. What we're doing now, to realize this next-generation unified cloud fabric, is extending from the switch to this Nvidia Bluefield-2 DPU. We know there's a...
>> Hold that up real quick. That's a good prop. That's the Nvidia Bluefield.
>> It's the Nvidia Bluefield-2 DPU, a data processing unit. And what we're doing, fundamentally, is extending our SDN-automated fabric, the unified cloud fabric, out to the host. But it does take processing power, so we knew we didn't want to implement that running on the CPU, which is what some other companies do, because it consumes revenue-generating CPUs from the application. A DPU is a perfect way to implement this, and we knew that Nvidia was the leader with this Bluefield-2. So that is the first step in getting to realizing this vision.
>> I mean, Nvidia has always been powering some great workloads with GPUs. Now you've got DPUs and networking, and Nvidia is here. What is the relationship with Pluribus? How did that come together? Tell us the story.
>> Yeah. So we've been working with Pluribus for quite some time. I think the last several months were really when it came to fruition, in terms of what Pluribus is trying to build and what Nvidia has. We have this concept of a Bluefield data processing unit, which, if you think about it, conceptually does really three things: offload, accelerate, and isolate. Offload your workloads from your CPU to your data processing unit, infrastructure workloads that is. Accelerate: there's a bunch of acceleration engines, so you can run infrastructure workloads much faster than you would otherwise. And then isolation: you have this nice security isolation between the data processing unit and your other CPU environment, so you can run completely isolated workloads directly on the data processing unit. We introduced this a couple of years ago, and we've been talking to the Pluribus team for quite some months now.
And I think really the combination of what Pluribus is trying to build, and what they've developed around this unified cloud fabric, fits really nicely with the DPU: running that on the DPU and extending it from your physical switch all the way to your host environment, specifically on the data processing unit. So if you think about what's happening as you add data processing units to your environment, and we believe every server, over time, is going to have data processing units, you'll now have to manage that complexity from the physical network layer to the host layer.
And so what Pluribus is really trying to do is extend the network fabric from the switch to the host, and really have that single pane of glass for network operators to be able to configure, provision, and manage all of the complexity of the network environment.
>> So that's really how the partnership truly started: it started with extending the network fabric, and now we're also working with them on security. So if you take that concept of isolation and security isolation, what Pluribus has within their fabric is the concept of micro-segmentation. And now you can take that, extend it to the data processing unit, and really have isolated, micro-segmented workloads, whether it's bare metal, cloud-native environments, virtualized environments, public cloud, private cloud, or hybrid cloud. So it really is a magical partnership between the two companies, with their unified cloud fabric running on the DPU.
>> You know, what I love about this conversation is that it reminds me of when you have these changing markets: the product gets pulled out of the market, and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you, how do you guys differentiate? What sets this apart, and what's in it for the customer?
>> Yeah. So I mentioned three things in terms of the value of what Bluefield brings: offloading, accelerating, and isolating. Those are the key core tenets of Bluefield. In terms of the differentiation, we're really a robust platform for innovation. We introduced Bluefield-2 last year, and we're introducing Bluefield-3, our next generation of Bluefield: it will have 5x the Arm compute capacity, 400-gig line-rate acceleration, and 4x better crypto acceleration. So it will be remarkably better than the previous generation, and we'll continue to innovate and add chips to our portfolio every 18 months to two years. That's one of the key areas of differentiation. The other is that if you look at Nvidia, what we're known for is really our AI, our artificial intelligence software, as well as our GPUs.
>> So you look at artificial intelligence, and the combination of artificial intelligence plus data processing really creates faster, more efficient, secure AI systems from the core of your data center all the way out to the edge. And so with Nvidia we have these converged accelerators, where we've combined the GPU, which does all your AI processing, with your data processing on the DPU. So we have this really nice convergence in that area. And I would say the third area is really around our developer environment. One of our key motivations at Nvidia is to have our partner ecosystem embrace our technology and build solutions around it. So with the DPU we've created an SDK, an open SDK called DOCA, for our partners to really build and develop solutions using Bluefield and all the accelerated libraries that we expose through DOCA. And so part of our differentiation is really building this open ecosystem for our partners to take advantage of and build solutions around our technology.
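As a rough picture of the offload, accelerate, and isolate pattern just described, here is a sketch of what programming flow rules down to the card can look like from the host's point of view. DpuFlowTable and its methods are invented for this sketch; this is not the DOCA API, which exposes equivalent functionality through its own C libraries.

```python
# Illustrative sketch of the offload pattern: the host declares what it
# wants (match + action) and the rule is installed on the DPU, so the x86
# CPU never touches matching packets again. All names here are invented.

class DpuFlowTable:
    def __init__(self, device: str):
        self.device = device
        self.rules = []

    def add_rule(self, match: dict, action: str, priority: int = 100):
        rule = {"match": match, "action": action, "priority": priority}
        self.rules.append(rule)
        # In a real system this call would hand the rule to the DPU's
        # embedded Arm cores and hardware accelerators, not keep it in Python.
        return rule

flows = DpuFlowTable(device="bluefield2-0")
flows.add_rule({"proto": "tcp", "dst_port": 4420}, "accelerate")   # fast path for storage traffic
flows.add_rule({"vlan": 300}, "mirror_to_collector")               # visibility tap
flows.add_rule({}, "drop", priority=1)                             # default deny, checked last
```

The three rules mirror the framing above: approved traffic gets a hardware fast path, a copy can be mirrored for visibility, and anything unmatched is dropped on the DPU before it ever reaches the host CPU.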
>> You know, what's exciting is that when I hear you talk, you realize there's no one general-purpose network anymore. Everyone has their own super environment, a Supercloud, or these new capabilities. They can really craft their own custom environment at scale with easy tools. And again, this is the new architecture, Mike, you were talking about. How do customers run this effectively and cost-effectively, and how do people migrate?
>> Yeah, I think that is the key question, right? So we've got this beautiful architecture. Amazon Nitro is a good example of a SmartNIC architecture that has been successfully deployed, but enterprises and tier-two service providers and tier-one service providers and governments are not Amazon, right? So they need to migrate there, and they need this architecture to be cost-effective, and that's super key. The reality is that DPUs are moving fast, but they're not going to be deployed everywhere on day one. Some servers will have DPUs right away, some servers will have them in a year or two, and then there are devices that may never have DPUs: IoT gateways, or legacy servers, even mainframes. So that's the beauty of a solution that creates a fabric across both the switch and the DPU, right?
>> And by leveraging the Nvidia Bluefield DPU, what we really like about it is that it's open, and that drives cost efficiencies. And then, with this architectural approach, effectively you get a unified solution across switch and DPU, workload independent, no matter what hypervisor it is, with integrated visibility and integrated security. That can create tremendous cost efficiencies and really extract a lot of the expense from the network, from a capital perspective as well as from an operational perspective, because now I have an SDN-automated solution where I'm literally issuing a command to deploy a network service, or to create or deploy a security policy, and it's deployed everywhere automatically, saving the network operations team and the security operations team time.
>> All right, so let me rewind that, because that's super important. I've got the unified cloud architecture, I'm the customer guy, and it's implemented. What's the value again? Take me through the value to me. I have a unified environment; what's the value?
>> Yeah. So there are a few pieces of value. The first piece of value is that I'm creating this clean demarc. I'm taking networking to the host, and like I mentioned, we're not running it on the CPU. In implementations that run networking on the CPU, there's some conflict between the DevOps team, who own the server, and the NetOps team, who own the network, because they're installing software on the CPU, stealing cycles from what should be revenue-generating CPUs. So by terminating the networking on the DPU, we create this really clean demarc. The DevOps folks are happy because they don't necessarily have the skills to manage networking, and they don't necessarily want to spend the time managing networking; and they've got their network counterparts, the NetOps team, who are also happy because they want to control the networking.
>> And now we've got this clean demarc where the DevOps folks get the services they need and the NetOps folks get the control and agility they need. So that's a huge value. The next piece of value is distributed security. This is essential. I mentioned earlier pushing out micro-segmentation and distributed firewalls basically at the application level, where I create these small segments on a per-application basis. So if a bad actor does penetrate the perimeter firewall, they're contained once they get inside, because the worst thing is a bad actor penetrating a perimeter firewall and then being able to go wherever they want and wreak havoc. And so that's why this is so essential. And the next benefit, obviously, is this unified networking operating model: an operating model across switch and server, underlay and overlay, workload agnostic, making the life of the NetOps teams much easier so they can focus their time on strategy instead of spending an afternoon deploying a single VLAN, for example.
>> Awesome. And I think also, from my standpoint, perimeter security is still out there, the firewall still exists, but the perimeter is being breached all the time, so you have to have this new security model. And I think the other thing you mentioned, the separation from DevOps, is cool, because infrastructure as code is about making the developers agile and building security in from day one. So this policy aspect is huge, new control points. I think you guys have a new architecture that enables the security to be handled more flexibly.
>> Right.
>> That seems to be the killer feature here.
>> Right. Yeah, if you look at the data processing unit, I think one of the great things about this new architecture is that it's really the foundation for zero trust. Like you talked about, the perimeter is getting breached, and so now each and every compute node has to be protected. And I think that's what you see with the partnership between Pluribus and Nvidia: the DPU is really the foundation of zero trust, and Pluribus is building on that vision by allowing micro-segmentation and being able to protect each and every compute node as well as the underlying network.
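A small sketch of what the per-application segments and default-deny behavior described here could look like as data; the rule format and evaluation below are assumptions for illustration, not the product's actual policy model.

```python
# Illustrative east-west micro-segmentation check: default deny between
# workloads, with a tiny allow-list per application, so a compromised host
# is contained instead of free to roam laterally.

ALLOW = {
    ("web", "api"): {8443},
    ("api", "orders-db"): {5432},
    ("monitoring", "*"): {9100},      # metrics endpoint reachable from monitoring only
}

def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    for (src, dst), ports in ALLOW.items():
        if src == src_segment and dst in (dst_segment, "*") and port in ports:
            return True
    return False  # zero-trust default

assert is_allowed("web", "api", 8443)
assert not is_allowed("web", "orders-db", 5432)   # breached web tier still can't reach the DB
assert not is_allowed("api", "web", 22)           # no lateral SSH
```

In the architecture being discussed, a check like this would be enforced on the DPU at each host, so containment happens next to the workload rather than at a distant perimeter.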
>> This is super exciting. This is an illustration of how the market's evolving: architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I've got to ask how you guys go to market together. Mike, start with you. What does the relationship look like in the go-to-market with Nvidia?
>> Sure. I mean, we're super excited about the partnership; obviously, we're here together. We think we've got a really good solution for the market, so we're jointly marketing it. And obviously we appreciate that Nvidia is open; that's in our DNA, we're about open networking. They've got other ISVs who are going to run on Bluefield too, and we're probably going to run on other DPUs in the future. But right now we feel like we're partnered with the number one provider of DPUs in the world, and we're super excited about making a splash with it.
>> And you've got the hot product.
>> Yeah. So Bluefield-2, as I mentioned, was GA last year, and we now also have the converged accelerator. So I talked about artificial intelligence: artificial intelligence with the Bluefield DPU, all of that put together on a converged accelerator. The nice thing there is that if you have an artificial intelligence workload and an infrastructure workload, you can run them separately on the same platform, or you can actually run artificial intelligence applications on the Bluefield itself. That's what the converged accelerator really brings to the table, and that's available now. Then we have Bluefield-3, which will be available late this year, and I talked about how much better that next generation of Bluefield is in comparison to Bluefield-2. So we will see Bluefield-3 shipping later this year. And then there's our software stack, which I talked about, called DOCA. We're on our second version, DOCA 1.2, and we're releasing DOCA 1.3 in about two months from now. That's really our open ecosystem framework, allowing you to program the Bluefields. We have all of our acceleration libraries and security libraries packed into this SDK called DOCA, and it really gives that simplicity to our partners to be able to develop on top of Bluefield. As we add new generations of Bluefield, next year we'll have another version and so on, DOCA is really that unified layer that allows Bluefield to be both forwards compatible and backwards compatible. Partners only really have to think about writing to that SDK once, and then it automatically works with future generations of Bluefields. So that's the nice thing around DOCA. And then in terms of our go-to-market model, we're working with every major OEM, so later this year you'll see major server manufacturers releasing Bluefield-enabled servers. So, more to come.
>> Awesome. Save money, make it easier, more capabilities, more workload power. This is the future of cloud operations.
>> Yeah. And one thing I'll add is that we have a number of customers, as you'll hear in the next segment, that are already signed up and will be working with us in our early field trial starting late April, early May. We are accepting registrations: you can go to www.pluribusnetworks.com/eft if you're interested in signing up to be part of our field trial and provide feedback on the product.
>> Awesome innovation in networking. Thanks so much for sharing the news. Really appreciate it. Thanks so much. Okay, in a moment we'll be back to look deeper into the product, the integration, security, zero trust, and use cases. You're watching theCUBE, the leader in enterprise tech coverage.
>> Cloud networking is complex and fragmented, slowing down your business. How can you simplify and unify your cloud networks to increase agility and business velocity?
>> Pluribus unified cloud networking provides a unified, simplified, and agile network fabric across all clouds. It brings the simplicity of a public cloud operating model to private clouds, dramatically reducing complexity and improving agility, availability, and security. Now enterprises and service providers can increase their business velocity and delight customers in the distributed multi-cloud era. We achieve this with a new approach to cloud networking: the Pluribus unified cloud fabric.
This open, vendor-independent network fabric unifies networking and security across distributed clouds. The first step is extending the fabric to servers equipped with data processing units, unifying the fabric across switches and servers, and it doesn't stop there. The fabric is unified across underlay and overlay networks, and across all workloads and virtualization environments. The unified cloud fabric is optimized for seamless migration to this new distributed architecture, leveraging the power of the DPU for application-level micro-segmentation, distributed firewall, and encryption, while still supporting those servers and devices that are not equipped with a DPU. Ultimately, the unified cloud fabric extends seamlessly across distributed clouds, including central, regional, and edge private clouds and public clouds. The unified cloud fabric is a comprehensive network solution that includes everything you need for cloud networking: built-in SDN automation, distributed security without compromises, and pervasive wire-speed visibility and application insight, available on your choice of open networking switches and DPUs, all at the lowest total cost of ownership. The end result is a dramatically simplified unified cloud networking architecture that unifies your distributed clouds and frees your business to move at cloud speed.
>> To learn more, visit www.pluribusnetworks.com.
>> Okay, we're back. I'm John Furrier with theCUBE, and we're going to go deeper into a dive on the unified cloud networking solution from Pluribus and Nvidia. We'll examine some of the use cases with Alessandro Barbieri, VP of product management at Pluribus Networks, and Pete Lumbis, director of technical marketing at Nvidia, joining remotely. Guys, thanks for coming on. Appreciate it.
>> Yeah.
>> So, deep dive. Let's get into the what and how, Alessandro. We heard earlier about the Pluribus and Nvidia partnership and the solution you're working on together. What is it?
>> Yeah. First let's talk about the what. What are we really integrating with the Nvidia Bluefield DPU technology? Pluribus software has been shipping in volume in multiple mission-critical networks. This Netvisor ONE network operating system runs today on merchant silicon switches, and effectively it's a standard open network operating system for the data center. The novelty about this system is that it integrates a distributed control plane for an SDN overlay. This automation is completely open, interoperable, and extensible to other types of clouds; it's not closed. And this is actually what we're now porting to the Nvidia DPU.
>> Awesome. So how does it integrate into Nvidia hardware? Specifically, how is Pluribus integrating its software with the Nvidia hardware?
>> Yeah, I think we leverage some of the interesting properties of the Bluefield DPU hardware, which allow us to integrate our software, our network operating system, in a manner which is completely isolated and independent from the guest operating system. So the first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor layer or OS layer running on the host. Even more, we can also independently manage this network node, the switch on a NIC, effectively, completely independently from the host. You don't have to go through the network operating system running on x86 to control this network node. So you get, effectively, the experience of a top of rack for virtual machines, or a top of rack for Kubernetes pods, where, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, now you're connecting a VM virtual interface to a virtual interface on the switch on a NIC.
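One way to picture that top-of-rack-for-VMs analogy is a virtual switch living on the DPU, with vNICs binding to its ports the same way a physical server NIC plugs into a ToR port. The class and method names below are invented for this sketch.

```python
# Illustrative model of the "top of rack for VMs" idea: the DPU hosts a
# virtual switch with ports, and vNICs bind to those ports instead of a
# software switch in the hypervisor. Names are assumptions, not a real API.

class DpuVirtualSwitch:
    def __init__(self, dpu_id: str):
        self.dpu_id = dpu_id
        self.ports = {}        # port name -> attached interface details

    def attach(self, port: str, vnic: str, vlan: int):
        # Managed by the network team through the fabric controller,
        # with no agent running in the host OS or hypervisor.
        self.ports[port] = {"vnic": vnic, "vlan": vlan}

vsw = DpuVirtualSwitch("bluefield2-0")
vsw.attach("vport1", vnic="vm-web-01.eth0", vlan=110)
vsw.attach("vport2", vnic="pod-orders-7f9c.eth0", vlan=120)
```

The network team would manage these virtual ports through the fabric controller, with nothing installed in the host OS or hypervisor, which is the clean demarcation the conversation keeps returning to.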
>> And also, as part of this integration, we put a lot of effort, a lot of emphasis, into accelerating the entire data plane for networking and security. So we are taking advantage of DOCA, the Nvidia DOCA API, to program the accelerators, and this accomplishes two things. Number one, you have much greater performance than running the same network services on an x86 CPU. And second, this gives you the ability to free up, I would say, around 20 to 25% of the server capacity, which can be devoted either to additional workloads to run your cloud applications, or, if you want to run the same number of compute workloads, you can actually shrink the power footprint and compute footprint of your data center by 20%. So, great efficiencies in the overall approach.
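That 20 to 25% figure turns into a simple back-of-the-envelope calculation; the fleet numbers below are purely illustrative assumptions, and only the reclaim range comes from the conversation.

```python
# Rough math on reclaiming CPU spent on networking and security services,
# using made-up fleet numbers for illustration only.

servers = 1_000
cores_per_server = 64
reclaim_fraction = 0.22            # midpoint of the 20-25% range cited above

reclaimed_cores = servers * cores_per_server * reclaim_fraction
equivalent_servers = reclaimed_cores / cores_per_server

print(f"Reclaimed cores:    {reclaimed_cores:,.0f}")
print(f"Equivalent servers: {equivalent_servers:,.0f}")
# -> about 14,080 cores, roughly 220 servers' worth of compute that can run
#    revenue-generating workloads instead of packet processing.
```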
>> And this is completely independent of the server CPU, right?
>> Absolutely. There is zero code running on the x86, and this is what we think enables a very clean demarcation between compute and network.
>> So Pete, I've got to get you in here. We heard that the DPU enables a cleaner separation of DevOps and NetOps. Can you explain why that's important? Because everyone's talking DevSecOps right now, you've got NetOps, NetSecOps; why is this clean separation important?
>> Yeah, I think it's a pragmatic solution, in my opinion. We wish the world was all rainbows and unicorns, but it's a little messier than that. A lot of the DevOps mentality and philosophy has a natural fit there, right? You have applications running on servers, so you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. I think that we in the networking industry have gotten closer together, but there's still a gap, there's still some distance, and that distance isn't going to be closed. So again, it comes down to pragmatism, and one of my favorite phrases is: good fences make good neighbors. And that's what this is.
>> Yeah, that's a great point, because DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you're talking about that part of the stack under the covers, under the hood, if you will. This is a super important distinction, and this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now.
>> Yeah, exactly. And I think that's where, from the policy and security side, the zero-trust aspect of this comes in. If you get it wrong on that network side, all of a sudden you can totally open up those capabilities, so security is part of that. But the other part is thinking about this at scale, right? We're taking one top-of-rack switch and adding up to 48 servers per rack, so that ability to automate, orchestrate, and manage at scale becomes absolutely critical.
>> Alessandro, this is really the why we're talking about here, and this is scale. And again, getting it right: if you don't get it right, you're going to be in real trouble. So this is a huge deal. Networking matters, security matters, automation matters, DevOps and NetOps all coming together with clean separation. Help us understand how this joint solution with Nvidia fits into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now.
>> Yeah, absolutely. So I think with this solution we're attacking two major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about the first one. We're really unifying, and if we're unifying something, that something must be at least fragmented or disjointed, and what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, your switches and routers; you build your IP Clos fabric, your leaf-and-spine topologies. This is actually a well-understood problem. There are multiple vendors with similar technologies, very well standardized, well understood, and almost a commodity, I would say, building an IP fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services have actually now moved into the compute layer, where cloud builders have to instrument a separate network virtualization layer and deploy segmentation and security closer to the workloads.
And this is where the complications arise. This high-value part of the cloud network is where you have a plethora of options that don't talk to each other, and they are very dependent on the kind of hypervisor or compute solution you choose. For example, the networking APIs between an ESXi environment, a Hyper-V environment, or a Xen environment are completely disjointed. You have multiple orchestration layers. And then when you also throw Kubernetes into this type of architecture, you're introducing yet another level of networking. And when Kubernetes runs on top of VMs, which is a prevalent approach, you're actually just stacking up multiple networks on the compute layer that eventually run on the physical fabric infrastructure. Those are all ships in the night, effectively; they operate as completely disjointed. We're trying to attack this problem first with the notion of a unified fabric, which is independent from any workload, whether this fabric spans a switch, which can be connected to a bare-metal workload, or spans all the way inside the DPU, where you have your multi-hypervisor compute environment. It's one API, one common network control plane, and one common set of segmentation services for the network.
That's probably the number one.
>> You know, it's interesting as I hear you talking: one network, different operating models. It reminds me of the old serverless days. There are still servers, but they call it serverless. Is there going to be a term "networkless"? Because at the end of the day, it should be one network, not multiple operating models. This is a problem that you guys are working on, is that right? I'm just joking about serverless and networkless, but the idea is it should be one thing.
>> Yeah, effectively what we're trying to do is recompose this fragmentation in terms of network operations across physical networking and server networking. Server networking is where the majority of the problems are, because as much as you have standardized the ways of building physical networks and cloud fabrics with IP protocols and the internet, you don't have that kind of operational efficiency at the server layer, and this is what we're trying to attack first with this technology. The second aspect we're trying to attack is how we distribute the security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services, or even encryption. Those are all capabilities enabled by the Bluefield technology, and we can actually integrate those capabilities directly into the network fabric, dramatically limiting, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical, which is typically the way people today segment and secure traffic in the cloud.
>> Awesome. Pete, all kidding aside about networkless and serverless, a fun play on words there, the network is one thing; it's basically distributed computing, right? So I'd love to get your thoughts about this distributed security, with zero trust as the driver for this architecture you guys are doing. Can you share in more detail why a DPU-based approach is better than the alternatives?
>> Yeah, I think what's beautiful, and what the DPU brings that's new to this model, is a completely isolated compute environment inside. It's the "yo dawg, I heard you like a server, so I put a server inside your server." We provide Arm CPUs, memory, and network accelerators inside, and that is completely isolated from the host. The actual x86 host just thinks it has a regular NIC in there, but you actually have this full control plane. It's just like taking your top-of-rack switch and shoving it inside of your compute node. And so you have not only the separation within the data plane, but also this complete control plane separation. You have this element that the network team can now control and manage, and we're taking all of the functions we used to do at the top-of-rack switch and moving them down there.
>> And as time has gone on, we've struggled to put more and more into that network edge. The reality is that the network edge is the compute layer, not the top-of-rack switch layer, and so that provides this phenomenal enforcement point for security and policy. And I think, outside of today's solutions around virtual firewalls, the other option is centralized appliances.
And even if you can get one that can scale large enough, the question is, can you afford it? So what we end up doing is we kind of hope that a VLAN is good enough, or we hope that a VXLAN tunnel is good enough, because we can't physically, or financially, afford that appliance to see all of the traffic and apply more advanced techniques there. Now that we have a distributed model with this accelerator, we can do it.
>> So what's in it for the customer? Real quick, because I think this is an interesting point. You mentioned policy; everyone in networking knows policy is a great thing, and you hear it being talked about up the stack as well, when you start orchestrating microservices, containers, and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale, more edge deployment, and flexibility relative to security policies and application enablement. Is that what the customer gets out of this architecture? What's the enablement?
>> It comes down to taking, again, the capabilities that were in that top-of-rack switch and pushing them down. That makes for simplicity: smaller blast radii for failure, smaller failure domains; maintenance on the networks and the systems becomes easier; your ability to integrate across workloads becomes infinitely easier. And again, we always want to separate each one of those layers. Just as in, say, a VXLAN network, my leaf and spine don't have to be tightly coupled together; I can now do this at a different layer. So you can run a DPU with any networking in the core there, and you get this extreme flexibility. You can start small, you can scale large. To me, the possibilities are endless.
>> It's a great security control plane. Really, flexibility is key, and also being situationally aware of any kind of threats or new vectors or whatever's happening in the network. Alessandro, this is huge upside, right? You've already identified some successes with some customers in your early field trials. What are they doing, and why are they attracted to the solution?
>> Yeah, I think the response from customers has been the most encouraging and exciting thing for us as we continue to work on and develop this product, and we have actually learned a lot in the process. We talked to tier-two and tier-three cloud providers; we talked to SPs, software telco types of networks, as well as large enterprise customers. Let me call out a couple of examples here, just to give you a flavor. There is a service provider, a cloud provider in Asia, who is actually managing a cloud where they offer services based on multiple hypervisors. Their native services are based on Xen, but they also onboard workloads into the cloud based on ESXi and KVM, depending on what the customer picks from the menu. And they have the problem of orchestrating and integrating with XenCenter, with vSphere, with OpenStack to coordinate these multiple environments, and in the process, to provide security, they actually deploy virtual appliances everywhere, which adds a lot of cost and complication and eats into the server CPU.
The promise they saw in this technology, and they actually call it game changing, is to remove all this complexity with a single network and distribute the micro-segmentation service directly into the fabric. Overall, they're hoping to get a tremendous OpEx benefit and an overall operational simplification of the cloud infrastructure. That's one potent use case. Another customer, a large global enterprise, is running both ESXi and Hyper-V in their environment, and they don't have a solution to do micro-segmentation consistently across hypervisors. So again, micro-segmentation is a huge driver; security looks like a recurring theme talking to most of these customers. And in the telco space, we're working with a few types of customers in the EFT program where the main goal is actually to harmonize network operations. They typically handle all the VNFs with their own homegrown DPDK stack; this is overly complex, and frankly it's also slow and inefficient, and then they still have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability of the Bluefield DPUs. Those are just some examples.
>> That was a great use case, and a lot more potential. I see that with the unified cloud networking, great stuff. Pete, shout out to you guys at Nvidia; we've been following your success for a long time, and you're continuing to innovate as cloud scales, and Pluribus here with the unified networking, kind of bringing it to the next level. Great stuff, great to have you guys on. And again, software keeps driving the innovation; networking is just a part of it, and it's the key solution. So I've got to ask both of you to wrap this up: how can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem. They're trying to think about multiple clouds, trying to think about unification around the network, and giving more security and more flexibility to their teams. How can people learn more?
>> Yeah, so Alessandro and I have a talk at the upcoming Nvidia GTC conference, the week of March 21st through 24th. You can go and register for free at nvidia.com for GTC, and you can also watch the recorded sessions on YouTube a little bit after the fact. We're going to dive a little bit more into the specifics and details of what we're providing in the solution.
>> Alessandro, how can people learn more?
>> Yeah, absolutely. People can go to the Pluribus website, www.pluribusnetworks.com/eft, fill out the form, and contact Pluribus to learn more and actually sign up for the early field trial program, which starts at the end of April.
>> Okay, well, we'll leave it there. Thanks to you both for joining. Appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE. Thanks for watching.
>> Okay. We've heard from the folks at Pluribus Networks and Nvidia about their effort to transform cloud networking and unify bespoke infrastructure.
Now let's get the perspective from an independent analyst, and to do so we welcome in ESG senior analyst Bob Laliberte. Bob, good to see you. Thanks for coming into our east coast studios.
>> Oh, thanks for having me. It's great to be here.
>> Yeah. So this idea of a unified cloud networking approach: how serious is it? What's driving it?
>> Yeah, there are certainly a lot of drivers behind it, but probably first and foremost is the fact that application environments are becoming a lot more distributed. The IT pendulum tends to swing back and forth, and we're definitely on one that's swinging from consolidated to distributed. So applications are being deployed in multiple private data centers, multiple public cloud locations, and edge locations, and as a result of that, what you're seeing is a lot of complexity. Organizations are having to deal with this highly disparate environment: they have to secure it, they have to ensure connectivity to it, and all of that is driving up complexity. In fact, when we asked about network complexity in one of our surveys last year, more than half, 54%, came out and said, hey, our network environment is now either more complex or significantly more complex than it used to be.
>> And as a result of that, what you're seeing is that it's really impacting agility. Everyone's moving to these modern application environments and distributing them across areas so they can improve agility, yet it's creating more complexity, so it runs a little counter to that goal and really counter to their overarching digital transformation initiatives. From what we've seen, nine out of ten organizations today are either beginning, in process, or have a mature digital transformation initiative, and their top goals probably shouldn't be a surprise: the number one goal is driving operational efficiency. So it makes sense: I've distributed my environment to create agility, but I've created a lot of complexity, so now I need tools that are going to help me drive operational efficiency and drive better experience.
>> I love how you bring in the data; ESG does a great job with that. The question is: is it about just unifying existing networks, or is there a need to rethink, kind of do over, how networks are built?
>> Yeah, that's a really good point, because certainly unifying networks helps, and driving any kind of operational efficiency helps. But in this particular case, because we've made the transition to new application architectures, and because of the impact that's having, it's really about changing and bringing in new frameworks and new network architectures to accommodate those new application architectures. And by that, what I'm talking about is the fact that these modern application architectures, microservices and containers, are driving a lot more east-west traffic. In the old days it used to be easier: north-south traffic coming out of the server, one application per server, things like that. Now you've got hundreds, if not thousands, of microservices communicating with each other and users communicating with them, so there's a lot more traffic, and a lot of it is taking place within the servers themselves. The other issue you're starting to see, from that security perspective, is that when we were all consolidated we had those perimeter-based, legacy, castle-and-moat security architectures, but that doesn't work anymore when the applications aren't in the castle, right?
The other issue that you starting to see as well from that security perspective, when we were all consolidated, we had those perimeter based legacy, you know, castle and moat security architectures, but that doesn't work anymore when the applications aren't in the castle, right. >>When everything's spread out that that no longer happens. So we're absolutely seeing, um, organizations trying to, trying to make a shift. And, and I think much, like if you think about the shift that we're seeing with all the remote workers and the sassy framework to enable a secure framework there, this it's almost the same thing. We're seeing this distributed services framework come up to support the applications better within the data centers, within the cloud data centers, so that you can drive that security closer to those applications and make sure they're, they're fully protected. Uh, and that's really driving a lot of the, you know, the zero trust stuff you hear, right? So never trust, always verify, making sure that everything is, is, is really secure micro-segmentation is another big area. So ensuring that these applications, when they're connected to each other, they're, they're fully segmented out. And that's again, because if someone does get a breach, if they are in your data center, you want to limit the blast radius, you want to limit the amount of damage that's done. So that by doing that, it really makes it a lot harder for them to see everything that's in there. >>You know, you mentioned zero trust. It used to be a buzzword, and now it's like become a mandate. And I love the mode analogy. You know, you build a moat to protect the queen and the castle, the Queens left the castles, it's just distributed. So how should we think about this, this pluribus and Nvidia solution. There's a spectrum, help us understand that you've got appliances, you've got pure software solutions. You've got what pluribus is doing with Nvidia, help us understand that. >>Yeah, absolutely. I think as organizations recognize the need to distribute their services to closer to the applications, they're trying different models. So from a legacy approach, you know, from a security perspective, they've got these centralized firewalls that they're deploying within their data centers. The hard part for that is if you want all this traffic to be secured, you're actually sending it out of the server up through the rack, usually to in different location in the data center and back. So with the need for agility, with the need for performance, right, that adds a lot of latency. Plus when you start needing to scale, that means adding more and more network connections, more and more appliances. So it can get very costly as well as impacting the performance. The other way that organizations are seeking to solve this problem is by taking the software itself and deploying it on the servers. Okay. So that's a, it's a great approach, right? It brings it really close to the applications, but the things you start running into there, there's a couple of things. One is that you start seeing that the DevOps team start taking on that networking and security responsibility, which they >>Don't want to >>Do, they don't want to do right. And the operations teams loses a little bit of visibility into that. Um, plus when you load the software onto the server, you're taking up precious CPU cycles. So if you're really wanting your applications to perform at an optimized state, having additional software on there, isn't going to, isn't going to do it. 
So when we think about all those types of things, certainly one side effect is the impact on performance, but there's also a cost. If you have to buy more servers because your CPUs are being utilized this way, and you have hundreds or thousands of servers, those costs are going to add up. So what Nvidia and Pluribus have done by working together is to be able to take some of those services and deploy them onto a SmartNIC.
>> To be able to deploy the DPU-based SmartNIC into the servers themselves. And then Pluribus has come in and said, we're going to create that unified fabric across the networking space, into those networking services, all the way down to the server. The benefits of having that are pretty clear: you're offloading that capability from the server, so your CPUs are optimized and you're saving a lot of money. You're not having to go outside of the server to a different rack somewhere else in the data center, so your performance is going to be optimized as well; you're not going to incur a latency hit for every round trip to the firewall and back. So I think all those things are really important. Plus, from an organizational aspect, we talked about the DevOps and NetOps teams: the network operations teams can now work with the security teams to establish the security policies and the networking policies, so that the DevOps teams don't have to worry about that. Essentially, they just create the guardrails and let the DevOps team run, because that's what they want: agility and speed.
>> Yeah. Your point about CPU cycles is key. I mean, it's estimated that 25 to 30% of CPU cycles in the data center are wasted; the cores are wasted doing storage offload, or networking or security offload. And I've said many times, everybody needs a Nitro like Amazon's, but you can only get Amazon Nitro if you go into AWS, right? Everybody needs a Nitro. So is that how we should think about this?
>> Yeah, that's a great analogy to think about this, and I would take it a step further, because it's almost the opposite end of the spectrum: Pluribus and Nvidia are doing this in a very open way. Pluribus has always been a proponent of open networking, and what they're trying to do is extend that now to these distributed services, leveraging working with Nvidia, who is also open, and being able to bring that to bear so that organizations can take advantage not only of these distributed services but also of that unified networking fabric, that unified cloud fabric, across that environment, from the server across the switches. The other key piece of what Pluribus is doing, because they've been doing this for a while now, and they've been doing it with the older application environments and the older server environments, is that they're able to provide that unified networking experience across a host of different types of servers and platforms. So you can have not only the modern applications supported, but also the legacy environments: bare metal, any type of virtualization, containers, et cetera. So a wide gamut of different technologies hosting those applications, supported by a unified cloud fabric from Pluribus.
>> So what does that mean for the customer? I don't have to rip and replace my whole infrastructure, right?
>> Yeah.
Well, think what it does, again, for that operational efficiency. When you're going from a legacy environment to that modern environment, it helps with the migration, it helps you accelerate that migration, because you're not switching between different management systems to accomplish it. You've got the same unified networking fabric that you've been working with, enabling you to run your legacy environment as well as transfer over to those modern applications.
>> So your people are comfortable with the skill sets, et cetera. All right, I'll give you the last word. Give us the bottom line here.
>> So yeah, I think that with all the modern applications that are coming out, the distributed application environments, it's really posing a lot of risk for these organizations to be able to get not only security but also visibility into those environments. And so organizations have to find solutions. As I said at the beginning, they're looking to drive operational efficiency. So getting operational efficiency from a unified cloud networking solution that goes from the server, across the switches, to multiple different cloud environments is certainly going to help organizations drive that operational efficiency. It's going to help them save money, and it gives them visibility, security, and even open networking. So it's a great opportunity for organizations, especially large enterprises and cloud providers who are trying to build that hyperscaler-like environment. You mentioned the Nitro card, right? This is a great way to do it with an open solution.
>> Bob, thanks so much for coming in and sharing your insights. Appreciate it.
>> You're welcome. Thanks.
>> Thanks for watching the program today. Remember, all these videos are available on demand at theCUBE.net. You can check out all the news from today at siliconangle.com and, of course, pluribusnetworks.com. Many thanks to Pluribus for making this program possible and sponsoring theCUBE. This is Dave Vellante. Thanks for watching. Be well, and we'll see you next time.

Published Date : Mar 16 2022

SUMMARY :

theCUBE's special program on unified cloud networking with Pluribus Networks and NVIDIA, hosted by John Furrier and Dave Vellante. The segments cover why cloud operators want hyperscaler-style operations (with Amazon's Nitro as the reference point), how running the Pluribus fabric on the NVIDIA BlueField DPU unifies networking and distributes security and visibility across servers, switches and multiple clouds, a deeper dive on the architecture and early field trials with Alessandro Barbieri and Pete Lumbis, and an independent analyst perspective from ESG's Bob Laliberte on operational efficiency, security and open networking. Details are given for the NVIDIA GTC session the week of March 21st and for the Pluribus early field trial program.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Donnie | PERSON | 0.99+
Bob Liberte | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Alessandra Burberry | PERSON | 0.99+
Sandra | PERSON | 0.99+
Dave Volante | PERSON | 0.99+
Nvidia | ORGANIZATION | 0.99+
Pete Bloomberg | PERSON | 0.99+
Michael | PERSON | 0.99+
Asia | LOCATION | 0.99+
Alexandra | PERSON | 0.99+
hundreds | QUANTITY | 0.99+
Pete Lummus | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Bob LA Liberte | PERSON | 0.99+
Mike | PERSON | 0.99+
John | PERSON | 0.99+
ESG | ORGANIZATION | 0.99+
Bob | PERSON | 0.99+
two companies | QUANTITY | 0.99+
25 | QUANTITY | 0.99+
Alessandra Bobby | PERSON | 0.99+
two years | QUANTITY | 0.99+
Palo Alto | LOCATION | 0.99+
thousands | QUANTITY | 0.99+
Bluefield | ORGANIZATION | 0.99+
NetApps | ORGANIZATION | 0.99+
demand@thekey.net | OTHER | 0.99+
20% | QUANTITY | 0.99+
last year | DATE | 0.99+
a year | QUANTITY | 0.99+
March 21st | DATE | 0.99+
First | QUANTITY | 0.99+
www.pluribusnetworks.com/e | OTHER | 0.99+
Tyco | ORGANIZATION | 0.99+
late April | DATE | 0.99+
Doka | TITLE | 0.99+
400 gig | QUANTITY | 0.99+
yesterday | DATE | 0.99+
second version | QUANTITY | 0.99+
two services | QUANTITY | 0.99+
first step | QUANTITY | 0.99+
third area | QUANTITY | 0.99+
one | QUANTITY | 0.99+
second aspect | QUANTITY | 0.99+
One | QUANTITY | 0.99+
Each | QUANTITY | 0.99+
www.pluribusnetworks.com | OTHER | 0.99+
Pete | PERSON | 0.99+
last year | DATE | 0.99+
one application | QUANTITY | 0.99+
two things | QUANTITY | 0.99+

Alessandro Barbieri and Pete Lumbis


 

>>mhm. Okay, we're back. I'm John Furrier with theCUBE. We're going to go deeper into a deep dive into the unified cloud networking solution from Pluribus and NVIDIA, and we'll examine some of the use cases with Alessandro Barbieri, VP of Product Management at Pluribus Networks, and Pete Lumbis, Director of Technical Marketing at NVIDIA. Remote, guys, thanks for coming on. Appreciate it. >>Thank you. >>So, deep dive. Let's get into the what and how. Alessandro, we heard earlier about the Pluribus and NVIDIA partnership and the solution you're working together on. What is it? >>Yeah. First, let's talk about the what. What are we really integrating with the NVIDIA BlueField DPU technology? Pluribus, uh, has been shipping, uh, in volume, in multiple mission critical networks, the Netvisor ONE network operating system. It runs today on merchant silicon switches and, effectively, it's a standards-based, open network operating system for the data centre. Um, and the novelty about this operating system is that it integrates a distributed control plane for automating, effectively, an SDN overlay. This automation is completely open and interoperable, and extensible to other types of clouds; there is nothing closed. And this is actually what we're now porting to the NVIDIA DPU. >>Awesome. So how does it integrate into NVIDIA hardware? And specifically, how is Pluribus integrating its software with NVIDIA hardware? >>Yeah, I think we leverage some of the interesting properties of the BlueField DPU hardware, which allows us actually to integrate, um, our software, our network operating system, in a manner which is completely isolated and independent from the guest operating system. So the first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor layer or OS layer running on the host. Even more, um, uh, we can also independently manage this network node, this switch-on-a-NIC, effectively, uh, managed completely independently from the host. You don't have to go through the network operating system running on x86 to control this network node. So you truly have the experience, effectively, of a top-of-rack for virtual machines or a top-of-rack for Kubernetes pods, where instead of, uh, um, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, now you're connecting a VM virtual interface to a virtual interface on the switch-on-a-NIC. And also, as part of this integration, we, uh, put a lot of effort, a lot of emphasis, in accelerating the entire data plane for networking and security. So we are taking advantage of the DOCA, uh, NVIDIA DOCA API, to programme the accelerators, and you accomplish two things with that. Number one, you, uh, have much greater performance, much better performance, than running the same network services on an x86 CPU. And second, this gives you the ability to free up, I would say, around 20 to 25% of the server capacity, to be devoted either to additional workloads, to run your cloud applications, or perhaps you can actually shrink the power footprint and compute footprint of your data centre by 20% if you want to run the same number of compute workloads. So great efficiencies in the overall approach. >>And this is completely independent of the server CPU, right? >>Absolutely. There is zero code from Pluribus running on the x86, and this is why we think this enables a very clean demarcation between compute and network.
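A quick aside on the 20 to 25% figure Alessandro mentions: the arithmetic below is a rough, hypothetical sizing sketch in Python, not anything from Pluribus or NVIDIA. The total core count, the 64-core server size and the 25% overhead share are illustrative assumptions, used only to show how reclaiming that slice of each host translates into fewer servers for the same application footprint.

    # Back-of-the-envelope: servers needed for a fixed pool of application work
    # when ~25% of each x86 host is consumed by networking/security services,
    # versus when that work is offloaded to a DPU. All inputs are assumptions.
    import math

    def servers_needed(app_cores: int, cores_per_server: int, overhead: float) -> int:
        """How many servers it takes when a fraction of each box is overhead."""
        usable_per_server = cores_per_server * (1.0 - overhead)
        return math.ceil(app_cores / usable_per_server)

    APP_CORES = 10_000        # application cores to place (assumption)
    CORES_PER_SERVER = 64     # per-server core count (assumption)

    before = servers_needed(APP_CORES, CORES_PER_SERVER, overhead=0.25)
    after = servers_needed(APP_CORES, CORES_PER_SERVER, overhead=0.0)

    print(f"servers when ~25% of each host runs network services: {before}")
    print(f"servers when that work lives on the DPU:              {after}")
    print(f"hosts reclaimed: {before - after} ({(before - after) / before:.0%})")

Read the other way around, this is the same point Alessandro makes about shrinking the compute and power footprint by roughly 20% while running the same number of workloads.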
>>So, Pete, I gotta get you in here. We heard that the DPUs enable cleaner separation of DevOps and NetOps. Can you explain why that's important? Because everybody's talking DevSecOps right now; you've got NetOps, NetSecOps. This separation. Why is this clean separation important? >>Yeah, I think it's, uh, you know, it's a pragmatic solution, in my opinion. Um, you know, we wish the world was all kind of rainbows and unicorns, but it's a little, a little messier than that. And I think a lot of the DevOps stuff, in that, uh, mentality and philosophy, there's a natural fit there, right? You have applications running on servers, so you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And, you know, I think that we, we in the networking industry have gotten closer together, but there's still a gap, there's still some distance, and I think that distance isn't going to be closed. And so, again, it comes down to pragmatism. And I think, you know, one of my favourite phrases is: look, good fences make good neighbours. And that's what this is. >>Yeah, it's a great point, because DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you know, you're talking about, you know, that part of the stack under the covers, under the hood, if you will. This is a super important distinction. And this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. >>Yeah, exactly. And I think that's where, from the policy, the security, the zero trust aspect of this, right, if you get it wrong on that network side, all of a sudden you, you can totally open up those capabilities, and so security is part of that. But the other part is thinking about this at scale, right? So we're taking one top-of-rack switch and adding, you know, up to 48 servers per rack, and so that ability to automate, orchestrate and manage at scale becomes absolutely critical. >>Alessandro, this is really the why we're talking about here, and this is scale, and again, getting it right. If you don't get it right, you're gonna be really kind of up... you know what, you know. So this is a huge deal. Networking matters, security matters, automation matters. DevOps, NetOps, all coming together, clean separation. Help us understand how this joint solution with NVIDIA gets into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now. >>Yeah, absolutely. So I think here, with this solution, we're tackling two major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about the first one. What are we really unifying? If you unify something, something must be at least fragmented or disjointed. And what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers. You build your IP Clos fabric, leaf-and-spine topologies. This is actually a well understood problem, I would say. Um, there are multiple vendors with similar technologies, very well standardised, very well understood.
Um, and almost a commodity, I would say, building an IP fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services are actually now moved into the compute layer, where, actually, cloud builders have to instrument a separate network virtualisation layer, where they deploy segmentation and security closer to the workloads. And this is where the complications arise. This high-value part of the cloud network is where you have a plethora of options that don't talk to each other, and they are very dependent on the kind of hypervisor or compute solution you choose. Um, for example, the networking APIs between an ESXi environment, or a Hyper-V, or a Xen, are completely disjointed. You have multiple orchestration layers, and then, when you throw in also Kubernetes in this type of architecture, uh, you're introducing yet another level of networking, and when you run it, it runs on top of the VMs, which is a prevalent approach. You actually just stack multiple networks on the compute layer that eventually run on the physical fabric infrastructure. Those are all ships in the night, effectively, right? They operate as completely disjointed. And we're trying to attack this problem first with the notion of a unified fabric, which is independent from any workload, whether this fabric spans onto a switch, which can be connected to a bare metal workload, or spans all the way inside the DPU, where you have your multi-hypervisor compute environment. It's one API, one common network control plane, and one common set of segmentation services for the network. That's probably number one. >>You know, it's interesting, I hear you talking, I hear one network, different operating models. Reminds me of the old serverless days. You know, there's still servers, but they called it serverless. Is there going to be a term networkless? Because at the end of the day, it should be one network, not multiple operating models. This is like a problem that you guys are working on. Is that right? I mean, I'm just joking, serverless, networkless, but the idea is it should be one thing. >>Yeah, effectively, what we're trying to do is we're trying to recompose this fragmentation in terms of network operations across physical networking and server networking. Server networking is where the majority of the problems are, because as much as you have standardised the ways of building, uh, physical networks and cloud fabrics with IP protocols, you don't have that kind of, uh, sort of, uh, operational efficiency at the server layer. And this is what we're trying to attack first with this technology. The second aspect we're trying to attack is how we distribute the security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services or even encryption. Those are all capabilities enabled by the BlueField DPU technology, and, uh, we can actually integrate those capabilities directly into the network fabric, limiting dramatically, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical, that is typically the way people today segment and secure the traffic in the cloud.
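To picture the "one API, one control plane, one set of segmentation services" idea, here is a small hypothetical sketch in Python. The policy objects, segment names and attachment types are invented for illustration; this is not the actual Pluribus or Netvisor ONE interface, just a way to show a single workload-agnostic policy being evaluated the same way for a bare metal NIC, a VM behind an ESXi or KVM host, or a DPU-hosted virtual interface.

    # Hypothetical, simplified model of a unified segmentation policy that is
    # defined once and applied identically regardless of where a workload sits.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Endpoint:
        name: str
        segment: str        # logical segment, e.g. "web" or "db"
        attach_point: str   # "switch-port", "bare-metal-nic", or "dpu-vif"

    @dataclass(frozen=True)
    class Rule:
        src_segment: str
        dst_segment: str
        port: int
        allow: bool

    # One policy for the whole fabric (illustrative rules).
    POLICY = [
        Rule("web", "db", 5432, allow=True),   # web tier may reach the database
        Rule("web", "db", 22, allow=False),    # but not SSH into it
    ]

    def is_allowed(src: Endpoint, dst: Endpoint, port: int) -> bool:
        """Default-deny check; the same logic runs wherever enforcement happens."""
        for rule in POLICY:
            if (rule.src_segment, rule.dst_segment, rule.port) == (src.segment, dst.segment, port):
                return rule.allow
        return False

    endpoints = [
        Endpoint("esxi-vm-01", "web", "dpu-vif"),
        Endpoint("kvm-vm-07", "db", "dpu-vif"),
        Endpoint("baremetal-03", "db", "bare-metal-nic"),
    ]

    src = endpoints[0]
    for dst in endpoints[1:]:
        for port in (5432, 22):
            print(f"{src.name} -> {dst.name}:{port} allowed={is_allowed(src, dst, port)}")

The point of the sketch is only that the policy lives in one place; in the architecture described above, enforcing it is what gets pushed down to the switch or the DPU.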
>>All kidding aside about networkless, serverless is kind of fun. Fun play on words there. The network is one thing; it's basically distributed computing, right? So I love to get your thoughts about this distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail the depth of why the DPU-based approach is better than alternatives? >>Yeah, I think what's, what's beautiful, and kind of what the DPU brings that's new to this model, is a completely isolated compute environment inside. So, you know, it's the "yo dawg, I heard you like servers, so I put a server inside your server." Uh, and so we provide, you know, Arm CPUs, memory and network accelerators inside, and that is completely isolated from the host. So the server, the actual x86 host, just thinks it has a regular NIC in there, but you actually have this full control plane thing. It's just like taking your top-of-rack switch and shoving it inside of your compute node. And so you have not only the separation, um, within the data plane, but you have this complete control plane separation. So you have this element that the network team can now control and manage, but we're taking all of the functions we used to do at the top-of-rack switch, and we distribute them now. And, you know, as time has gone on, we've, we've struggled to put more and more and more into that network edge. And the reality is the network edge is the compute layer, not the top-of-rack switch layer. And so that provides this phenomenal enforcement point for security and policy. And I think, outside of today's solutions around virtual firewalls, um, the other option is centralised appliances. And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that VLAN is good enough, or we hope that a VXLAN tunnel is good enough, and we can't actually apply more advanced techniques there, because we can't physically, financially afford that appliance to see all of the traffic. And now that we have a distributed model with this accelerator, we can do it. >>So what's the, what's in it for the customer, real quick. I think this is an interesting point. You mentioned policy. Everyone in networking knows policy is just a great thing, and you hear it being talked about up the stack as well, when you start getting to orchestrate microservices and whatnot, all that good stuff going on there, containers and whatnot, and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale, more edge deployment, flexibility relative to security policies and application enablement. I mean, what does the customer get out of this architecture? What's the enablement? >>It comes down to taking, again, the capabilities that were at that top-of-rack switch and distributing them down. So that makes for simplicity, smaller blast radiuses for failure, smaller failure domains; maintenance on the networks and the systems becomes easier. Your ability to integrate across workloads becomes infinitely easier. Um, and again, you know, we always want to kind of separate each one of those layers. So, just as in, say, a VXLAN network, my leaf and spine don't have to be tightly coupled together, I can now do this at a different layer, and so you can run a DPU with any networking in the core there. And so you get this extreme flexibility. You can start small, you can scale large. Um, you know, to me, the possibilities are endless.
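Pete's affordability argument (a central appliance has to be sized for everyone's east-west traffic, while each DPU only has to inspect its own node) can be put in rough numbers. The rack count and per-server traffic below are invented for illustration; only the 48 servers per rack figure comes from the conversation above.

    # Rough comparison of what a single centralised inspection appliance must be
    # sized for versus what each distributed enforcement point (one DPU per
    # server) handles. Traffic figures are illustrative assumptions.
    RACKS = 25                        # assumption
    SERVERS_PER_RACK = 48             # figure mentioned above
    EAST_WEST_PER_SERVER_GBPS = 10    # assumption

    servers = RACKS * SERVERS_PER_RACK
    total_gbps = servers * EAST_WEST_PER_SERVER_GBPS

    print(f"servers: {servers}")
    print(f"centralised appliance must inspect ~{total_gbps} Gbps of east-west traffic")
    print(f"each DPU only inspects its own ~{EAST_WEST_PER_SERVER_GBPS} Gbps")

Under these assumed numbers the central box has to keep up with 12,000 Gbps, which is the financial and physical scaling problem Pete describes, while the distributed model keeps each enforcement point at its own node's rate.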
>>It's a great security control plane. Really, flexibility is key, and, and also being situationally aware of any kind of threats or new vectors or whatever is happening in the network. Alessandro, this is huge upside, right? You've already identified some, uh, successes with some customers on your early field trials. What are they doing, and why are they attracted to the solution? >>Yeah, I think the response from customers has been the most encouraging and exciting for, for us, to, uh, to sort of continue working on and developing this product, and we have actually learned a lot in the process. Um, we talked to two or three cloud providers, we talked to SPs, um, sort of telco type of networks, uh, as well as large enterprise customers. Um, in one particular case... um, uh, let me, let me call out a couple of examples here, just to give you a flavour. There is a service provider, a cloud provider in Asia, who is actually managing a cloud where they are offering services based on multiple hypervisors: their native services based on Xen, but they also, um, ramp into the cloud workloads based on ESXi and KVM, depending on what the customer picks from the menu. And they have the problem of now orchestrating, through their orchestrator, or integrating with XenCenter, with vSphere, with OpenStack, to coordinate these multiple environments, and, in the process, to provide security, they actually deploy virtual appliances everywhere, which has a lot of cost, complication, and eats up into the server CPU. The promise that they saw in this technology, they call it, actually, game changing, is to remove all this complexity, with a single network, and distribute the micro-segmentation service directly into the fabric. And overall, they're hoping to get out of it a tremendous OpEx benefit and overall operational simplification for the cloud infrastructure. That's one important use case. Um, another large enterprise customer, a global enterprise customer, is running both ESXi and Hyper-V in their environment, and they don't have a solution to do micro-segmentation consistently across hypervisors. So again, micro-segmentation is a huge driver; security looks like it's a recurring theme talking to most of these customers. And in the telco space, um, uh, we're working with a few telco customers on the early field trial programme, uh, where the main goal is actually to harmonise network operations. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex; it is, frankly, also slow and inefficient. And then they have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the VNFs and, uh, the rest of the infrastructure is extremely powerful, on top of the offloading capability offered by the BlueField DPUs. Those are just some examples. >>There's a great use case, a lot more potential. I see that with the unified cloud networking. Great stuff. Shout out to you guys at NVIDIA, we've been following your success for a long time and continuing to innovate as cloud scales, and Pluribus here with unified networking, kind of bringing it to the next level. Great stuff. Great to have you guys on, and again, software keeps, uh, driving the innovation, and networking is just part of it, and it's the key solution. So I got to ask both of you to wrap this up. How can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift.
People are working on this problem. They're trying to think about multiple clouds, trying to think about unification around the network, and giving more security, more flexibility to their teams. How can people learn more? >>And so, uh, Alessandro and I have a talk at the upcoming NVIDIA GTC conference, so it's the week of March 21st through 24th. Um, you can go and register for free at nvidia.com/gtc. Um, you can also watch recorded sessions if you end up watching this on YouTube a little bit after the fact. Um, and we're going to dive a little bit more into the specifics and the details of what we're providing in the solution. >>And Alessandro, how can people learn more? >>Yeah, so people can go to the Pluribus website, www.pluribusnetworks.com/eft, and they can fill out the form and, uh, they can contact Pluribus to know more and actually to sign up for the actual early field trial programme, which starts at the end of April. >>Okay, well, we'll leave it there. Thank you both for joining. Appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE. Thanks for watching.

Published Date : Mar 4 2022

SUMMARY :

John Furrier goes deeper into the unified cloud networking solution from Pluribus and NVIDIA with Alessandro Barbieri, VP of Product Management at Pluribus Networks, and Pete Lumbis, Director of Technical Marketing at NVIDIA. They cover porting the Pluribus network operating system to the BlueField DPU so it runs isolated from the host, using the DOCA API to accelerate the data plane and free up roughly 20 to 25% of server capacity, unifying the fabric across switches, bare metal and hypervisors, distributing micro-segmentation and other security services, early field trial feedback from cloud providers, enterprises and telcos, and where to learn more (NVIDIA GTC the week of March 21st and the Pluribus early field trial programme).

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Alexandra | PERSON | 0.99+
NVIDIA | ORGANIZATION | 0.99+
Asia | LOCATION | 0.99+
Pete Lambasts | PERSON | 0.99+
two | QUANTITY | 0.99+
John Ferry | PERSON | 0.99+
three | QUANTITY | 0.99+
Pluribus | ORGANIZATION | 0.99+
20% | QUANTITY | 0.99+
Alexandra Barberry | PERSON | 0.99+
Pete Lumbis | PERSON | 0.99+
John | PERSON | 0.99+
Alessandro Barbieri | PERSON | 0.99+
First | QUANTITY | 0.99+
OPEC | ORGANIZATION | 0.99+
second aspect | QUANTITY | 0.99+
Pete | PERSON | 0.99+
both | QUANTITY | 0.99+
first | QUANTITY | 0.99+
March 21st | DATE | 0.99+
24th | DATE | 0.99+
One | QUANTITY | 0.98+
second | QUANTITY | 0.98+
Arman Eyes Network | ORGANIZATION | 0.98+
today | DATE | 0.98+
two things | QUANTITY | 0.98+
Atwater | ORGANIZATION | 0.98+
Pluribus Networks | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
YouTube | ORGANIZATION | 0.96+
one thing | QUANTITY | 0.92+
DACA | TITLE | 0.92+
one network | QUANTITY | 0.92+
Enterprise | ORGANIZATION | 0.91+
single network | QUANTITY | 0.91+
zero quote | QUANTITY | 0.89+
one common set | QUANTITY | 0.88+
zero trust | QUANTITY | 0.88+
one important use case | QUANTITY | 0.87+
Essex I | ORGANIZATION | 0.84+
telco | ORGANIZATION | 0.84+
three cloud providers | QUANTITY | 0.82+
N K P | ORGANIZATION | 0.82+
Cuban | PERSON | 0.82+
K | COMMERCIAL_ITEM | 0.81+
X 86 | OTHER | 0.8+
zero | QUANTITY | 0.79+
Zen | ORGANIZATION | 0.79+
each one | QUANTITY | 0.78+
one particular case | QUANTITY | 0.76+
up to 48 servers per rack | QUANTITY | 0.74+
around 2025% | QUANTITY | 0.73+
couple | QUANTITY | 0.68+
Group | ORGANIZATION | 0.67+
Vieques | ORGANIZATION | 0.65+
X 86 | COMMERCIAL_ITEM | 0.64+
X | COMMERCIAL_ITEM | 0.61+
NVIDIA GTC conference | EVENT | 0.6+
pluribus | ORGANIZATION | 0.57+
NVIDIA Bluefield | ORGANIZATION | 0.54+
Centre | COMMERCIAL_ITEM | 0.52+
X 86 | TITLE | 0.51+
Zen | TITLE | 0.47+
86 | TITLE | 0.45+
Cube | ORGANIZATION | 0.44+
SX | TITLE | 0.41+

Sandy Carter, AWS & Fred Swaniker, The Room | AWS re:Invent 2021


 

>>Welcome back to theCUBE's coverage of AWS re:Invent 2021, here with theCUBE. I'm John Furrier, your host. We're on the ground with two sets on the floor, a real event. Of course, it's hybrid, it's online as well, you can check it out there; all the on-demand replays are there. We're here with Sandy Carter, worldwide vice president, public sector partners and programs, and we've got Fred Swaniker, the founder and chief curator of The Room. We're talking about getting the best talent programming in the cloud, doing great things, innovation all happening. Sandy, great to see you. Thanks for coming on theCUBE, appreciate it. >>Thanks, great to see you. Okay, so tell us about The Room. What is The Room, what's going on? >>Um, well, the mission of The Room is to help the world's most extraordinary doers to fulfill their potential. So, um, it's a community of exceptional talent that we are building throughout the world, um, and connecting this talent to each other and connecting them to the organizations that are looking for people who can really move the needle for those organizations. >>So what kind of results are you guys seeing right now? Give us some stats. >>Well, it's a, it's a relatively new concept, so we're about 5,000 members so far, um, from 77 different countries. Um, and this is, you know, we're talking about sort of the top two to 3% of talent in different fields. Um, and, um, as we go forward, you know, we're really seeing this as an opportunity to curate, um, exceptional talent, um, in fields like software engineering, data science, UX/UI design, cloud computing, um, and, uh, it really helps to, um, identify diverse talent as well, from pockets that have typically been untapped for technology. Okay. >>I want to ask you kind of, what's the, how you read the tea leaves, how do I spot the talent. But first, talk about the relationship with Amazon. What's the program together? How are you guys working together? It's a great mission. I mean, we need more people anyway, coding, everywhere, globally. What's the AWS connection? >>So Fred and I met and, uh, he had this, I mean, brilliant concept of The Room. And so, uh, obviously you need to run that on the cloud. And so he's got organizations he's working at connecting through The Room, and kind of that piece that he was needing was the technology. So we stepped in to help him with the technology piece, because he's got all the subject matter expertise to train 3 million Africans, um, coming up on tech. We also were able to provide him some of the classwork as well for the cloud computing models, so some of those certs and things that we want to get out into the marketplace as well; we're also helping Fred with that as well. So >>I mean, I want to just add onto that, you know, one of the things that's unique about The Room is that we're trying to really build a long-term relationship with talent. So imagine joining The Room as a 20 year old and being part of it until you're 60. So you're going to have a lot of data that you collect on someone as they progress through different stages of their career, and the ability for us to leverage that data, um, and continuously learn about someone's, you know, skills and values, and use, um, predictive algorithms to be able to match them to the right opportunities at the right time of their lives.
And this is where the machine learning comes in and the, you know, the data lake that we're building to build to really store this massive data that we're going to be building on the top talent to the world. >>You know, that's a really good point. It's a list that's like big trend in tech where it's, it's still it's over the life's life of the horizon of the person. And it's also blends community, exactly nurturing, identifying, and assisting. But at the same day, not just giving people the answer, they got to grow on their own, but some people grow differently. So again, progressions are nonlinear sometimes and creativity can come out of nowhere. Got it. Uh, which brings me up to my number one question, because this always was on my mind is how do you spot talent? What's the secret sauce? >>Well, there is no real secret source because every person is unique. So what we look for are people who have an extra dose of five things, courage, passion, resilience, imagination, and good values, right? And this is what we're looking for. And you will someone who is unusually driven to achieve great things. Um, so of course, you know, you look at it from a combination of their, their training, you know, what they, what they've learned, but also what they've actually done in the workplace and feedback that you get from previous employers and data that we collect through our own interactions with this person. Um, and so we screened them through, you know, with the town that we had, didn't fly, we take them through really rigorous selection process. So, um, it takes, uh, for example, people go through an online assessments and then they go through an in-person interview and then we'll take them through a one to three month bootcamp to really identify, you know, people who are exceptional and of course get data from different sources about the person as well. >>Sandy, how do you see this collaboration helping, uh, your other clients? I mean, obviously talent, cross pollinates, um, learnings, what's your, you see this level of >>It has, uh, you know, AWS grows, obviously we're going to need more talent, especially in Africa because we're growing so rapidly there and there's going to be so much talent available in Africa here in just a few short years. Most of the tech talent will be in Africa. I think that that's really essential, but also as looking after my partners, I had Fred today on the keynote explaining to all my partners around the world, 55,000 streaming folks, how they can also leverage the room to fill some of their roles as well. Because if you think about it, you know, we heard from Presidio there's 3 million open cyber security roles. Um, you know, we're training 20 of mine million cloud folks because we have a gap. We see a gap around the world. And part of my responsibility with partners is making sure that they can get access to the right skills. And we're counting on the room and what Fred has produced to produce some of those great skills. You have AI, AML and dev ops. Tell us some of the areas you haven't. >>You know, we're looking at, uh, business intelligence, data science, um, full-stack software engineering, cybersecurity, um, you know, IOT talent. So fields that, um, the world needs a lot more talented. And I think today, a lot of technology, um, talent is moving from one place to another and what we need is new supply. And so what the room is doing is not only a community of top 10, but we're actually producing and training a lot more new talent. 
And that was going to hopefully, uh, remove a key bottleneck that a lot of companies are facing today as they try to undergo their digital transformation. >>Well, maybe you can add some hosts on there. We need some CUBE hosts, come on, always looking for more talent on the set. You could be there. >>Yeah. The other interesting thing, John: Fred and I were on stage today, and he was talking about how the first narrative written for EC2 was written by a gentleman out of South Africa. So think about that, right? EC2 talent. And he was talking about Elon Musk being, you know, South African, right? So think about all the great talent that exists there. There you go. So how do you get access to that talent? And that's why we're so excited to partner with Fred. Not only is he wicked impressive, one of Time's most influential people, but his mission, his life purpose, has really been to develop this great talent. And for us, that gets us really excited because we, yeah, >>I think there's plenty of opportunities too, around new business models. In the US, for instance, um, my friend started Upstart, where they were betting on people almost like a stock market, you know, almost like currency: we'll fund you and you pay us back. And there's all kinds of gamification techniques that you can start to weave into the system. Exactly. As you get the flywheel going, exactly, you can look at it holistically and say, hey, how do we get more people in and harvest the value of knowledge? >>That's exactly it. I mean, one of the elements of the technology platform that we developed with Amazon, with AWS, is the Room Intelligence Platform, and in there is something called legacy points. So every time you, as a member of The Room, give someone else an opportunity, you invest in their venture, you hire them, you mentor them, you get points, and you can leverage those points for some really cool experiences, right? So you want to gamify, um, this community that is, uh, you know, essentially crowdsourcing opportunities, and you're not only getting things from The Room, but you're also giving to others to enable everyone to grow. >>Yeah. What's the coolest thing you've seen? And this is a great initiative, it's a great model. I think this is the future, 'cause I'm a big believer that communities, groups, as we get into this hybrid world, it's going to open up the virtualization. What the virtual world has shown us is virtualization, which is a cloud technology, when Amazon started with Xen, which is virtualization technology. But virtualization, conceptually, is replicating things. So if you think hybrid world, you can blend and connect people together. So now you have this social construct, this connective tissue between relationships, and it's always evolving. You know this, and you've been involved in community from, from, from the early days. When you have that social evolution, it's not software as a mechanism, it's a human thing. Exactly. It's an organism, it evolves. And so if you can get the software to think like that, and the group to drive the behavior, it's not community software. >>Exactly. I mean, we say that The Room is not an online community, it's really an offline community powered by technology. So our vision is to actually have physical rooms in different cities around the world where this talent gathers. But imagine showing up at a, at a Room space and we've got the technology to know what your interests are.
We know that you're working on a new venture and there's this, there's a venture capitalists in that area, investing that venture, we can connect you right then that space powered by the, >>And then you can have watch parties. For instance, there's an event going on in us. You can do some watch parties and time shifted and then re replicated online and create a localization, but yet have that connection in >>Present. Exactly, exactly. Exactly. So what are the >>Learnings, what's your big learning share with the audience? What you've learned, because this is really kind of on the front edge of the new kind of innovation we're seeing, being enabled with software. >>I mean, one thing we're learning is that, uh, talent is truly, uh, evenly distribute around the world, but what is not as opportunity. And so, um, there's some truly exceptional talent that is hidden and on tap today. And if we can, you know, and, and today with the COVID pandemic companies or around the world, a lot more open to hiring more talent. So there's a huge opportunity to access new talent from, from sources that haven't been tapped before. Well, but also learnings the power of blending, the online and offline world. So, um, you know, the room is, as I mentioned, brings people together, normally in line, but also offline. And so when you're able to meet talent and actually see someone's personality and get a sense of the culture fit the 360 degree for your foot, some of that, you can't just get on a LinkedIn. Yes. That I built it to make a decision, to hire someone who is much better. And finally, we're also learning about the importance of long-term relationships. One of my motives in the room is relationships not transactions where, um, you actually get to meet someone in an environment where they're not pretending in an interview and you get to really see who they are and build relationships with them before you need to hide them. And these are some really unique ways that we think we can redefine how talent finds opportunity in the 21st. So >>You can put a cube in every room, we pick >>You up because, >>And the cube, what we do here is that when people collaborate, whether they're doing an interview together, riffing and sharing content is creating knowledge, but that shared experience creates a bonding. So when you have that kind of mindset and this room concept where it's not just resume, get a job, see you later, it's learning, having peers and colleagues and people around you, and then seeing them in a journey, multiple laps around the track of humans >>And going through a career, not just a job. >>Yes, exactly. And then, and then celebrating the ups and downs in learning. It's not always roses, as you know, it's always pain before you accelerate. >>Exactly. And you never quite arrive at your destination. You're always growing, and this is where technology can really play. >>Okay. So super exciting. Where's this go next, Sandy. And next couple of minutes left in. >>So, um, one of the things that we've envisioned, so this is not done yet, but, um, Fred and I imagined like, what if you could have an Alexa set up and you could say, Hey, you know, Alexa, what should be my next job? Or how should I go train? Or I'm really interested in being on a Ted talk. What could I do having an Alexa skill might be a really cool thing to do. And with the great funding that Fred Scott and you should talk about the $400 million to that, he's already raised $400 million. I mean, there, I think the sky's the limit on platforms. 
Like >>That's a nice chunk of change. There it is. We've got some fat financing as they say, >>But, well, it's a big mission. So to request significant resources, >>Who's backing you guys. What's the, who's the, where's the money coming from? >>It's coming from, um, the MasterCard foundation. They, our biggest funder, um, as well as, um, some philanthropists, um, and essentially these are people who truly see the potential, uh, to unlock, um, opportunity for millions of people global >>For Glen, a global scale. The vision has global >>Executive starting in Africa, but truly global. Our vision is eventually to have a community of about 10 to 20 million of the most extraordinary doers in the world, in this community, and to connect them to opportunity >>Angela and diverse John. I mean, this is the other thing that gets me excited because innovation comes from diversity of thought and given the community, we'll have so many diverse individuals in it that are going to get trained and mentored to create something that is amazing for their career as well. That really gets me excited too, as well as Amazon website, >>Smart people, and yet identifying the fresh voices and the fresh minds that come with it, all that that comes together, >>The social capital that they need to really accelerate their impact. >>Then you read the room and then you get wherever you need. Thanks so much. Congratulations on your great mission. Love the room. Um, you need to be the in Cuban, every room, you gotta get those fresh voices out there. See any graduates on a great project, super exciting. And SageMaker, AI's all part of, it's all kind of, it's a cool wave. It's fun. Can I join? Can I play? I tell you I need a room. >>I think he's top talent. >>Thanks so much for coming. I really appreciate your insight. Great stuff here, bringing you all the action and knowledge and insight here at re-invent with the cube two sets on the floor. It's a hybrid event. We're in person in Las Vegas for a real event. I'm John ferry with the cube, the leader in global tech coverage. Thanks for watching.

Published Date : Dec 2 2021

SUMMARY :

John Furrier talks with Sandy Carter, worldwide vice president of public sector partners and programs at AWS, and Fred Swaniker, founder and chief curator of The Room, at AWS re:Invent 2021. They discuss The Room's mission to help the world's most extraordinary doers fulfill their potential, its roughly 5,000 members across 77 countries, how AWS supplies the cloud technology, training and certifications behind the community, the plan to develop and connect millions of skilled technologists in Africa and beyond, the Room Intelligence Platform and its legacy points, and the backing from the Mastercard Foundation and other funders.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Amazon | ORGANIZATION | 0.99+
Fred Swanick | PERSON | 0.99+
Fred | PERSON | 0.99+
Ian Musk | PERSON | 0.99+
Fred Swaniker | PERSON | 0.99+
Africa | LOCATION | 0.99+
20 | QUANTITY | 0.99+
20 year | QUANTITY | 0.99+
John | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Sandy Carter | PERSON | 0.99+
Sandy | PERSON | 0.99+
South Africa | LOCATION | 0.99+
Las Vegas | LOCATION | 0.99+
Fred Scott | PERSON | 0.99+
$400 million | QUANTITY | 0.99+
60 | QUANTITY | 0.99+
two sets | QUANTITY | 0.99+
3 million | QUANTITY | 0.99+
360 degree | QUANTITY | 0.99+
today | DATE | 0.99+
LinkedIn | ORGANIZATION | 0.99+
U S | LOCATION | 0.99+
Angela | PERSON | 0.99+
77 different countries | QUANTITY | 0.99+
one | QUANTITY | 0.98+
first | QUANTITY | 0.98+
Glen | PERSON | 0.98+
3% | QUANTITY | 0.98+
John ferry | PERSON | 0.98+
five things | QUANTITY | 0.97+
One | QUANTITY | 0.97+
first narrative | QUANTITY | 0.96+
three month | QUANTITY | 0.96+
about 10 | QUANTITY | 0.95+
55,000 streaming folks | QUANTITY | 0.94+
about 5,000 members | QUANTITY | 0.93+
20 million | QUANTITY | 0.92+
First | QUANTITY | 0.92+
million | QUANTITY | 0.92+
Alexa | TITLE | 0.91+
MasterCard foundation | ORGANIZATION | 0.87+
south African | OTHER | 0.87+
3 million open cyber | QUANTITY | 0.87+
millions of people | QUANTITY | 0.87+
Presidio | ORGANIZATION | 0.84+
21st | QUANTITY | 0.82+
Cuban | LOCATION | 0.81+
Ted talk | TITLE | 0.77+
top 10 | QUANTITY | 0.74+
COVID pandemic | EVENT | 0.72+
number one question | QUANTITY | 0.72+
one place | QUANTITY | 0.68+
top two | QUANTITY | 0.64+
re:Invent | EVENT | 0.62+
SageMaker | ORGANIZATION | 0.59+
ADA | TITLE | 0.56+
The Room | ORGANIZATION | 0.52+
Africans | PERSON | 0.5+
2021 | DATE | 0.49+
2021 | TITLE | 0.48+
Zen | COMMERCIAL_ITEM | 0.4+
lexa | TITLE | 0.38+

Kumaran Siva, AMD | IBM Think 2021


 

>>from around the globe. It's the >>cube >>With digital coverage of IBM think 2021 brought to you by IBM. Welcome back to the cube coverage of IBM Think 2021. I'm john for the host of the cube here for virtual event Cameron Siva who's here with corporate vice president with a M. D. Uh CVP and business development. Great to see you. Thanks for coming on the cube. >>Nice to be. It's an honor to be here. >>You know, love A. M. D. Love the growth, love the processors. Epic 7000 and three series was just launched. Its out in the field. Give us a quick overview of the of the of the processor, how it's doing and how it's going to help us in the data center and the edge >>for sure. No this is uh this is an exciting time for A. M. D. This is probably one of the most exciting times uh to be honest and in my 2020 plus years of uh working in sex industry, I think I've never been this excited about a new product as I am about the the third generation ethic processor that were just announced. Um So the Epic 7003, what we're calling it a series processor. It's just a fantastic product. We not only have the fastest server processor in the world with the AMG Epic 7763 but we also have the fastest CPU core so that the process of being the complete package to complete socket and then we also the fastest poor in the world with the the Epic um 72 F three for frequency. So that one runs run super fast on each core. And then we also have 64 cores in the CPU. So it's it's addressing both kind of what we call scale up and scale out. So it's overall overall just just an enormous, enormous product line that that I think um you know, we'll be we'll be amazing within within IBM IBM cloud. Um The processor itself includes 256 megabytes of L three cache, um you know, cash is super important for a variety of workloads in the large cache size. We have shown our we've seen scale in particular cloud applications, but across the board, um you know, database, uh java all sorts of things. This processor is also based on the Zen three core, which is basically 19% more instructions per cycle relative to ours, N two. So that was the prior generation, the second generation Epic Force, which is called Rome. So this this new CPU is actually quite a bit more capable. It runs also at a higher frequency with both the 64 4 and the frequency optimized device. Um and finally, we have um what we call all in features. So rather than kind of segment our product line and charge you for every little, you know, little thing you turn on or off. We actually have all in features includes, you know, really importantly security, which is becoming a big, big team and something that we're partnering with IBM very closely on um and then also things like 628 lanes of pc I E gen four, um are your faces that grew up to four terabytes so you can do these big large uh large um in memory databases. The pc I interfaces gives you lots and lots of storage capability so all in all super products um and we're super excited to be working with IBM honest. >>Well let's get into some of the details on this impact because obviously it's not just one place where these processes are going to live. You're seeing a distributed surface area core to edge um, cloud and hybrid is now in play. It's pretty much standard now. Multi cloud on the horizon. Company's gonna start realizing, okay, I gotta put this to work and I want to get more insights out of the data and civilian applications that are evolving on this. 
But you guys have seen some growth in the cloud with the Epic processors, what can customers expect and why our cloud providers choosing Epic processors, >>you know, a big part of this is actually the fact that I that am be um delivers upon our roadmap. So we, we kind of do what we say and say what we do and we delivered on time. Um so we actually announced I think was back in august of 2019, their second generation, Epic part and then now in March, we are now in the third generation. Very much on schedule. Very much um, intern expectations and meeting the performance that we had told the industry and told our customers that we're going to meet back then. So it's a really super important pieces that our customers are now learning to expect performance, jenin, Jenin and on time from A. M. D, which is, which is uh, I think really a big part of our success. The second thing is, I think, you know, we are, we are a leader in terms of the core density that we provide and cloud in particular really values high density. So the 64 cores is absolutely unique today in the industry and that it has the ability to be offered both in uh bare metal. Um, as we have been deployed in uh, in IBM cloud and also in virtualized type environment. So it has that ability to spend a lot of different use cases. Um and you can, you know, you can run each core uh really fast, But then also have the scale out and then be able to take advantage of all 64 cores. Each core has two threads up to 128 threads per socket. It's a super powerful uh CPU and it has a lot of value for um for the for the cloud cloud provider, they're actually about over 400 total instances by the way of A. M. D processors out there. And that's all the flavors, of course, not just that they're generation, but still it's it's starting to really proliferate. We're trying to see uh M d I think all across the cloud, >>more cores, more threads all goodness. I gotta ask you, you know, I interviewed Arvin the ceo of IBM before he was Ceo at a conference and you know, he's always been, I know him, he's always loved cloud, right? So, um, but he sees a little bit differently than just being like copying the clouds. He sees it as we see it unfolding here, I think Hybrid. Um, and so I can almost see the playbook evolving. You know, Red has an operating system, Cloud and Edge is a distributed system, it's got that vibe of a system architecture, almost got processors everywhere. Could you give us a sense of the over an overview of the work you're doing with IBM Cloud and what a M. D s role is there? And I'm curious, could you share for the folks watching too? >>For sure. For sure. By the way, IBM cloud is a fantastic partner to work with. So, so, first off you talked about about the hybrid, hybrid cloud is a really important thing for us and that's um that's an area that we are definitely focused in on. Uh but in terms of our specific joint partnerships and we do have an announcement last year. Um so it's it's it's somewhat public, but we are working together on Ai where IBM is a is an undisputed leader with Watson and some of the technologies that you guys bring there. So we're bringing together, you know, it's kind of this real hard work goodness with IBM problems and know how on the AI side. In addition, IBM is also known for um you know, really enterprise grade, yeah, security and working with some of the key sectors that need and value, reliability, security, availability, um in those areas. 
Uh and so I think that partnership, we have quite a bit of uh quite a strong relationship and partnership around working together on security and doing confidential computer. >>Tell us more about the confidential computing. This is a joint development agreement, is a joint venture joint development agreement. Give us more detail on this. Tell us more about this announcement with IBM cloud, an AMG confidential computing. >>So that's right. So so what uh you know, there's some key pillars to this. One of this is being able to to work together, define open standards, open architecture. Um so jointly with an IBM and also pulling in something assets in terms of red hat to be able to work together and pull together a confidential computer that can so some some key ideas here, we can work with work within a hybrid cloud. We can work within the IBM cloud and to be able to provide you with, provide, provide our joint customers are and customers with uh with unprecedented security and reliability uh in the cloud, >>what's the future of processors, I mean, what should people think when they expect to see innovation? Um Certainly data centers are evolving with core core features to work with hybrid operating model in the cloud. People are getting that edge relationship basically the data centers a large edge, but now you've got the other edges, we got industrial edges, you got consumers, people wearables, you're gonna have more and more devices big and small. Um what's the what's the road map look like? How do you describe the future of a. M. D. In in the IBM world? >>I think I think R I B M M D partnership is bright, future is bright for sure, and I think there's there's a lot of key pieces there. Uh you know, I think IBM brings a lot of value in terms of being able to take on those up earlier, upper uh layers of software and that and the full stack um so IBM strength has really been, you know, as a systems company and as a software company. Right, So combining that with the Andes Silicon, uh divided and see few devices really really is is it's a great combination, I see, you know, I see um growth in uh you know, obviously in in deploying kind of this, this scale out model where we have these very large uh large core count Cpus I see that trend continuing for sure. Uh you know, I think that that is gonna, that is sort of the way of the future that you want cloud data applications that can scale across multi multiple cores within the socket and then across clusters of Cpus with within the data center um and IBM is in a really good position to take advantage of that to go to, to to drive that within the cloud. That income combination with IBM s presence on prem uh and so that's that's where the hybrid hybrid cloud value proposition comes in um and so we actually see ourselves uh you know, playing in both sides, so we do have a very strong presence now and increasingly so on premises as well. And we we partner we were very interested in working with IBM on the on on premises uh with some of some of the key customers and then offering that hybrid connectivity onto, onto the the IBM cloud as well. >>I B M and M. D. Great partnership, great for clarifying and and sharing that insight come, I appreciate it. Thanks for for coming on the cube, I do want to ask you while I got you here. Um kind of a curveball question if you don't mind. As you see hybrid cloud developing one of the big trends is this ecosystem play right? 
So you're seeing connections between IBM and their and their partners being much more integrated. So cloud has been a big KPI kind of model. You connect people through a. P. I. S. There's a big trend that we're seeing and we're seeing this really in our reporting on silicon angle the rise of a cloud service provider within these ecosystems where hey, I could build on top of IBM cloud and build a great business. Um and as I do that, I might want to look at an architecture like an AMG, how does that fit into to your view as a doing business development over at A. M. D. I mean because because people are building on top of these ecosystems are building their own clouds on top of cloud, you're seeing data. Cloud, just seeing these kinds of clouds, specialty clouds. So I mean we could have a cute cloud on top of IBM maybe someday. So, so I might want to build out a whole, I might be a cloud. So that's more processors needed for you. So how do you see this enablement? Because IBM is going to want to do that, it's kind of like, I'm kind of connecting the dots here in real time, but what's your, what's your take on that? What's your reaction? >>I think, I think that's I think that's right and I think m d isn't, it isn't a pretty good position with IBM to be able to, to enable that. Um we do have some very significant osD partnerships, a lot of which that are leveraged into IBM um such as Red hat of course, but also like VM ware and Nutanix. Um this provide these always V partners provide kind of the base level infrastructure that we can then build upon and then have that have that A P I. And be able to build build um uh the the multi cloud environments that you're talking about. Um and I think that, I think that's right. I think that is that is one of the uh you know, kind of future trends that that we will see uh you know, services that are offered on top of IBM cloud that take advantage of the the capabilities of the platform that come with it. Um and you know, the bare metal offerings that that IBM offer on their cloud is also quite unique um and hyper very performance. Um and so this actually gives um I think uh the the kind of uh call the medic cloud that unique ability to kind of go in and take advantage of the M. D. Hardware at a performance level and at a um uh to take advantage of that infrastructure better than they could in another cloud environments. I think that's that's that's actually very key and very uh one of the one of the features of the IBM problems that differentiates it >>so much headroom there corns really appreciate you sharing that. I think it's a great opportunity. As I say, if you're you want to build and compete. Finally, there's no with the white space with no competition or be better than the competition. So as they say in business, thank you for coming on sharing. Great great future ahead for all builders out there. Thanks for coming on the cube. >>Thanks thank you very much. >>Okay. IBM think cube coverage here. I'm john for your host. Thanks for watching. Mm

Published Date : May 12 2021


Kumaran Siva, AMD | IBM Think 2021


 

>> From around the globe, it's theCube, with digital coverage of IBM Think 2021, brought to you by IBM. Welcome back to theCube's coverage of IBM Think 2021. I'm John Furrier, host of theCube, here for the virtual event with Kumaran Siva, corporate vice president at AMD, CVP of business development. Great to see you, thanks for coming on theCube. >> Nice to be here. It's an honor to be here. >> You know, love AMD, love the growth, love the processors. The EPYC 7003 series was just launched, it's out in the field. Give us a quick overview of the processor, how it's doing and how it's going to help us in the data center and on the edge. >> For sure. This is an exciting time for AMD. This is probably one of the most exciting times, to be honest, in my 20-plus years of working in this industry. I think I've never been this excited about a new product as I am about the third generation EPYC processor that we just announced, the EPYC 7003 series. It's just a fantastic product. We not only have the fastest server processor in the world with the AMD EPYC 7763, we also have the fastest CPU core, so the processor is the complete package, the complete socket, and then we also have the fastest core in the world with the EPYC 72F3 for frequency. That one runs super fast on each core. And then we also have 64 cores in the CPU, so it's addressing both what we call scale-up and scale-out. Overall it's just an enormous product line that I think will be amazing within IBM Cloud. The processor itself includes 256 megabytes of L3 cache. Cache is super important for a variety of workloads, and with the large cache size we have seen it scale in particular cloud applications, but across the board: database, Java, all sorts of things. This processor is also based on the Zen 3 core, which delivers basically 19% more instructions per cycle relative to Zen 2. That was the prior generation, the second generation EPYC, which was called Rome. So this new CPU is quite a bit more capable, and it runs at a higher frequency, with both the 64-core part and the frequency-optimized device. And finally, we have what we call all-in features. Rather than segmenting our product line and charging you for every little thing you turn on or off, we have all-in features. That includes, really importantly, security, which is becoming a big theme and something we're partnering with IBM very closely on, and then also things like 128 lanes of PCIe Gen 4 and memory interfaces that go up to four terabytes, so you can do these big, large in-memory databases, and the PCIe interfaces give you lots and lots of storage capability. So all in all, a super product, and we're super excited to be working with IBM on this. >> Well, let's get into some of the details on this impact, because obviously it's not just one place where these processors are going to live. You're seeing a distributed surface area, core to edge. Cloud and hybrid are now in play, pretty much standard now, with multi-cloud on the horizon. Companies are going to start realizing, okay, I've got to put this to work, and I want to get more insights out of the data and the applications that are evolving on this.
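Before going further, a quick back-of-the-envelope check on the numbers Kumaran just walked through: 64 cores, two threads per core, 256 MB of L3 cache and up to 4 TB of memory per socket. The sketch below is purely illustrative, it just tallies those quoted figures per socket and per dual-socket server; it is not an AMD tool.

```python
# Back-of-the-envelope tally of the EPYC 7003 figures quoted in the conversation.
# Illustrative only: the numbers come from the interview, not from a spec database.
from dataclasses import dataclass


@dataclass
class SocketSpec:
    name: str
    cores: int
    threads_per_core: int
    l3_cache_mb: int
    max_memory_tb: int

    @property
    def threads_per_socket(self) -> int:
        return self.cores * self.threads_per_core


milan = SocketSpec(name="EPYC 7763", cores=64, threads_per_core=2,
                   l3_cache_mb=256, max_memory_tb=4)

print(f"{milan.name}: {milan.threads_per_socket} threads per socket")   # 128
print(f"Dual-socket server: {2 * milan.threads_per_socket} threads, "
      f"{2 * milan.l3_cache_mb} MB L3, up to {2 * milan.max_memory_tb} TB memory")
```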
But you guys have seen some growth in the cloud with the EPYC processors. What can customers expect, and why are cloud providers choosing EPYC processors? >> You know, a big part of this is actually the fact that AMD delivers on our roadmap. We do what we say and say what we do, and we deliver on time. We announced, I think it was back in August of 2019, the second generation, and now in March we announced the third generation, very much on schedule, very much in line with expectations, and meeting the performance that we had told the industry and told our customers we were going to meet back then. So a really important piece is that our customers have learned to expect performance, gen on gen, and on time, from AMD, which is, I think, a big part of our success. The second thing is that we are a leader in terms of the core density that we provide, and cloud in particular really values high density. The 64 cores is absolutely unique today in the industry, and it has the ability to be offered both in bare metal, as it has been deployed in IBM Cloud, and also in virtualized environments. So it can span a lot of different use cases. You can run each core really fast, but then also scale out and take advantage of all 64 cores. Each core has two threads, up to 128 threads per socket. It's a super powerful CPU and it has a lot of value for a cloud provider. There are actually over 400 total instances, by the way, of AMD processors out there, and that's across all the flavors, of course, not just the third generation, but still, it's starting to really proliferate. We're starting to see AMD, I think, all across the cloud. >> More cores, more threads, all goodness. I've got to ask you: I interviewed Arvind, the CEO of IBM, before he was CEO, at a conference, and he's always been, I know him, he's always loved cloud, right? But he sees it a little bit differently than just copying the clouds. He sees it as we see it unfolding here, I think: hybrid. And so I can almost see the playbook evolving. Red Hat has an operating system, cloud and edge is a distributed system, it's got that vibe of a system architecture, and you've got processors everywhere. Could you give us an overview of the work you're doing with IBM Cloud and what AMD's role is there? And I'm curious, could you share that for the folks watching too? >> For sure. By the way, IBM Cloud is a fantastic partner to work with. First off, you talked about hybrid; hybrid cloud is a really important thing for us, and that's an area we are definitely focused in on. But in terms of our specific joint partnerships, we did an announcement last year, so it's somewhat public: we are working together on AI, where IBM is an undisputed leader with Watson and some of the technologies that you guys bring there. So we're bringing together this real hardware goodness with IBM's prowess and know-how on the AI side. In addition, IBM is also known for really enterprise-grade security, and for working with some of the key sectors that need and value reliability, security and availability in those areas.
And so I think that partnership, we have quite a strong relationship and partnership around working together on security and doing confidential computing. >> Tell us more about the confidential computing. This is a joint development agreement. Give us more detail on this announcement with IBM Cloud and AMD on confidential computing. >> That's right. There are some key pillars to this. One of them is being able to work together to define open standards and open architecture, jointly with IBM, and also pulling in some of the assets in terms of Red Hat, to be able to work together and pull together confidential computing. Some key ideas here: we can work within a hybrid cloud, we can work within the IBM Cloud, and be able to provide our joint customers with unprecedented security and reliability in the cloud. >> What's the future of processors? What should people expect in terms of innovation? Certainly data centers are evolving with core features to work with a hybrid operating model in the cloud. People are getting that edge relationship; basically the data center is a large edge, but now you've got the other edges, industrial edges, consumers, wearables. You're going to have more and more devices, big and small. What does the roadmap look like? How do you describe the future of AMD in the IBM world? >> I think our IBM-AMD partnership is bright, the future is bright for sure, and there are a lot of key pieces there. IBM brings a lot of value in terms of being able to take on those upper layers of software and the full stack; IBM's strength has really been as a systems company and as a software company. So combining that with the AMD silicon and EPYC CPU devices really is a great combination. I see growth in deploying this scale-out model where we have these very large core-count CPUs, and I see that trend continuing for sure. I think that is the way of the future: you want cloud-native applications that can scale across multiple cores within the socket and then across clusters of CPUs within the data center, and IBM is in a really good position to take advantage of that and drive it within the cloud. That, in combination with IBM's presence on prem, is where the hybrid cloud value proposition comes in. So we actually see ourselves playing on both sides. We have a very strong presence now, and increasingly so, on premises as well, and we partner, we're very interested in working with IBM on premises with some of the key customers, and then offering that hybrid connectivity onto the IBM Cloud as well. >> IBM and AMD, a great partnership. Thanks for clarifying and sharing that insight, Kumaran, I appreciate it, and thanks for coming on theCube. I do want to ask you while I've got you here, kind of a curveball question if you don't mind. As you see hybrid cloud developing, one of the big trends is this ecosystem play, right?
So you're seeing connections between IBM and their partners being much more integrated. Cloud has been a big API kind of model; you connect people through APIs. There's a big trend that we're seeing, and we're seeing this really in our reporting on SiliconANGLE, the rise of the cloud service provider within these ecosystems, where, hey, I could build on top of IBM Cloud and build a great business. And as I do that, I might want to look at an architecture like AMD. How does that fit into your view, as someone doing business development over at AMD? Because people are building on top of these ecosystems, they're building their own clouds on top of clouds; you're seeing data clouds, these kinds of specialty clouds. I mean, we could have a CUBE cloud on top of IBM maybe someday. So I might want to build out a whole cloud, I might be a cloud, and that's more processors needed from you. So how do you see this enablement? Because IBM is going to want to do that. I'm kind of connecting the dots here in real time, but what's your take on that? What's your reaction? >> I think that's right, and I think AMD is in a pretty good position with IBM to be able to enable that. We do have some very significant OSV partnerships, a lot of which are leveraged into IBM, such as Red Hat of course, but also VMware and Nutanix. These OSV partners provide the base-level infrastructure that we can then build upon, and then have that API and be able to build the multi-cloud environments that you're talking about. And I think that's right, I think that is one of the future trends that we will see: services that are offered on top of IBM Cloud that take advantage of the capabilities of the platform that come with it. And the bare metal offerings that IBM offers on their cloud are also quite unique, and very high performance. So this actually gives what I've been calling a meta cloud the unique ability to go in and take advantage of the AMD hardware at a performance level, and to take advantage of that infrastructure better than they could in other cloud environments. I think that's actually very key, and one of the features of the IBM platform that differentiates it. >> So much headroom there. Kumaran, I really appreciate you sharing that. I think it's a great opportunity. As I say, if you want to build and compete, find the white space with no competition, or be better than the competition. So, as they say in business, thank you for coming on and sharing. A great future ahead for all the builders out there. Thanks for coming on theCube. >> Thanks, thank you very much. >> Okay, IBM Think, theCube coverage here. I'm John Furrier, your host. Thanks for watching.

Published Date : Apr 16 2021


Muddu Sudhakar, Investor | theCUBE on Cloud 2021


 

(gentle music) >> From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is theCube Conversation. >> Hi everybody, this is Dave Vellante. We're back at Cube on Cloud, and with me is Muddu Sudhakar. He's a long-time alum of theCube, a technologist and executive, a serial entrepreneur and an investor. Welcome, my friend, good to see you. >> Good to see you, Dave. Pleasure to be with you. Happy elections, I guess. >> Yeah, yeah. So I wanted to start with this work-from-home pivot; it's been amazing, and you've seen enterprise collaboration explode. I wrote a piece a couple of months ago looking at valuations of various companies, right around the Snowflake IPO, and I want to ask you about that, but I was looking at the valuations of various companies: Spotify, and Shopify, and of course Zoom was there. And I was looking at just simple revenue multiples, and I said, geez, Zoom actually might look undervalued, which is crazy, right? And of course the stock went up after that. And you see Teams, Microsoft Teams, and Microsoft doing a great job across the board, we've written about that, and you're seeing Webex exploding. I mean, what do you make of this whole enterprise collaboration play? >> No, I think, look, there is a trend here, right? This trend probably started before COVID, but COVID is going to accelerate this whole digital transformation. People are going to work remotely a lot more; not everybody's going to come back to the offices even after COVID. So I think this whole collaboration through Slack, and Zoom, and Microsoft Teams and Webex is going to be the new game now, right? Both the video, audio and chat solutions, that's really where the eyeballs will be. You're not going to spend time on all four of them, right? It's like every day on the consumer side you're going to spend time on your Gmail, Facebook, maybe Twitter, maybe Instagram; so like the consumer side in your personal life, you have something similar in the enterprise. The eyeballs are going to be in these platforms. >> Yeah. Well... >> But we're not going to take everything. >> Well, so you are right, there's a permanence to this, and I've got a lot of ground to cover with you. And I always like our conversations, Muddu, because you tell it like it is. I'm going to stay on that work-from-home pivot. You know a lot about security, and you've seen three big trends, mega trends, in security: endpoint, identity access management, and cloud security. You're seeing this in the stock prices of companies like CrowdStrike, Zscaler, Okta... >> Right. >> SailPoint... >> Right. I mean, they exploded as a result of the pandemic, and I think I'm inferring from your comment that you see that as permanent, but that's a real challenge from a security standpoint. What's the impact of Cloud there? >> No, it is an impact, but look, first, all these services are required to be Cloud, right? See, the whole idea is to collaborate and do these things. So you cannot be running an application, like you can't be running Confluence and SharePoint on-prem, and try to be on Zoom and MS Teams. That's why, if you look at Microsoft, they're very clever: they went with Office 365, SharePoint 365, and now they have MS Teams. So I think Cloud is going to drive all these workloads that you have been talking about a lot, right? You and John have been saying this for years now. The eruption of Cloud and SaaS services is the vehicle to drive this next-generation collaboration.
>> You know what's so cool? So Cloud obviously is the topic. I wonder how you look at the last 10 years of Cloud, and maybe we could project forward. I mean, the big three Cloud vendors are running at like $20 billion a quarter, and they're growing collectively at 35, 40% clips, so we're really approaching a hundred billion dollars for these three. And you hear stats like only 20% of the workloads are in the public Cloud, so it feels like we're just getting started. How do you look at the impact of Cloud on the market, as you say, over the last 10 years, and what do you expect going forward? >> No, I think it's very fascinating, right? I remember when theCube, you guys were talking about this 10 years back; now it's been what, more than 10 years, 15 years, since AWS came out with their first S3 service back in 2006. >> Right. >> Right? So I think, look, Cloud is going to accelerate even further, and the areas where it's going to accelerate are for different reasons. In the initial days it was all about startups, initial workloads, dev test and QA test; now you're talking about real production workloads moving towards the Cloud, right? Initially it was backup, and we really didn't care what got put there for backup. Now the Cloud will hold the primary services: your primary storage will be there, it's not going to be an EMC, it's not going to be a NetApp storage array, right? So workloads are going to shift, and the business applications will be running on the Cloud. And I'll make another prediction: take customer service and support. Customer service and support, again, will be running on the Cloud. You don't want to run that on a Dell server, or an IBM server, or an HP server, in your own hosted environment. That model doesn't hold, because there are no economies of scale. So to your point, what will drive Cloud for the next 10 years will be economies of scale. Where can you take out the cost? How can I save money? If you don't move to the Cloud, you won't save money. So all those workloads are going to go to the Cloud, for people who really want to save. If you stay on the ASP model, a hosted model, you're not going to save on costs; your costs will constantly go up, from a SaaS perspective. >> So that doesn't bode well for all the on-prem guys, and you hear a lot of the vendors that don't own a Cloud talk about repatriation, but the numbers don't support that. So what do those guys do? I mean, they're talking multi-cloud, of course they're talking hybrid, that's IBM's big play. How do you see it? >> I think, look, to me multi-cloud makes sense, right? You don't want to get locked into one vendor, so having Amazon, Microsoft, Google gives you multi-cloud. Even hybrid cloud does make sense, right? There'll be some workloads where we are still running an on-prem environment, we still have mainframe, so it's never going to be a hundred percent. But I would say the majority; your question is, can we get to 60, 70, 80% of workloads in the next 10 years? I think you will. I think by 2025, over the next five years, more than 70% of enterprise workloads will be on the Cloud. The remainder may be hybrid, may be on-prem, but again, it really doesn't matter. You have saved, and the bulk of your business is running on the Cloud. That's your cost saving, that's where you'll see the economies of scale, and that's where all the growth will happen.
>> So square the circle for me, because again, you hear the IDC stat, IBM's Ginni Rometty puts it out there a lot, that only 20% of the workloads are in the public Cloud and everything else is on-prem. But it's not a zero-sum game, right? I mean, the Cloud native stuff is growing like crazy and the on-prem stuff is flat to down, so what's going to happen? When you talk about 70% of workloads being in the Cloud, do you see those mission-critical apps moving in? I mean, are the insurance companies going to put their claims apps in the Cloud, are the financial services companies going to put their mission-critical workloads in the Cloud, or are they just going to develop new stuff that's Cloud native and sort of interacts with the on-prem? How do you see that playing out? >> Yeah, no, absolutely, a very good question. So two things will happen. If you take an enterprise, most businesses will move up the workloads that they should not be running on-prem. Obviously things like, as I said, I used the word SharePoint, right? SharePoint and Confluence, all the knowledge stuff, is still running in people's data centers. There's no reason. I've seen statistics that 70, 80% of on-prem SharePoint will move to SharePoint on the Cloud, so Microsoft is going to make tons of money on that, right? Same thing with databases: whether it's SQL Server, whether it's Oracle Database, things that you are running as a database will move to the Cloud, whether that is hosted in Oracle Cloud, or you're running Oracle or MongoDB or DynamoDB on AWS, or SQL Server on Microsoft. That's going to happen. Then what you're talking about is really the app concept, the applications themselves, the app server. Is the app server going to run on-prem, and how much of it is going to operate outside? There may be a hybrid cloud there; like, for example, Kafka. I may use a PaaS running Kafka as a service, or I may be using Elasticsearch for my indexing on AWS or Google Cloud, but I may be running my app locally. So there'll be some hybrid pieces, but what I would say is, for every application, 75% of your components will be on the Cloud. So even for the on-prem app, you're not going to be 100 percent on-prem. The components, the bill of materials, will move to the Cloud, your PaaS, your storage, because if you keep them on-prem, you need to buy all of that and hire the people. That's what is going to happen: from a component perspective, 70% of your bill of materials will move to the Cloud, even for an on-prem application. >> So, of course, there's the SaaS-ification of the industry in the last decade, and of my three favorite companies of the last decade, you've worked for two of them: Tableau, ServiceNow, and Splunk. I want to ask you about those, but I'm interested in the potential disruption there. I mean, you've got these SaaS companies, Salesforce of course is another one, that got started back in 1999. What do you see happening with those? I mean, we're basically building these sort of large SaaS platforms now. Do you think that in the Cloud native world developers can come at this from an angle where they can disrupt those companies, or are they too entrenched?
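A minimal sketch of the component point Muddu just made: the application itself can run on-prem or locally while its "bill of materials" (a managed Kafka, a managed Elasticsearch) is consumed as cloud endpoints. The environment variable names and endpoint defaults below are invented purely for illustration.

```python
# Minimal sketch of "the components move to the cloud, the app can run anywhere":
# the app reads its managed-service endpoints (Kafka, search) from the environment,
# so the same code runs on-prem or in the cloud. Names and defaults are made up.
import os


def load_component_endpoints() -> dict:
    return {
        # Managed components, typically consumed as cloud services.
        "kafka_bootstrap": os.environ.get("KAFKA_BOOTSTRAP", "broker.example-cloud:9092"),
        "search_endpoint": os.environ.get("SEARCH_URL", "https://search.example-cloud:9200"),
        # The app tier itself can stay wherever it makes sense.
        "app_location": os.environ.get("APP_LOCATION", "on-prem"),
    }


if __name__ == "__main__":
    for name, value in load_component_endpoints().items():
        print(f"{name}: {value}")
```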
I mean, look at ServiceNow. I don't know, $80 billion market cap or wherever they are, they're bigger than Workday; it's just amazing how much they've grown, and you feel like, okay, nothing can stop them. But there's always disruption in this industry. What are your thoughts on that? >> No, very good question. I think they'll be disrupted. To me, actually, to your point, ServiceNow is now close to a 100 billion, 95 billion market cap, which is crazy from a valuation perspective. I think the reason they'll be disrupted is that the SaaS vendors you talked about, ServiceNow and all of these, most of these services are truly not multi-tenant, or what you'd call Cloud Native. That is their architecture. Because of that, they will not be able to pass the savings back to the enterprises; the cost economics that the Cloud provides through multi-tenancy won't get passed along. The second reason they'll be disrupted is AI. So far we've talked about Cloud, but AI is the core. So it's not really just Cloud Native, Dave; I look at it in two pieces. AI is going to change things. See, all the SaaS vendors were created 20 years back, if you remember, with an operator typing into them; an administrator would type a Splunk query. I don't need a human to type a query anymore, the system will actually find it; that's how the whole security game has changed, right? So if you believe that, then AI at the core will disrupt all the SaaS vendors. So one angle is Cloud: a SaaS application will be Cloudified, because being SaaS is not the same as being Cloud, right? The second thing is that SaaS will also be, I call it, AI-fied. AI and machine learning will be driving at the core so that I don't need that many licenses, I don't need that many humans, I don't need that many administrators to manage, I call them the tuners. Once you get a driverless car, you don't need a thousand tuners to tune your Tesla or Google Waymo car. The same philosophy will apply to your DevOps, your administrators, your service management people that you need for ServiceNow and these products, Zendesk; with AI, that will be tremendously disrupted. >> So you're saying, okay, so yeah, I was going to ask you, won't the SaaS vendors be able to just inject AI into their platforms? And I guess I'm inferring from what you're saying, yeah, but a lot of the problems that they're solving are going to go away because of AI, is that right? And automation and RPA and things of that nature, is that right? >> Yes and no. So I'll tell you what, sorry, you have asked a very good question, so let me rephrase that question. What you're saying is, "Why can't the existing SaaS vendors do the AI?" >> Yes, right. >> Right. >> And the reason they can't do it is that their pricing model is by number of seats. So I'm not going to come to Dave and say, come on, come pay me less money. It's the same reason why would Ford and General Motors build an electric car? They're selling 10 million gasoline cars. There's no incentive for me. I'm not going to do any AI, I'm not going to come to you and say, hey, buy a hundred fewer licenses from me next year. So that is one reason why, even if these guys do any AI, it's going to be just, I call it, a whitewash, kind of like putting a paint brush on it, trying to show you some AI you did for marketing dynamics.
But at the core, if you really implement the AI and you take the driver out, how are you going to change the pricing model? Being a public company, you've got to take a hit on the pricing model and the price, and that's going to have a stock impact. So, to your earlier question, will somebody disrupt them? The ones who disrupt them will disrupt them on the pricing model. >> Right. So I want to ask you about that, because we saw Snowflake and its IPO, we were able to pore through its S-1, and they have a different pricing model. It's a true Cloud consumption model, whereas of course most SaaS companies are going to lock you in for at least a one-year term, maybe more, and then you buy the license and you've got to pay X; if you don't use it, you still have to pay for it. Snowflake's different. Actually, they have a different problem, that people are using it too much and the consumption is driving the CFO crazy because the bill is going up and up and up. But to me, that's the right model. It's just like the Amazon model, if you can justify it. So how do you see the pricing? That consumption model, actually, you're seeing some of the on-prem guys, HPE, Dell, doing as-a-service. They're kind of taking a page out of the last decade's SaaS model. So I think pricing is a real tricky one, isn't it? >> No, you nailed it. So think about how Snowflake, as a data warehouse disruptor, disrupted the open source vendors too. Imagine the playbook: you disrupted something that was at $0, right? It was open source with Cloudera, Hortonworks, MapR, that whole big data market, and they're also disrupting data warehouses like Netezza and Teradata. And they're charging more money, they're making more money while disrupting something at $0, because the pricing model is by consumption, as you talked about. The same thing is going to happen to ServiceNow and Zendesk, because their pricing model is by number of seats. People are going to say, how are my users going to ask? If you're an employee help desk, you're back to the original collaboration point: I may be on Slack, I could be on Zoom, I may be on MS Teams. I'm going to ask through those tools, and usage by employees into ServiceNow is the pricing model that people want to pay for. The more my employees use it, the more value I get. But I don't want to pay by number of seats. So the vendor who figures that out, and if you know me, that's where I've been since I started, that's the model I've tried to push, I love that, because that's the core of how you want to change the new game. >> I agree. I say, kill me with that problem. I mean, some people are trying to make it a criticism, but you hit on the point: if you pay more, it's only because you're getting more value out of it. So I want to flip the switch here a little bit and take a customer angle, something you've been on all sides of. And I want to talk a little bit about strategies; you've been a strategist, and I guess once a strategist, always a strategist. How should organizations be thinking about their approach to Cloud? It's different for different industries, and back when theCube started, in financial services Cloud was a four-letter word. Of course the age of the company is going to matter, but what's the framework for figuring out your Cloud strategy, to get to your 70% and really take advantage of the economics?
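Before the conversation moves to strategy, here is a toy comparison of the two pricing models discussed above: per-seat licensing, where cost tracks headcount, versus consumption pricing, where cost tracks usage. All prices and usage figures are invented; the point is only the shape of the curves when AI or automation reduces how much work humans have to do.

```python
# Toy comparison of the two pricing models discussed: per-seat licensing, where cost
# tracks headcount, versus consumption pricing, where cost tracks usage.
# Every number here is invented purely to show the shape of the two curves.
def seat_based_cost(seats: int, price_per_seat: float = 100.0) -> float:
    return seats * price_per_seat


def consumption_cost(units_used: int, price_per_unit: float = 0.05) -> float:
    return units_used * price_per_unit


# If AI or automation lets the same work get done with less human effort,
# the seat-based bill barely moves until seats are cut, while the
# consumption bill falls in step with the reduced usage.
scenarios = [(500, 2_000_000), (500, 1_200_000), (300, 1_200_000)]
for seats, units in scenarios:
    print(f"seats={seats:4d} usage={units:9,d}  "
          f"seat-based=${seat_based_cost(seats):10,.2f}  "
          f"consumption=${consumption_cost(units):10,.2f}")
```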
Should I be mono-cloud, multi-cloud, multi-vendor? What would you advise? >> Yeah, I actually call it the tech stack. Actually, you and John taught me what the tech stack was, like the LAMP stack; I think a new Cloud stack needs to come, and I think the bottom line should be this. First of all, anything with storage should be in the Cloud. If you want to start, whether you are in financial services or not, it doesn't matter, there's no way around it. I come from the cybersecurity side, I've seen it: your attacks will come more from insiders than from being on the Cloud, so storage has to be in the Cloud. Then comes compute and Kubernetes. If you really want to use containers and Kubernetes, it has to be in the public Cloud; leverage the compute they have there. On databases, if your data gravity is that strong, maybe run it on-prem, maybe have it on a hosted model, but there you have a choice between a hybrid Cloud and a public Cloud. Then on top, when it comes to the app, the app itself you can run locally or anywhere, the app and the database. Now, the areas you really want to go after and migrate are anything that's an enterprise workload that you don't need people to manage. You want your own team to move up in their careers; you don't want a thousand people looking at it, you don't want, for example, IT administrators calling central people to manage your compute and storage. Those workloads should move, right? You already saw Siebel move out to Salesforce. We saw collaboration already move out; Zoom is not running locally. You already saw SharePoint and knowledge management move up, with Box, Dropbox, you name it. The next to move is the SaaS workloads, right? Workday is running there, but Workday will go further into the Cloud. I bet at some point Zendesk and ServiceNow either put it on the public Cloud, or they have to create a product on the public Cloud. To your point, these public Cloud vendors are at a $2 trillion market cap; they're bigger than, I call them, nation-states. >> Yeah. >> So if I'm ServiceNow, I mean, there's $2 trillion of market cap between Amazon and Azure, I'm not going to compete with them, so I want to take this workload and run it there. So all these vendors, and that's where Shantanu from Adobe is pushing this, right? Adobe, Workday, Anaplan, all the SaaS vendors will move into the public Cloud, within these vendors. So those workloads need to move out, and once all those things start, then you'll start migrating what I call your procurement, and that's where RPA comes in. The other thing that we didn't talk about, back to your first question of what the next 10 years of Cloud will be: RPA. That third piece of Cloud is RPA, because if you have your systems on-prem, I can't automate them. I have to do a VPN into your house and then try to automate your systems, or your procurement, et cetera. So all these RPA vendors are still running on-prem, most of them, whether it's UiPath or Automation Anywhere. The Cloud should be where the brain is. That's what I call the octopus analogy: the brain is in the Cloud, the tentacles are everywhere, and they should manage it. But if my tentacles have to do a VPN into your house to manage it, I'm always going to have failures. So if you look at why RPA did not have the growth like Snowflake, like the Cloud, it's because they are running it on-prem, most of them, still.
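Before the RPA point continues below, here is a rough, toy restatement of the "new cloud stack" rule of thumb from the top of this answer: storage and Kubernetes favor the public cloud, databases can be public or hybrid, the app tier can run anywhere. It is just the speaker's framework paraphrased as code, not a real sizing or migration tool.

```python
# Toy restatement of the "new cloud stack" rule of thumb above: storage and
# Kubernetes favor the public cloud, databases can be public or hybrid, the app
# tier can run anywhere. Not a real sizing or migration tool.
PLACEMENT_RULES = {
    "storage": "public cloud",
    "kubernetes": "public cloud",
    "database": "public cloud or hybrid (hosted if data gravity demands it)",
    "app": "anywhere: local, hybrid, or cloud",
    "collaboration/saas": "public cloud (SaaS)",
}


def suggest_placement(workload: str) -> str:
    return PLACEMENT_RULES.get(workload, "evaluate case by case")


for w in ["storage", "kubernetes", "database", "app", "collaboration/saas", "mainframe"]:
    print(f"{w:>18}: {suggest_placement(w)}")
```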
80% of RPA revenue is on-prem, running on-prem; that needs to be Cloudified. So AI, RPA and SaaS are the three reasons Cloud will take off. >> Awesome, thank you for that. Now I want to flip the switch again. You're an investor, a multi-tool player here. Let's say you're an ecosystem player and you're looking at the landscape as an investor. Of course you've invested in the Cloud, because the Cloud is where it's at, but you've got to be careful as an ecosystem player to pick a spot that both provides growth and allows you to have a moat. I mean, that's why I'm really curious to see how Snowflake is going to compete, because they're competing with AWS, Microsoft, and Google, unlike Frank when he was at ServiceNow, where he was competing with BMC and with on-prem and he crushed it; the competitors are much more capable here. But it seems like maybe they've got a moat with multi-cloud, and that whole data sharing thing, we'll see. But what about that? Where are the opportunities? Where's the white space? I know there's a lot of white space, but what's the framework to look at, from an investor standpoint, or even a CEO standpoint, for where you want to place your bets? >> No, very good question. So look, as an investor and a board member with many companies, one thing I'd say is: if you come back and say, I want to create a next-generation Docker or a compute play, there's no way anybody's going to invest. So you can rule that out. Even if you want to do object storage or block storage, and I've been an investor and board member of so many storage companies, there's no way as an industry I'll write a check for compute or storage. If you want to create a next-generation network, a new switch, or restart Juniper or Cisco, there is no way. But if you come back and say, I want to create a next-generation VPN for remote working environments, where AI is at the core, I'm interested in that, right? Because if you look at how packets are dropped, there's no intelligence in the network switching today. The packets come, they get routed; the intelligence is not built into the network at the AI level. So if somebody comes in with AI, what good are all these NVIDIA GPUs, et cetera, if you cannot do wire-speed packet inspection, looking at the content and then routing the traffic? If I see it's a video packet and you're in Boston, they should be loading your packets faster, because you are a premium ISP customer. That intelligence has not gone in there. So you will see that, and a battle will happen in network switching, et cetera, right? So that is still an angle. But when it comes to platform services, remember when I was at Pivotal and VMware, Paul Maritz was my boss, and he would say, yes, platform as a service is a game already won by the Cloud guys.
So this is where at some point, these Cloud platforms, I call them aircraft carriers. They're not going to stay on the aircraft carriers, they're going to own the land as well. So they're going to move up to the SAS space. The question is you want to create a SAS service like CRM. They are not going to create a CRM like service, they may not create a sales force and service now, but if you're going to add a data warehouse, I can very well see Azure, Google, and AWS, going to create something to compute a Snowflake. Why would I not? It's so close to my database and data warehouse, I already have Redshift. So that's going to be nightlights, same reason, If you look at Netflix, you have a Netflix and you have Amazon prime. Netflix runs on Amazon, but you have Amazon prime. So you have the same model, you have Snowflake, and you'll have Redshift. The both will help each other, there'll be a... What do you call it? Coexistence will happen. But if you really want to invest, you want to invest in SAS companies. You do not want to be investing in a compliment players. You don't want to a feature. >> Yeah, that's great, I appreciate that perspective. And I wonder, so obviously Microsoft play in SAS, Google's got G suite. And I wonder if people often ask the Andy Jassy, you're going to move up the stack, you got to be an application, a SAS vendor, and you never say never with Atavist, But I wonder, and we were talking to Jerry Chen about this, years ago on theCube, and his angle was that Amazon will play, but they'll play through developers. They'll enable developers, and they'll participate, they'll take their, lick off the cone. So it's going to be interesting to see how directly Amazon plays, but at some point you got Tam expansion, you got to play in that space. >> Yeah, I'll give you an example of knowing, I got acquired by a couple of times by EMC. So I learned a lot from Joe Tucci and Paul Merage over the years. see Paul and Joe, what they did is to look at how 20 years, and they are very close to Boston in your area, Joe, what games did is they used to sell storage, but you know what he did, he went and bought the Apps to drive them. He bought like Legato, he bought Documentum, he bought Captiva, if you remember how he acquired all these companies as a services, he bought VMware to drive that. So I think the good angle that Microsoft has is, I'm a SAS player, I have dynamics, I have CRM, I have SharePoint, I have Collaboration, I have Office 365, MS Teams for users, and then I have the platform as Azure. So I think if I'm Amazon, (indistinct). I got to own the apps so that I can drive this workforce on my platform. >> Interesting. >> Just going to developers, like I know Jerry Chan, he was my peer a BMF. I don't think just literally to developers and that model works in open source, but the open source game is pretty much gone, and not too many companies made money. >> Well, >> Most companies pretty much gone. >> Yeah, he's right. Red hats not bad idea. But it's very interesting what you're saying there. And so, hey, its why Oracle wants to have Tiktok, running on their platform, right? I mean, it's going to. (laughing) It's going to drive that further integration. I wanted to ask you something, you were talking about, you wouldn't invest in storage or compute, but I wonder, and you mentioned some commentary about GPU's. Of course the videos has been going crazy, but they're now saying, okay, how do we expand our Team, they make the acquisition of arm, et cetera. 
What about this DPU thing, if you follow that, that data processing unit where they're like hyper dis-aggregation and then they reaggregate, and as an offload and really to drive data centric workloads. Have you looked at that at all? >> I did, I think, and that's a good angle. So I think, look, it's like, it goes through it. I don't know if you remember in your career, we have seen it. I used to get Silicon graphics. I saw the first graphic GPU, right? That time GPU was more graphic processor unit, >> Right, yeah, work stations. >> So then become NPUs at work processing units, right? There was a TCP/IP office offloading, if you remember right, there was like vector processing unit. So I think every once in a while the industry, recreated this separate unit, as a co-processor to the main CPU, because main CPU's inefficient, and it makes sense. And then Google created TPU's and then we have the new world of the media GPU's, now we have DPS all these are good, but what's happening is, all these are driving for machine learning, AI for the training period there. Training period Sometimes it's so long with the workloads, if you can cut down, it makes sense. >> Yeah. >> Because, but the question is, these aren't so specialized in nature. I can't use it for everything. >> Yup. >> I want Ideally, algorithms to be paralyzed, I want the training to be paralyzed, I want so having deep use and GPS are important, I think where I want to see them as more, the algorithm, there should be more investment from the NVIDIA's and these guys, taking the algorithm to be highly paralyzed them. (indistinct) And I think that still has not happened in industry yet. >> All right, so we're pretty much out of time, but what are you doing these days? Where are you spending your time, are you still in Stealth, give us a little glimpse. >> Yeah, no, I'm out of the Stealth, I'm actually the CEO of Aisera now, Aisera, obviously I invested with them, but I'm the CEO of Aisero. It's funded by Menlo ventures, Norwest, True, along with Khosla ventures and Ram Shriram is a big investor. Robin's on the board of Google, so these guys, look, we are going out to the collaboration game. How do you automate customer service and support for employees and then users, right? In this whole game, we talked about the Zoom, Slack and MS Teams, that's what I'm spending time, I want to create next generation service now. >> Fantastic. Muddu, I always love having you on you, pull punches, you tell it like it is, that you're a great visionary technologist. Thanks so much for coming on theCube, and participating in our program. >> Dave, it's always a pleasure speaking to you sir. Thank you. >> Okay. Keep it right there, there's more coming from Cuba and Cloud right after this break. (slow music)

Published Date : Jan 22 2021


Muddu Sudhakar | CUBE on Cloud


 

(gentle music) >> From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is theCube Conversation. >> Hi everybody, this is Dave Vellante. We're back at Cube on Cloud, and with me is Muddu Sudhakar. He's a long-time alum of theCube, a technologist and executive, a serial entrepreneur and an investor. Welcome, my friend, good to see you. >> Good to see you, Dave. Pleasure to be with you. Happy elections, I guess. >> Yeah, yeah. So I wanted to start with this work-from-home pivot; it's been amazing, and you've seen enterprise collaboration explode. I wrote a piece a couple of months ago looking at valuations of various companies, right around the Snowflake IPO, and I want to ask you about that, but I was looking at the valuations of various companies: Spotify, and Shopify, and of course Zoom was there. And I was looking at just simple revenue multiples, and I said, geez, Zoom actually might look undervalued, which is crazy, right? And of course the stock went up after that. And you see Teams, Microsoft Teams, and Microsoft doing a great job across the board, we've written about that, and you're seeing Webex exploding. I mean, what do you make of this whole enterprise collaboration play? >> No, I think, look, there is a trend here, right? This trend probably started before COVID, but COVID is going to accelerate this whole digital transformation. People are going to work remotely a lot more; not everybody's going to come back to the offices even after COVID. So I think this whole collaboration through Slack, and Zoom, and Microsoft Teams and Webex is going to be the new game now, right? Both the video, audio and chat solutions, that's really where the eyeballs will be. You're not going to spend time on all four of them, right? It's like every day on the consumer side you're going to spend time on your Gmail, Facebook, maybe Twitter, maybe Instagram; so like the consumer side in your personal life, you have something similar in the enterprise. The eyeballs are going to be in these platforms. >> Yeah. Well... >> But we're not going to take everything. >> Well, so you are right, there's a permanence to this, and I've got a lot of ground to cover with you. And I always like our conversations, Muddu, because you tell it like it is. I'm going to stay on that work-from-home pivot. You know a lot about security, and you've seen three big trends, mega trends, in security: endpoint, identity access management, and cloud security. You're seeing this in the stock prices of companies like CrowdStrike, Zscaler, Okta... >> Right. >> SailPoint... >> Right. I mean, they exploded as a result of the pandemic, and I think I'm inferring from your comment that you see that as permanent, but that's a real challenge from a security standpoint. What's the impact of Cloud there? >> No, it is an impact, but look, first, all these services are required to be Cloud, right? See, the whole idea is to collaborate and do these things. So you cannot be running an application, like you can't be running Confluence and SharePoint on-prem, and try to be on Zoom and MS Teams. That's why, if you look at Microsoft, they're very clever: they went with Office 365, SharePoint 365, and now they have MS Teams. So I think Cloud is going to drive all these workloads that you have been talking about a lot, right? You and John have been saying this for years now. The eruption of Cloud and SaaS services is the vehicle to drive this next-generation collaboration.
>> You know what's so cool? So Cloud obviously is the topic. I wonder how you look at the last 10 years of Cloud, and maybe we could project forward. I mean, the big three Cloud vendors are running at like $20 billion a quarter, and they're growing collectively at 35, 40% clips, so we're really approaching a hundred billion dollars for these three. And you hear stats like only 20% of the workloads are in the public Cloud, so it feels like we're just getting started. How do you look at the impact of Cloud on the market, as you say, over the last 10 years, and what do you expect going forward? >> No, I think it's very fascinating, right? I remember when theCube, you guys were talking about this 10 years back; now it's been what, more than 10 years, 15 years, since AWS came out with their first S3 service back in 2006. >> Right. >> Right? So I think, look, Cloud is going to accelerate even further, and the areas where it's going to accelerate are for different reasons. In the initial days it was all about startups, initial workloads, dev test and QA test; now you're talking about real production workloads moving towards the Cloud, right? Initially it was backup, and we really didn't care what got put there for backup. Now the Cloud will hold the primary services: your primary storage will be there, it's not going to be an EMC, it's not going to be a NetApp storage array, right? So workloads are going to shift, and the business applications will be running on the Cloud. And I'll make another prediction: take customer service and support. Customer service and support, again, will be running on the Cloud. You don't want to run that on a Dell server, or an IBM server, or an HP server, in your own hosted environment. That model doesn't hold, because there are no economies of scale. So to your point, what will drive Cloud for the next 10 years will be economies of scale. Where can you take out the cost? How can I save money? If you don't move to the Cloud, you won't save money. So all those workloads are going to go to the Cloud, for people who really want to save. If you stay on the ASP model, a hosted model, you're not going to save on costs; your costs will constantly go up, from a SaaS perspective. >> So that doesn't bode well for all the on-prem guys, and you hear a lot of the vendors that don't own a Cloud talk about repatriation, but the numbers don't support that. So what do those guys do? I mean, they're talking multi-cloud, of course they're talking hybrid, that's IBM's big play. How do you see it? >> I think, look, to me multi-cloud makes sense, right? You don't want to get locked into one vendor, so having Amazon, Microsoft, Google gives you multi-cloud. Even hybrid cloud does make sense, right? There'll be some workloads where we are still running an on-prem environment, we still have mainframe, so it's never going to be a hundred percent. But I would say the majority; your question is, can we get to 60, 70, 80% of workloads in the next 10 years? I think you will. I think by 2025, over the next five years, more than 70% of enterprise workloads will be on the Cloud. The remainder may be hybrid, may be on-prem, but again, it really doesn't matter. You have saved, and the bulk of your business is running on the Cloud. That's your cost saving, that's where you'll see the economies of scale, and that's where all the growth will happen.
>> So square the circle for me, because you hear the IDC stat, IBM's Ginni Rometty puts it out there a lot, that only 20% of workloads are in the public cloud and everything else is on-prem. But it's not a zero-sum game, right? The cloud-native stuff is growing like crazy and the on-prem stuff is flat to down, so what's going to happen? When you talk about 70% of workloads being in the cloud, do you see those mission-critical apps moving into the cloud? Are the insurance companies going to put their claims apps in the cloud, are the financial services companies going to put their mission-critical workloads in the cloud, or are they just going to develop new cloud-native stuff that interacts with the on-prem? How do you see that playing out? >> Yeah, no, absolutely, a very good question. Two things will happen. If you take an enterprise, the workloads that they should not be running on-prem, they'll move up. Obviously things like, as I said, SharePoint. SharePoint and Confluence, all the knowledge stuff, is still running in people's data centers. There's no reason for that. I've seen statistics that 70, 80% of on-prem SharePoint will move to SharePoint in the cloud, so Microsoft is going to make tons of money on that. Same thing with databases, whether it's SQL Server or Oracle Database: the things you run as a database will move to the cloud, whether that's hosted in Oracle Cloud, or you're running Oracle or MongoDB or DynamoDB on AWS, or SQL Server on Azure. That's going to happen. Then what you're talking about is really the app concept, the applications themselves, the app server. Is the app server going to run on-prem, and how much of it migrates outside? There may be a hybrid piece. For example, Kafka: I may use a PaaS running Kafka as a service, or I may use Elasticsearch for my indexing on AWS or Google Cloud, but I may be running my app locally. So there will be some hybrid, but what I would say is, for every application, 75% of its components will be on the cloud. So even for the on-prem app, you're not going to be 100 percent on-prem. The components, the bill of materials, will move to the cloud, your PaaS, your storage, because if you keep it on-prem you need to buy all of that and hire the people. So from a component perspective, 70% of your bill of materials will move to the cloud, even for an on-prem application. >> So, the SaaS-ification of the industry in the last decade. My three favorite companies of the last decade, and you've worked for two of them: Tableau, ServiceNow, and Splunk. I want to ask you about those, but I'm interested in the potential disruption there. You've got these SaaS companies, Salesforce of course is another one, and they got started back in 1999. What do you see happening with those? We're basically building these sort of large SaaS platforms now. Do you think the cloud-native world, where developers can come at this from a different angle, can disrupt those companies, or are they too entrenched?
I mean, look at ServiceNow. I don't know, an $80 billion market cap, they're bigger than Workday. It's just amazing how much they've grown, and you feel like, okay, nothing can stop them, but there's always disruption in this industry. What are your thoughts on that? >> No, it's a very good point. I think they'll be disrupted. Actually, to your point, ServiceNow is now close to 100 billion, a 95 billion market cap, crazy. So from a valuation perspective, the reason they'll be disrupted is that the SaaS vendors you talked about, ServiceNow and the rest, most of those services are truly not multi-tenant, or what you'd call cloud-native. And that is the essence of it. Because of that, they will not be able to pass the savings back to the enterprises. The cost economics that the cloud provides through multi-tenancy won't be there. The second reason they'll be disrupted is AI. So far we've talked about cloud, but AI is the core. I look at it in two pieces, Dave. AI is going to change things. All the SaaS vendors were created 20 years back, and if you remember, they assumed an operator typing: an IT administrator would type a Splunk query. I don't need a human to type a query anymore, the system will actually find it. That's how the whole security game has changed, right? So if you believe that, AI at the core will disrupt all the SaaS vendors. One angle is the cloud: a SaaS application will be cloudified, because being SaaS is not the same as being cloud, right? The second thing is that SaaS will also be, I call it, AI-fied. AI and machine learning will be driving at the core, so that I don't need that many licenses, I don't need that many humans, I don't need that many administrators to manage it. I call them the tuners. Once you get a driverless car, you don't need a thousand tuners to tune your Tesla or your Google Waymo car. The same philosophy applies to your dev apps, your administrators, your service management people, the people you need for ServiceNow and these products, for Zendesk. AI will tremendously disrupt that. >> So you're saying, okay, so yeah, I was going to ask you, won't the SaaS vendors be able to just inject AI into their platforms? And I guess I'm inferring you're saying yeah, but a lot of the problems they're solving are going to go away because of AI, is that right? And automation and RPA and things of that nature? >> Yes and no. So I'll tell you what, you've asked a very good question, so let me rephrase it. What you're saying is, "Why can't the existing SaaS vendors do the AI?" >> Yes, right. >> Right. And the reason they can't do it is that their pricing model is by number of seats. So I'm not going to come to Dave and say, come on, come pay me less money. It's the same reason Ford and General Motors won't lead with an electric car: they're selling 10 million gasoline cars. There's no incentive for me. I'm not going to come to you and say, hey, buy a hundred fewer licenses from me next year. So that is one reason why, even if these guys do AI, it's going to be just, I call it, whitewash. Kind of like putting a paintbrush on it, trying to show you some AI from a marketing standpoint.
But at the core, if you really implement AI and you take the driver out, how are you going to change the pricing model? And being a public company, you've got to take a hit on the pricing model and the price, and that's going to have a stock impact. So to your earlier question, will somebody disrupt them? Whoever disrupts them will disrupt them on the pricing model. >> Right. So I want to ask you about that, because we saw Snowflake and its IPO, we were able to pore through its S-1, and they have a different pricing model. It's a true cloud consumption model, whereas most SaaS companies will lock you in for at least a one-year term, maybe more, and then you buy the license and you've got to pay X. If you don't use it, you still pay for it. Snowflake's different. Actually they have a different problem: people are using it too much, and the bill is driving the CFO crazy because it keeps going up and up and up. But to me that's the right model, it's just like the Amazon model, if you can justify it. So how do you see pricing, that consumption model? You're seeing some of the on-prem guys, HPE, Dell, doing as-a-service; they're kind of taking a page out of the last decade's SaaS model. Pricing is a real tricky one, isn't it? >> No, you nailed it, you nailed it. Look at how Snowflake disrupted the data warehouse; they disrupted the open source vendors too. Imagine the playbook: you disrupted something priced at $0, right? The open source side with Cloudera, Hortonworks, MapR, that whole big data market, and at the same time they're disrupting data warehouses like Netezza and Teradata while charging more money and making more money, because the pricing model is by consumption. The same thing is going to happen to ServiceNow and Zendesk, because their pricing model is by number of seats. People are going to say, how are my users going to ask for things, right? If you're an employee help desk, back to your original collaboration point, I may be on Slack, I could be on Zoom, I may be on MS Teams, and I'm going to ask from there. A usage model, employees using Slack as the way into ServiceNow, is the pricing model people want to pay for. The more my employees use it, the more value I get. But I don't want to pay by number of seats. So the vendor who figures that out, and if you know me, that's the model I've tried to push, I love that, because that's the core of how you change the game. >> I agree. I say, kill me with that problem. Some people try to make it a criticism, but you hit on the point: if you pay more, it's only because you're getting more value out of it. So I want to flip the switch here a little bit and take a customer angle, something you've been on all sides of. I want to talk a little bit about strategies. You've been a strategist, and I guess once a strategist, always a strategist. How should organizations be thinking about their approach to cloud? Of course it's different for different industries. Back when theCube started, in financial services "cloud" was a four-letter word, and of course the age of the company is going to matter. But what's the framework for figuring out your cloud strategy, to get to your 70% and really take advantage of the economics?
Should I be mono-cloud, multi-cloud, multi-vendor? What would you advise? >> Yeah, I actually call it the tech stack. You and John taught me that, the tech stack, like the LAMP stack. I think a new cloud stack needs to emerge, and the bottom line should be: first of all, anything with storage should be in the cloud. Whether you're in financial services or not, it doesn't matter, there's no way around it. I come from the cybersecurity side, I've seen it: your attackers will more often be insiders than someone coming in through the cloud, so storage has to be in the cloud, and compute along with it. If you really want to use containers and Kubernetes, it has to be in the public cloud; leverage their compute and their databases. If your data gravity is really strong, maybe run it on-prem, maybe have it on a hosted model when it comes to the database, but there you have a choice between hybrid cloud and public cloud. Then on top, when it comes to the app itself, you can run it locally or anywhere, the app and the database. Now, the areas you really want to go after and migrate: look at any enterprise workload that you don't need people to manage. You want your own team to move up in their careers. You don't want a thousand people looking after it, for example IT administrators calling central teams to manage your compute and storage. That workload should move, right? You already saw Siebel move out to Salesforce. We saw collaboration already move out; Zoom is not running locally. You already saw SharePoint and knowledge management move up, with Box, Dropbox, you name it. The next to go are the SaaS workloads, right? Workday and ServiceNow are running in their own environments today, but they will go into the cloud. I bet at some point Zendesk and ServiceNow either put it on the public cloud or they have to create a product on the public cloud. To your point, these public cloud vendors are at a $2 trillion market cap. They're bigger than, I call them, nation-states. >> Yeah. >> So I'm saying, there's a $2 trillion market cap between Amazon and Azure, and I'm not going to compete with them. I want to take this workload and run it there. So all these vendors, and that's what Shantanu from Adobe is pushing, Adobe, Workday, Anaplan, all the SaaS vendors will move onto the public cloud with these providers. Those workloads need to move out, and then you start migrating. But then there's what I call your procurement, and that's where RPA comes in. The other thing we didn't talk about, back to your first question: the next 10 years of cloud will be about RPA. The third piece of cloud is RPA, because if your systems are on-prem, I can't automate them. I have to do a VPN into your house and then try to automate your systems, your procurement, et cetera. So all these RPA vendors are still running on-prem, most of them, whether it's UiPath or Automation Anywhere. The cloud should be where the brain is. That's my octopus analogy: the brain is in the cloud, the tentacles are everywhere, and they manage it. But if my tentacles have to VPN into your house to manage it, I will always have failures.
So if you look at why RPA did not have the growth of Snowflake or the cloud, it's because they're running it on-prem, most of them still are. 80% of RPA revenue is on-prem, running on-prem, and that needs to be cloudified. So AI, RPA and SaaS are the three reasons cloud will take off. >> Awesome, thank you for that. Now I want to flip the switch again. You're an investor, a multi-tool player here. Let's say you're an ecosystem player looking at the landscape as an investor. Of course you've invested in the cloud, because the cloud is where it's at, but you've got to be careful as an ecosystem player to pick a spot that both provides growth and allows you to have a moat. I mean, that's why I'm really curious to see how Snowflake is going to compete, because they're competing with AWS, Microsoft, and Google, unlike Frank when he was at ServiceNow, competing with BMC and with on-prem, and he crushed it. The competitors are much more capable here, but it seems like maybe they've got a moat with multi-cloud and that whole data sharing thing. We'll see. But what about that? Where are the opportunities? Where's the white space? And I know there's a lot of white space, but what's the framework to look at it, from an investor standpoint or even a CEO standpoint, for where you want to place your bets? >> No, very good question. I sit as an investor on the board of many companies, right? So as an investor, if you come to me and say, I want to create a next-generation Docker or a compute company, there's no way anybody's going to invest. We can rule that out. Even if you want to do object storage or block storage, and I've been an investor or board member of so many storage companies, there's no way, as an industry, I'll write a check for compute or storage. If you want to create a next-generation network, a new switch, or restart Juniper or Cisco, there is no way. But if you come and say, I want to create a next-generation VPN for remote working environments where AI is at the core, I'm interested in that, right? Look at how packets are dropped: there's no intelligence in network switching today. The packets come in and get routed; intelligence at the AI level is not built into the network. So if somebody comes in with AI, then what good are all these NVIDIA GPUs, et cetera, if you cannot do wire-speed packet inspection, look at the content and then route the traffic? If I see it's a video packet and you in Boston are on a premium ISP, your packets should load faster. That intelligence hasn't gone in there yet. So there will be disruption in the network, in switching, et cetera. That's still an angle. But when it comes to platform services, remember I was at Pivotal and VMware, Paul Maritz was my boss: platform as a service is a game already won by the cloud guys. >> Right. (indistinct) >> Silicon Valley investors, I don't think you want to invest in PaaS services, right? You might come in with some next-generation database to do some updates, there could be some game there, say a time-series database or a metrics database, there's always some small angle, but the opportunities to go create a brand-new database there are very few. So I'm kind of eliminating all the black spaces, right? >> Yeah.
>> The white space that comes in is at the SaaS level. Now, to your point, if I'm Amazon, I'm going to compete with Snowflake; I have Redshift. So at some point these cloud platforms, I call them aircraft carriers, are not going to stay aircraft carriers, they're going to own the land as well. They're going to move up to the SaaS space. The question is whether they want to create a SaaS service like CRM. They're not going to create a CRM-like service, they may not create a Salesforce or a ServiceNow, but if you're in data warehousing, I can very well see Azure, Google, and AWS creating something to compete with Snowflake. Why would they not? It's so close to their databases and data warehouses, and Amazon already has Redshift. So that's going to be a fight. Same reason, if you look at Netflix: you have Netflix and you have Amazon Prime. Netflix runs on Amazon, but Amazon has Prime. So you have the same model: you have Snowflake, and you'll have Redshift. Both will help each other; there will be, what do you call it, coexistence. But if you really want to invest, you want to invest in SaaS companies. You do not want to be investing in complement players. You don't want to be a feature. >> Yeah, that's great, I appreciate that perspective. And I wonder, so obviously Microsoft plays in SaaS, Google's got G Suite. People often ask Andy Jassy, are you going to move up the stack, are you going to be an application, a SaaS vendor, and you never say never with AWS. But I wonder, and we were talking to Jerry Chen about this years ago on theCube, his angle was that Amazon will play, but they'll play through developers. They'll enable developers, and they'll participate, they'll take their cut. So it's going to be interesting to see how directly Amazon plays, but at some point, for TAM expansion, you've got to play in that space. >> Yeah, I'll give you an example. I got acquired a couple of times by EMC, so I learned a lot from Joe Tucci and Paul Maritz over the years. See what Paul and Joe did over 20 years, and they're very close to Boston, in your area. Joe used to sell storage, but you know what he did? He went and bought the apps to drive it. He bought Legato, he bought Documentum, he bought Captiva, remember how he acquired all these companies as services, and he bought VMware to drive it. So the good angle Microsoft has is: I'm a SaaS player, I have Dynamics, I have CRM, I have SharePoint, I have collaboration, I have Office 365 and MS Teams for users, and then I have the platform in Azure. So I think if I'm Amazon, I've got to own the apps so that I can drive that workload onto my platform. >> Interesting. >> Just going to developers, and I know Jerry Chen, he was my peer at VMware, I don't think just going to developers is enough. That model works in open source, but the open source game is pretty much gone, and not too many companies made money. >> Well- >> Most of those companies are pretty much gone. >> Yeah, you're right. Red Hat's not a bad outcome. But it's very interesting what you're saying there. And so, hey, it's why Oracle wants to have TikTok running on their platform, right? (laughing) It's going to drive that further integration. I wanted to ask you something. You were talking about how you wouldn't invest in storage or compute, but I wonder, and you mentioned some commentary about GPUs.
Of course NVIDIA has been going crazy, but now they're saying, okay, how do we expand our TAM, they make the acquisition of Arm, et cetera. What about this DPU thing, if you follow that, the data processing unit, where they hyper-disaggregate and then reaggregate, as an offload, really to drive data-centric workloads. Have you looked at that at all? >> I did, and I think that's a good angle. Look, I don't know if you remember, but in our careers we have seen this before. I used to be at Silicon Graphics. I saw the first graphics GPU; at that time the GPU was a graphics processing unit. >> Right, yeah, workstations. >> Then came the NPUs, the network processing units, right? There was TCP/IP offload, if you remember, and there were vector processing units. So every once in a while the industry creates a separate unit, a co-processor to the main CPU, because the main CPU is inefficient at that job, and it makes sense. Then Google created TPUs, then we got the new world of NVIDIA GPUs, and now we have DPUs. All of these are good, but what's happening is they're all being driven by machine learning and AI, for the training side. Training sometimes takes so long with these workloads that if you can cut it down, it makes sense. >> Yeah. >> But the question is, these are so specialized in nature that I can't use them for everything. >> Yup. >> Ideally I want the algorithms to be parallelized, I want the training to be parallelized. So having DPUs and GPUs is important, but where I want to see more is on the algorithm side: there should be more investment from NVIDIA and these guys in making the algorithms highly parallelizable. (indistinct) And I think that still has not happened in the industry yet. >> All right, so we're pretty much out of time, but what are you doing these days? Where are you spending your time? Are you still in stealth? Give us a little glimpse. >> Yeah, no, I'm out of stealth. I'm actually the CEO of Aisera now. Aisera, obviously I invested in them, but I'm the CEO. It's funded by Menlo Ventures, Norwest and True, along with Khosla Ventures, and Ram Shriram is a big investor; he's on the board of Google. So with these guys, look, we are going after the collaboration game: how do you automate customer service and support for employees and then for users, right? In this whole game we talked about, Zoom, Slack and MS Teams, that's where I'm spending my time. I want to create the next-generation ServiceNow. >> Fantastic. Muddu, I always love having you on. You pull no punches, you tell it like it is, and you're a great visionary technologist. Thanks so much for coming on theCube and participating in our program. >> Dave, it's always a pleasure speaking with you, sir. Thank you. >> Okay, keep it right there, there's more coming from Cube on Cloud right after this break. (slow music)

Published Date : Nov 6 2020

SUMMARY :

Dave Vellante talks with Muddu Sudhakar about the work-from-home pivot and the explosion of enterprise collaboration, the next decade of cloud adoption and its economies of scale, why seat-based SaaS vendors such as ServiceNow and Zendesk are vulnerable to disruption from AI and consumption-based pricing, how SaaS, AI and RPA will drive the next wave of cloud, where investors should and should not place their bets, the role of GPUs and DPUs in AI training, and Sudhakar's new role as CEO of Aisera.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
1999 | DATE | 0.99+
Amazon | ORGANIZATION | 0.99+
Adobe | ORGANIZATION | 0.99+
$0 | QUANTITY | 0.99+
Jerry Chen | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
Paul | PERSON | 0.99+
Dave | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
John | PERSON | 0.99+
$ 0 | QUANTITY | 0.99+
two | QUANTITY | 0.99+
Netezza | ORGANIZATION | 0.99+
2006 | DATE | 0.99+
Hortonworks | ORGANIZATION | 0.99+
Cisco | ORGANIZATION | 0.99+
35 | QUANTITY | 0.99+
Frank | PERSON | 0.99+
Muddu Sudhakar | PERSON | 0.99+
Ram Shriram | PERSON | 0.99+
95 billion | QUANTITY | 0.99+
Joe | PERSON | 0.99+
2025 | DATE | 0.99+
Webex | ORGANIZATION | 0.99+
Teradata | ORGANIZATION | 0.99+
60 | QUANTITY | 0.99+
Jerry Chan | PERSON | 0.99+
$80 billion | QUANTITY | 0.99+
Boston | LOCATION | 0.99+
Dell | ORGANIZATION | 0.99+
Palo Alto | LOCATION | 0.99+
IBM | ORGANIZATION | 0.99+
Paul Merage | PERSON | 0.99+
NVIDIA | ORGANIZATION | 0.99+
BMC | ORGANIZATION | 0.99+
HP | ORGANIZATION | 0.99+
Norwest | ORGANIZATION | 0.99+
70% | QUANTITY | 0.99+
70 | QUANTITY | 0.99+
first question | QUANTITY | 0.99+
Aisero | ORGANIZATION | 0.99+
Spotify | ORGANIZATION | 0.99+
Shopify | ORGANIZATION | 0.99+
EMC | ORGANIZATION | 0.99+
80% | QUANTITY | 0.99+
VMware | ORGANIZATION | 0.99+
Cloudera | ORGANIZATION | 0.99+
two-piece | QUANTITY | 0.99+
2 trillion | QUANTITY | 0.99+
Andy Jassy | PERSON | 0.99+
100 percent | QUANTITY | 0.99+
Accenture | ORGANIZATION | 0.99+
SAS | ORGANIZATION | 0.99+
15 years | QUANTITY | 0.99+
Muddu | PERSON | 0.99+
Mapper | ORGANIZATION | 0.99+
75% | QUANTITY | 0.99+
100 billion | QUANTITY | 0.99+
Pivotal | ORGANIZATION | 0.99+
$2 trillion | QUANTITY | 0.99+
Okta | ORGANIZATION | 0.99+
Joe Tucci | PERSON | 0.99+
20 years | QUANTITY | 0.99+
next year | DATE | 0.99+
Aisera | ORGANIZATION | 0.99+
Zscaler | ORGANIZATION | 0.99+

Justin Fielder & Karen Openshaw, Zen Internet | Nutanix .NEXT EU 2019


 

>> Live from Copenhagen, Denmark, it's theCube, covering Nutanix .NEXT 2019. Brought to you by Nutanix. >> Welcome back everyone to theCube's live coverage of Nutanix .NEXT. We are here in Copenhagen. I'm your host, Rebecca Knight, along with my co-host Stu Miniman. We're joined by Karen Openshaw, head of engineering at Zen Internet, and Justin Fielder, the CTO at Zen Internet. Thank you both so much; you're first-timers on theCube, so welcome. We're really excited to have you. Why don't you start by telling our viewers a little bit about Zen Internet, who you are and what you're all about. >> Yeah, sure. So Zen is a UK-based managed service provider; we're up near Manchester. We turned over about 76 million pounds this year, which is a great achievement for us, and that's on top of double-digit growth for the last few years, so we're really starting to motor as a business. We employ about 550 people and we have about 150,000 customers split across retail and indirect, so we have a very big channel business; a wholesale business, where we sell our infrastructure that other people then productize and put into solutions for their customers; and then a corporate business, which is where Nutanix really comes in. So we offer managed services in both networking and hosting, plus the value-added services required to make all of that safe and secure, a complete solution for a corporate. >> Great. So, managed service provider, and your company has been around for quite a while, predating when everyone was talking about cloud. Maybe give us an update as to where you really see yourselves fitting. What differentiates your company in the marketplace? >> So I suppose, and Karen can add what her team does, the big difference is that Zen is a very people-first company. Richard Tang, our founder, founded the company nearly 25 years ago, and he has stated publicly that he's never going to sell it. It's a very, very people-orientated company, which of course has great affinity with Nutanix's own people-first values. And fundamentally we believe that we always want to do the right thing for the customer, even if that is difficult. Karen, do you want to say something about how we handle some of those harder conversations with customers? >> Yeah. So we have customers who come to us asking for things that we don't necessarily sell at the time, and we put quite a lot of effort into adapting our products to deliver what they need. Some of those challenging conversations can be about making sure the customer is getting the right product for what they want: understanding what they need, and making sure we can support them not only in taking that product but in getting onto it in the first place. And that's what we use a lot of our Nutanix infrastructure for. >> Good. Can you dig in a little bit? What does Nutanix enable for your business that ultimately has an impact on your end user? >> It's done two things for us. The first is our IT operations. We've been on a journey over the last three or four years, consolidating all our legacy and physical tin onto virtual services, and we've used Nutanix to do that.
So we've consolidated all of our services; we've got about 90-odd percent of all our legacy services on that IT infrastructure now. Operationally it saves us a lot of time, effort and cost, and it's much more reliable as well. But conversely, we also use it for our product offerings. We used to do managed hosting, where a customer would come and give us a spec and we'd go and build a physical server, host it in our data center, host their applications on there and support them with that. We don't really do that anymore; we now use Nutanix as our hosting environment. So we've reduced our environmental footprint, we've reduced the amount of space we need in a data center and the power we put through it, and operating it is easier for us because we can consolidate where the skills sit, both in terms of IT ops and in terms of the infrastructure for the managed services as well. >> One of the things you said, Justin, is that you're a very people-first company, and that really fits in well with the culture at Nutanix. Can you riff on that a little bit and describe what it is to work so closely with a company like Nutanix, and how important it is that your cultures mesh? >> Yeah, sure. Nutanix has been part of Zen for many, many years, and I've watched this industry for 25 years. Nothing stands still, literally nothing stands still, and therefore whatever you thought was a good idea last year is probably now the worst possible idea, because there's some great new idea. I think it's that pace of change. What we've really found with Nutanix is that as they've got to know us and we've got to know them, and they can see that we're starting to take solutions to market that really resonate, they've literally embedded their people in our company. Their systems engineers and account managers come up to our offices, they sit down, they understand our people, they understand where we're trying to go, they understand our propositions. >> And this has been a journey for Nutanix too. Nutanix in the MSP world is not where they started. They started, as Karen just said, the way we first used them: oh my God, I've got a thousand servers, this is just too much hassle to try to segment yourself. It's that hypervisor-of-hypervisors approach; it just makes it easier. But conversely, it's really important that you work out how to take that value proposition to a customer, because if you can't explain it, precisely because it's so easy, how do they know whether it's going to solve their problems? So that's been a fantastic part of it: the Nutanix team really feels like the Zen team, and they say they feel the same. >> So things like: nothing ever goes 100% right, but you always know who to call, and it all works because you've got that personal relationship, and that's really important to us. >> It's more than that. What we found with the Nutanix guys is that they'll help us fix problems that aren't necessarily Nutanix problems as well. That's something we don't get from any of our other suppliers; it's normally "no, that's nothing to do with me, you need to phone someone else, get support on that."
The Nutanix guys will bring in their own experts on that particular problem and support us through it. So that's good. >> That speaks very much to the partnership you're describing; they're not just a supplier of a product to you. When I talk to the customer base, one of the biggest challenges any company has these days is really understanding its application portfolio. What needs to change, what needs to stay the same? Microsoft pushing everybody to Office 365 changed a lot of companies out there. What do I SaaS-ify, what do I put with a managed service provider, what do I just build natively in the public cloud? Can you bring us through what you're seeing in your customer base, and where that interacts with the journey Nutanix is bringing people on? >> Yeah, all of our customers are on a journey, and they need help. They seriously need help, for exactly the reason you've said. This is my job, to understand this stuff; that's what the CTO of an MSP is required to do. The problem is, if you're the CIO of a company that's really good at construction, you can revolutionize construction through the application of IT, particularly during the sales cycle: the ability to do a VR walk-through, all of that sort of really cool stuff. And then you've got a thousand subcontractors that you're trying to manage from an IT perspective, and that juxtaposition is really problematic for a lot of people. So what we've done is say: the first step you can take is just take what you've got and get rid of the management overhead. That's the easiest, simplest, most straightforward thing, and Nutanix's lift-and-shift capability has got that covered. They will go and inspect a workload somewhere else, work out what resources are required for it, pick it up and move it. We've had some fantastic successes with our customers; they're our greatest advocates. They just say, oh my God, it just happened: one day it was over there and the next day it was over here. And then you can start to analyze what's happening, and that's where we can really add value, because this is not as simple as just an application. It's about your security posture, it's about your DR requirements, it's about your appetite for risk versus reward versus cost. And that's really hard to do when you don't have the simple thing you used to have, where that server, that piece of tin, costs $10,000 and you can work it out yourself. So I think the key to all of this is giving tools to the end users, to the CIO in that company and their IT team, so that they can make those choices in collaboration with an MSP like us. And that goes back to what you were saying: when we hit problems, and we might not even know there's a problem before we've hit it, having Nutanix deeply embedded within us is really important. And being able to go back to the customer, because sometimes you actually have to say to the customer, what you're doing isn't going to work in the long term. >> And as you said, you also have to provide the value so that the customer understands what they're actually getting. In terms of a customer's future needs, we are living in this multi-cloud world. How would you describe the customer mindset, and how are you coming in with solutions that work for the customer, and then, on occasion, having to break the news to them: what on earth are you trying to do here? This is not going to work. >> Yeah, we have a few interesting ones. It's sort of, okay, are you going to tell them or am I going to tell them? And I always send Karen. (laughs) I think, and this is where Karen and her engineering teams come in, you try to understand deeply what is actually going on, and why it's not a good idea to do that. And that's the thing: once you explain why, most of the time it's "oh, thank God, finally someone's telling me why what I'm trying to achieve isn't the best way to do it." Because a lot of this is a bit buzzwordy, and people just think they need to do it. I mean, talk about the journey we've been through: just how do we move stuff onto there? That's taken years; it's a huge amount of work. Karen, any lessons learned you could share? >> Are there any that I could repeat here? (laughs) I think one of the biggest challenges is the reskilling of your teams: getting everybody, first of all, to understand this bright new future that you're moving into, and then getting them trained on it. And training is not just going and sitting in a classroom; it's working on the thing, seeing problems occur and understanding how to fix them. That's the biggest challenge we went through. We want our customers not to have to do that, though. We want them to give us their workloads and we'll solve that for them; that's where we want to take it. In the future it's about helping them understand what they can do with cloud. We don't just do private cloud, we do public cloud as well, so we can introduce opportunities and concepts from a public cloud perspective too. AWS is a really good one, and we're looking at other providers as well, so we help customers solve their problems, whatever that problem is. >> One of the things that's so salient about Zen Internet is that it has a really strong culture. You said it's a people-first culture, but it's also a very diverse culture, bringing in multiple perspectives: women in technology, LGBTQ people, people of different races. Can you talk a little bit about what it means to work at a diverse company and how it changes how you think about problems and go about solving them? >> Yeah, it's a good question. I guess, working in the company, we're not as diverse as we'd like to be. We're not where we want to be in terms of balancing the number of women in tech roles in particular, or the diversity overall. But if we give everybody a voice, which is the main thing, then we will see a much wider range of inputs. In developing our teams, high-performing teams, you need that mixture of input, and it's not just about women, by the way. We have networks within Zen, for example, where we try to ensure that diverse people feel included in what we do as a business and have an opportunity to have an input into it. So where does it add value for us? People just think differently when they're from different cultural backgrounds, different nationalities, different races, different sexualities, different genders; they've all got different life experiences. So solving problems is probably the main benefit you get from that, and this industry is full of people trying to solve problems. And bringing in diverse teams is not just about women in tech, though we saw three women speaking in the keynote this morning, which was fantastic to see; it's about diversity as a whole. Innovation is the key there, I guess. >> And I think it's not just about your staff. If you've got the ability to think differently, that applies right across the entire ecosystem; you can take a different view. We work very closely with the TM Forum, because that's our industry and it's about how you approach the whole application stack, and the TM Forum has done some fantastic research that now proves the output is different if you have a diverse input. And that really matters for our customers, because then we're different. We're not one of the big guys; we're not BT, we're not Deutsche Telekom, we're not one of those. We think differently, we act differently, we behave differently, we have a different approach, and we're people-first. That doesn't mean we're just here for a good, fun time. We're here to drive this business forward, to generate profitability that we can reinvest in the business to enable us to get on to bigger and greater things. We've got a five-year plan which will see us at least double revenues quite comfortably, and we're very confident now that we can execute it, assuming we can get that diversity into the business. And it's a huge challenge: how do you reach out to those people, how do you use the right language, how do you overcome unconscious bias? That's a massive thing, and again, Nutanix just resonates with us. Even some of the little stickers around here are diverse, with different representations of people, and it shows that someone has thought about it, and that resonates. And it's the classic thing: do something wrong once and people remember it forever; do a hundred things right and people won't even notice. That's the type of approach. So for us it's a really exciting area, and it's something the entire executive at Zen is absolutely focused on getting right, because we know it will secure our future. >> It'll make all the difference. Great. Justin and Karen, thank you so much for coming on theCube. >> That's great, thank you. >> I'm Rebecca Knight, for Stu Miniman. Stay tuned for more of theCube's live coverage of .NEXT from Copenhagen.

Published Date : Oct 9 2019

SUMMARY :

Rebecca Knight and Stu Miniman talk with Justin Fielder and Karen Openshaw of Zen Internet about the company's growth as a UK managed service provider, its people-first culture and affinity with Nutanix, how Nutanix underpins both its internal IT operations and its hosting products, helping customers migrate and modernize their application portfolios, and the importance of diversity and inclusion in building high-performing teams.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Karen Openshaw | PERSON | 0.99+
Rebecca Knight | PERSON | 0.99+
Karen | PERSON | 0.99+
Justin | PERSON | 0.99+
Nutanix | ORGANIZATION | 0.99+
Justin Fielder | PERSON | 0.99+
Copenhagen | LOCATION | 0.99+
Richard Tang | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Manchester | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
UK | LOCATION | 0.99+
25 years | QUANTITY | 0.99+
Deutsche Telekom | ORGANIZATION | 0.99+
$10,000 | QUANTITY | 0.99+
Israel | LOCATION | 0.99+
One | QUANTITY | 0.99+
Stu Miniman | PERSON | 0.99+
100% | QUANTITY | 0.99+
Zen | ORGANIZATION | 0.99+
five year | QUANTITY | 0.99+
last year | DATE | 0.99+
today | DATE | 0.99+
Copenhagen, Denmark | LOCATION | 0.99+
both | QUANTITY | 0.99+
first step | QUANTITY | 0.99+
about 150,000 customers | QUANTITY | 0.99+
first | QUANTITY | 0.99+
two things | QUANTITY | 0.99+
about 550 people | QUANTITY | 0.98+
Justin fielder | PERSON | 0.98+
about 76 million pounds | QUANTITY | 0.98+
three women | QUANTITY | 0.98+
four years | QUANTITY | 0.98+
10 | QUANTITY | 0.98+
one | QUANTITY | 0.97+
BT | ORGANIZATION | 0.97+
this year | DATE | 0.97+
next day | DATE | 0.97+
Zen Internet | ORGANIZATION | 0.96+
first company | QUANTITY | 0.92+
first culture | QUANTITY | 0.92+
2019 | DATE | 0.9+
six | QUANTITY | 0.9+
one 50 years | QUANTITY | 0.88+
this morning | DATE | 0.82+
one day | QUANTITY | 0.78+
double | QUANTITY | 0.78+
thousand servers | QUANTITY | 0.77+
25 years ago | DATE | 0.75+
hundred things | QUANTITY | 0.75+
three | QUANTITY | 0.72+
first timers | QUANTITY | 0.72+
last few years | DATE | 0.71+
Zen internet | ORGANIZATION | 0.71+

Payal Singh, F5 | AnsibleFest 2019


 

>> Live from Atlanta, Georgia, it's theCube, covering AnsibleFest 2019. Brought to you by Red Hat. >> Welcome back. This is theCube's live coverage of AnsibleFest 2019 here in Atlanta, Georgia. I'm Stu Miniman, my co-host is John Furrier, and happy to welcome to the program first-time guest Payal Singh, who's a principal solutions engineer with F5. Of course, F5 is a partner of Ansible; in the keynote this morning, when they were laying out how to use all of these pieces, it was "oh, I need a load balancer, great, here's F5 to the rescue." So tell us a little bit about your role inside F5 and F5's activities here at the show. >> Sure. Thank you for the introduction. My name is Payal, I'm a principal solutions engineer, and I work a lot with different alliance partners, Ansible being one of them. I develop technical, integrated joint solutions with Ansible. We've had a great working relationship with Ansible, they've been absolutely wonderful to work with, and at this summit we have various activities: we had a workshop at the contributor summit, we had a session yesterday, and we have another workshop on Thursday. So we're really busy, the booth has been flowing, and so far it's been an awesome experience. >> The people at the show here really dig into what they're doing. Even on the bus ride to the party last night, people were talking about their configurations, and at lunchtime everybody is talking about it. Bring us inside a little bit: is it the new collections people are asking you about? Are there other deployment questions? What are some of the things bringing people over to talk to you? >> People are talking across a broad spectrum. Some are just starting out with Ansible; they just want to know, how do I write a playbook with the F5 modules and get it running? Others are a little more advanced: let's get into roles, what are we doing with roles? And now collections is top of mind: how are you doing with collections? So of course we are in lockstep. We have our first collections out, and we're going to bundle playbooks and a lot of workflows and roles, so it will be easy for customers to just download these workflows, use them out of the box and get started with F5. But we've had different use cases and different questions, around day-zero deployment, ongoing management, monitoring, backup and restore, all sorts of questions. >> One of the things that's come up is, you know, hit the low-hanging fruit and then go to the more advanced workflows and the bigger opportunities. But we've been talking about DevOps for 10 years, and this to me has always been the area that's been ripe for DevOps: configuration management and a lot of the plumbing. Now, 10 years later, we're starting to see this glue layer, this integration layer, come out, and the ecosystem of partners is growing very rapidly for Ansible. So there's been a very nice evolution, and this is a nice add-on to a great community and great customers for these guys. What's the integration like as you work with Ansible? Because as more people come on and share and connect in, what's it take? What are some of the challenges?
What are some of the things that you guys, or your partners, need to do with Ansible? >> Right. So contributing has been a little slow, I would say, because first they've got to learn Ansible, they've got to learn what Ansible Galaxy is and how to work with it, and then there's the networking piece: how do I now make it work with F5? Is this role good enough, should I be contributing or not? So we're working closely with net ops engineers as well as developers to say: whatever you think is good work is good enough to go up there. Get your role uploaded on Galaxy and show us what you're doing. It doesn't have to be the best, just get it out there. So we have a lot of workshops, and we also have training at F5 called Super-NetOps, which targets exactly that audience of network engineers. We're trying to educate people so that everybody is on board with us. >> One of the conversations we've been having a lot this week has been about the collaboration between teams, and historically that's been a challenge for networking. It's "alright, networking, go sit in the corner, tell me what you need. Oh wait, you need those things changed? Nope, I'm not going to do it for you," or "okay, get me a budget and in 12 months we'll get back to you." So how are things changing? Are they changing enough in your customers' environments? >> That's a good question. It is changing, but it's changing slowly. There are still a lot of silos: the net ops guys are doing their stuff, the DevOps guys are doing theirs. But automation is bringing them together, because the network engineers have their domain expertise and the DevOps folks have theirs. We're able to get them in the same room, because we talk F5 and then we talk automation, and then they connect; they're like, "oh, you guys are doing what we've already done." So it's happening, but it's slow, that net ops to DevOps shift. >> Sure. We've covered a lot of events and we've talked about programmable infrastructure, infrastructure as code. It gets interesting when you get into the networking side, because when you can program things, there's nice headroom for enterprises as their apps start to move toward microservices. What's your take on the programmability of networking? How do you guys see that? >> Programmability in the networking space is catching up. F5 as a company started with just REST API calls; now we've moved to Ansible, and F5 is also coming out with a declarative API. We have the F5 automation toolchain, where we're abstracting more and more of what the user needs to know about the device while still letting them configure it really easily. So we're definitely moving towards that, and I see other networking vendors also moving towards that programmability, for sure. >> Do you have any specific customer stories you might be able to share? We understand you might not be able to give the name of the company, but it always helps to illustrate. >> Yeah, sure, definitely. We had one customer who had an older load balancer from a different vendor, and they wanted to migrate over to F5.
So they had a lot of firewall rules and a lot of policies they wanted to move over, and they used to have maintenance windows where they would move one application at a time. Then they came across Ansible and started using it, and they were able to migrate something like 5 to 10 applications per maintenance window, and they loved it. They've been using Ansible ever since, and they've been great at giving feedback on what should go into our modules, really helping and guiding us on what they need. So they were a great customer story. Another customer: we get a lot of use cases for F5 where people want to be able to change an application or the network without incurring any downtime, failovers. It could be as simple as moving workloads between data centers. What this company wanted was to fail over between data centers. They got into Ansible, and they were able to do it in minutes versus hours, and they loved it. >> As an engineer, you think about the data center and the cloud; we get that, it's been around and it keeps getting better. As 5G and IoT edge come into the picture, how do routing and networking work with compute and edge devices? Does the edge start to become an opportunity for this kind of automation? The surface area of the network gets larger, the edge really becomes part of the equation, and there's a great need for automation and for observability, a super hot area with microservices. So automation has nice room to expand. What are your thoughts on that future state beyond the data center? >> So beyond the data center, F5 is in the different clouds, right? We're in AWS, Azure, GCP, we're out there. We've also recently brought NGINX into the family; NGINX has become a part of F5. So we're out there, and definitely with IoT and the boom of applications, we don't want to be a hindrance to anyone who's trying to automate applications anywhere. Our goal is for F5 to be everywhere and anywhere, securing apps and making them available. >> And security is a big driver of automation. I'm glad you brought up NGINX. We've been very familiar with seeing NGINX at a lot of the cloud shows. How is NGINX changing the conversation you're having with customers? >> We're having a lot of conversations with DevOps engineers about NGINX. Some of them are already using it in their day-to-day activity, and they want to see how F5 and NGINX are going to come together and what kind of solutions we can offer. F5 is working on that strategy, but there is definitely a link between us and NGINX, and customers are happy to know that we're now on the same path. So whether they're in the cloud or on-prem, they can choose which one they want, but they're going to get the same support and backing from F5. >> Great. We're getting towards the end of AnsibleFest. Give us some of the key takeaways you want people to have about F5 here at the show. >> Sure. If you haven't started automating F5 with Ansible, my key takeaway is: get started. It's really simple. We have sessions, we have workshops on this,
They look that up a great resource for us. It's just answerable dot com slash five. We have great resources. Um, are answerable. Models are supported, were certified by that had answerable. So, you know, just dive in and start automating >>pale, saying Thank you so much for the update. Really appreciate it. And congratulations on the progress. >>Thank you so much. >>for John, for your arms to minimum, getting towards the end of two days water wall coverage here. Thanks, as always for watching the Cube.
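As a rough sketch of the declarative approach described in this segment — pushing an application declaration to a BIG-IP through the AS3 interface of F5's Automation Toolchain over iControl REST — something like the following could work. The hostname, credentials, addresses, and the exact endpoint path and schema fields here are assumptions to be checked against F5's AS3 documentation; the certified Ansible modules and Galaxy roles mentioned above wrap this same kind of call in playbook tasks.

```python
# Sketch only: push a declarative application config to a BIG-IP via the
# AS3 endpoint over iControl REST. Host, credentials, addresses, and the
# schema details below are illustrative assumptions, not a validated config.
import requests

BIGIP = "https://bigip.example.com"   # hypothetical device
AUTH = ("admin", "change-me")         # hypothetical credentials

declaration = {
    "class": "AS3",
    "action": "deploy",
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.0.0",
        "id": "demo-app",
        "Demo_Tenant": {
            "class": "Tenant",
            "web_app": {
                "class": "Application",
                "template": "http",
                "serviceMain": {
                    "class": "Service_HTTP",
                    "virtualAddresses": ["10.0.1.10"],
                    "pool": "web_pool",
                },
                "web_pool": {
                    "class": "Pool",
                    "monitors": ["http"],
                    "members": [{
                        "servicePort": 80,
                        "serverAddresses": ["10.0.2.11", "10.0.2.12"],
                    }],
                },
            },
        },
    },
}

resp = requests.post(
    f"{BIGIP}/mgmt/shared/appsvcs/declare",  # AS3 endpoint; verify for your version
    json=declaration,
    auth=AUTH,
    verify=False,  # lab convenience only; use proper certificates in production
)
resp.raise_for_status()
print(resp.json())
```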

Published Date : Sep 25 2019


Luke Behnke, Zendesk | PagerDuty Summit 2019


 

>>From San Francisco. It's the cube covering PagerDuty summit 2019 brought to you by PagerDuty. >>Hey, welcome back everybody. Jeff, Rick here with the queue. We're at PagerDuty. Simon in downtown San Francisco at the Western st Francis. I think we've just about busted the seams in this beautiful old hotel. Thousand people. Fourth conference. We're excited to be here. And the big announcement today is around, you know, PagerDuty getting closer to the revenue, getting closer to the customer, getting beyond just break fix and incident response. And a huge partner. Big announcement of that was Zen desk. So we're happy to have today from Zendesk. Luke Benkei, the VP of product. Lou, great to see you. Yeah. Hey Jeff, thanks for being here. Thanks for absolutely. So before we get into the announcements and some of this stuff with the, with PagerDuty, give us kind of an update on Zendesk. We're all happy to see as Zen desk email in our inbox have been, someone's working on are working on my customer service issue. >>But you guys are a lot more than that. We are, yeah. Thanks for asking. Yes. So Zendesk started in, you know, it as a great solution for customer support and solving customer support issues. And we've really expanded recently to think more about the overall customer experience. Uh, and so that means, you know, launching more channels where customers can reach out beyond just emails and tickets to live chat and messaging and really rich experiences to communicate with your customers. But it also means, uh, you know, getting into the sales automation world and kind of helping sales and success work together, uh, on the whole customer experience and the customer life cycle. And underneath all of it, uh, our new sunshine platforms and as sunshine, it's a CRM platform that allows you to bring in a ton of information about the customer. You know, the, the products that customer owns. >>Um, you know, how they, how you've done business with them across all the different systems you have, right, that you do business with. Some, most companies we talked to have hundreds of different systems that store a little bit of information about the customer elusive three 60 degree. I mean, the single view of the customer. You know, I talked to a customer recently that said, Oh, I have 12 CRMs. Like are you going to be my 13th? And we said, no. You had to bring the right bits of information into Zandesk in order to make the right kind of actions that you want to take on behalf of that customer. Whether it's routing them to the right agent at the right time, whether that's making sure this is a VIP customer that has a, a hot deal with your sales team and you want to alert the sales rep if there's an incident that's affecting that customer open right now. >>Or maybe you want to have a bot experience that really solves a lot of the customer, uh, pain with knowing who that customer is, what products they own, et cetera. Right? So, right. That's really been what we've been trying to do with sunshine is, is move beyond just customer support into, uh, a full blown CRM solution. The one, you know, one place where a lot of your customer information can live to deliver that experience. Okay. So then we've got PagerDuty. So PagerDuty is keeping track of have more incidents, not necessarily customer problems per se, but system system incidents and website incidents and all these. How does that system of record interface with your system of record to get a one plus one makes three? That's it. 
I mean, so you know, if PagerDuty is the source of truth where your dev ops team and your developers and your product team are when there's an incident, you know, I've been part of this, uh, unfortunately we've, you know, if we have an incident at Zendesk, I'm, I'm in there as well kind of understanding what's happening, you know. >>But what's really missing there is that customer context and who's affected, you know, and even as good as our monitoring might be, sometimes customers tell us they're having problems, uh, or, or the extent of the problems they're having before we've fully been able to dig into it. Right? And so taking those two systems, the incident management portal and the customer record on the customer communications portal and bringing those two together, you know, it's better for the dev ops teams. They can learn. Like maybe we're getting some insight from the field about exactly who's affected and it's great for the customer support team because they don't have to sit there and tapping the, the engineer on the shoulder like have you fixed it yet? Right. What's the latest? Right. They can write within Zendesk with the new integration that that the PagerDuty Zendesk integration that we are, that we announced today, right. >>Within Zendesk, you know, reps can see a support, reps can see exactly what's happening in, in pretty close to real time with that incident so that they can keep customers proactively up to date. You know, before the customer reaches out, I have a problem, you know, they can say, Hey, here's the latest, you know, we're working on it. We estimate a fixed in this amount of time. Okay, now we've launched a fix. You should start to see things coming back up. Right. Okay. That that's a one plus one equals three. Okay. This is a two way communication. It's a two way writing. Yeah. I'm just curious, how does it, how does it get mapped? How does this particular Zen desk issue that I just sent it a note that I'm having a problem get mapped to, you know, this particular incident that's being tracked in PagerDuty. >>We got, you know, a power outage at a, at a distribution center right place. How do I know those two are related? So it's a, it's a two way integration, right? So it's installed both into the PagerDuty console as well as into Zendesk support where your agents are. And so, uh, you can create a really, it's all about the incident number and so you can create that out of, out of PagerDuty and then start attaching tickets, uh, as they come in to that incident or a customer's. Our rep could create an incident in PagerDuty, right through Zendesk. And so, you know, you're really working off of that same information about that incident number and then you're able to start attaching customers and tickets and other information that your customer support rep has to that incident number. And then you're all working off the same, you know, the same playbook and you're all understanding in real time if, if the developers are updating what's happening, the latest, the latest on it, you can sort of see that right in Zendesk and it's all based on that, that incident. >>So that's gotta be a completely different set of data and or you know, kind of power that the customer service agent has with this whole new kind of dead data set of potential if not root causes, at least known symptoms. Yeah, exactly. That's right. 
I mean, you know, part of our job on the product team at Zendesk is to sit with real customers and watch them shadow agents, watch them do their job every day and it's an ma even sometimes I log in and actually field tickets myself for Zendesk and it's an incredible experience to sit there and you log in and customers just start reaching out to you and they want answers, they want information. And you know, we've, we deliver a lot of automation and, and products like that, but still it's up to that customer support rep to quickly get back to that customer. >>And so to have some data right in front of them, Oh, it looks like this customer uses a certain product, that product is affected by this outage. Right. To be able to immediately have that customer support rep kind of alerted there is an outage. It might be effecting this customer, here's the latest information I can give that customer, you know, that's just less back and forth and round trips that they have to do to solve that customer's problem. Right. You know, as customers ourselves, we don't want that. We don't want to have to sit and wait or do they even know my tickets open? Do they have an update for me? I've been waiting 20 minutes, you know, to cut that down to give the agents context, it's, it's huge. It really helps them do their job. And of course the Holy grail is to not be reactive, to wait for the ticket, but to get predictive and even prescriptive. >>That's it. So where's that kind of in terms of, of your roadmap, how close are we to know adding things where we can get ahead, you can get ahead of the clients can get ahead of we see this coming down the road, let's get ahead and nip it in the bud before it even becomes a problem. Yeah, I mean, you know, we all are accustomed to whatever the last great experience we had with a company that suddenly just becomes what we expect next. And I think a big trend we're seeing in the last year or two is really customers want to get more proactive. And so we launched the Zendesk sunshine platform, which is all about bringing more of that data in. And the vision there then is really being able, which a lot of our customers are doing today. You know, they're able to say, I know which customers are using a certain product and when that product has an issue, send a proactive ticket. >>You know, before they even reach out to you were aware of an issue. You might be seeing these symptoms, here's some troubleshooting advice and here's our latest update and we'll keep this ticket up to date. We'll keep this conversation up to date as we learn more. You know, customers are already doing that was NS, but you're exactly right. That is more and more customers are trying to get there because it's becoming expected. You know, customers don't want to have to uh, log in and find that something's down and then try to troubleshoot unplugged re, you know, figure out, maybe it's me, maybe it's them. They want to know, okay, I get it. I can now plan around that. Maybe I'll go have my agents go work on a different, um, you know, updating some knowledge content or maybe put them on a different channel for a little bit or move people around depending on what's happening in the business. >>You know, the other thing that came up in the keynote that I think it's pretty saying that I don't know that people are thinking about is that there's more people that need to know what's going on than just the people tasked with fixing the problem. 
Whether it's account reps, whether it's senior executives, whether it's the PR team, you know, depending on the incident, there's a lot of people that aren't directly involved in fixing the incident that's still need that information and that seems like a super valuable asset to go beyond the ticket to a much broader kind of communication of the issue. As we actually, as we started to work, uh, with PagerDuty on expanding this integration with Zendesk and PagerDuty, we were talking to their team and we both have the same mantra, which is that the customer experience, it's a team sport. You know, it's not just the developers who are trying to fix the problem on behalf of the customers and it's not just your front line customer support reps who are fielding all those inquiries, right? >>It's everybody's job. In the end, as you said, the sales rep wants to know what's happening with my top accounts. Do I need to get in touch with them? Do I need to put in a phone call? Uh, you know, do I need to alert other teams? Maybe we should stop the marketing campaign that we were about to send. Cause the last thing you want is a buy more stuff, email when the site is down right now. So let's really start to think about this as a team sport. And I think this integration is a really great, uh, you know, how customer support and product and dev ops and engineering can kind of work together to deliver a better customer experience. It's, it's, so, it's, so Kate, you know, kind of multifaceted, so many things that need to happen based on that. Really seeing that single service call, that single transaction. >>Awesome. Well Luke, thanks for uh, for sharing the story and yeah, it's great to hear the Zendesk is still doing well. We are like, I like Zen desk emails like, yeah, I know. The next thing that we'll do is I will start to solve your problem before you even have to get us on that split up. Like we'll be working on your behalf even when you're not getting it. Okay. So Luke, thanks. Thanks Jeff. Appreciate it. See ya. Alright, he's Luke. I'm Jeff. You're watching the cube where PagerDuty summit in downtown San Francisco. Thanks for watching. We'll see you next time.
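A minimal sketch of the kind of two-way linking described in this interview — carrying a PagerDuty incident number and status onto a Zendesk ticket — using both products' public REST APIs rather than the packaged integration itself. The tokens, subdomain, ticket ID, and incident ID are placeholders, and the field names reflect the public API documentation as best recalled, so treat this as an illustration.

```python
# Sketch: surface a PagerDuty incident's status on a Zendesk ticket via the
# two public REST APIs (not the packaged integration). All tokens, IDs, and
# the subdomain are placeholders.
import requests

PD_TOKEN = "pagerduty-api-token"                    # placeholder
ZD_SUBDOMAIN = "example"                            # example.zendesk.com
ZD_AUTH = ("agent@example.com/token", "zendesk-api-token")


def get_pd_incident(incident_id):
    """Fetch one incident so an agent can see its current status."""
    resp = requests.get(
        f"https://api.pagerduty.com/incidents/{incident_id}",
        headers={"Authorization": f"Token token={PD_TOKEN}",
                 "Content-Type": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["incident"]


def attach_incident_to_ticket(ticket_id, incident):
    """Record the incident number and status on the ticket as a private note."""
    note = (f"Linked PagerDuty incident #{incident['incident_number']} "
            f"({incident['status']}): {incident['title']}")
    resp = requests.put(
        f"https://{ZD_SUBDOMAIN}.zendesk.com/api/v2/tickets/{ticket_id}.json",
        json={"ticket": {"comment": {"body": note, "public": False}}},
        auth=ZD_AUTH,
    )
    resp.raise_for_status()


incident = get_pd_incident("PABC123")    # placeholder incident ID
attach_incident_to_ticket(42, incident)  # placeholder ticket ID
```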

Published Date : Sep 24 2019


Alex Solomon, PagerDuty | PagerDuty Summit 2019


 

>>From San Francisco. It's the cube covering PagerDuty summit 2019 brought to you by PagerDuty. >>Hey, welcome back everybody. Jeff Frick here with the Q. We're a PagerDuty summit. It's the fourth year of the show. He's been here for three years. It's amazing to watch it grow. I think it's finally outgrown the Western Saint Francis here in lovely downtown San Francisco and we're really excited to be joined by our next guest. He's Alex Solomon, the co founder, co founder and CTO PagerDuty. Been at this over 10 years. Alex, first off, congratulations. And what a fantastic event. Thank you very much and thank you for having me on your show. So things have changed a lot since we had you on a year ago, this little thing called an IPO. So I'm just curious, you know, we have a lot of entrepreneurs. I watch a show as a founder and kind of go through this whole journey. What was that like? What are some of the things you'd like to share from that whole experience? >>Yeah, it was, it was incredible. I I, the word I like to use is surreal. Like just kind of going through it, not believing that it's real in a way. And adjoining by my, my lovely wife who came, came along for this festivities and just being able to celebrate that moment. I know it is just a moment in time and it's not, it's not the end of the journey certainly, but it is a big milestone for us and uh, being able to celebrate. We invited a lot of our customers, our early customers have been with us for years to join us in that, a celebration. Our investors who have believed in us from back in 2010. Right, right. We were just getting going and we just, we just had a great time. I love it. I love 10 year overnight success. 10 years in the making. >>One of my favorite expressions, and it was actually interesting when Jenn pulled up some of the statistics around kind of what the internet was, what the volume of traffic was, what the complexity in the systems are. And it's really changed a lot since you guys began this journey 10 years ago. Oh, it has. I mean back then, like the most popular monitoring tool is Nagios and new Relic was around but just barely. And now it's like Datadog has kind of taken over the world and the world has changed. We're talking about not just a microservices by containers and serverless and the cloud basically. Right. That's the kind of recurring theme that's changed over the last 10 years. But you guys made some early bets. You made bets on cloud. He made bets on dev ops. He made bets on automation. Yeah, those were pretty good. >>Uh, those, those turn out to be pretty good places to put your chips. Oh yeah. Right place, right time and um, you know, some, some experiential stuff and some just some raw luck. Right. All right, well let's get into it. On top of some of the product announcements that are happening today, what are some of the things you're excited to finally get to showcase to the world? Yeah, so one of the big ones is, uh, related to our event intelligence release. Uh, we launched the product last year, um, a few months before summit and this year we're making a big upgrade and we're announcing a big upgrade to the product where we have, uh, related incidents. So if you're debugging a problem and you have an incident that you're looking at, the question you're gonna ask is, uh, is it just my service or is there a bigger widespread problem happening at the same time? >>So we'll show you that very quickly. 
We'll show you are there other teams, uh, impacted by the same issue and we'll, we, we actually leveraged machine learning to draw those relationships between ongoing incidents. Right. I want to unpack a little bit kind of how you play with all these other tools. We, you know, we're just at Sumo logic a week or so ago. They're going to be on later their partner and people T I think it's confusing. There's like all these different types of tools. And do you guys partner with them all? I mean, the integration lists that you guys have built. Um, I wrote it down in service now. It's Splunk, it's Zendesk, it goes on and on. And on. Yeah. So explain to folks, how does the PagerDuty piece work within all these other systems? Sure. So, um, I would say we're really strong in terms of integrating with monitoring tools. >>So any sort of tool that's monitoring something and we'll admit an alert, uh, when something goes down or over an event when something's changed, we integrate and we have a very wide set of coverage with all, all of those tools. I think your like Datadog, uh, app dynamics, new Relic, even old school Nagios. Right. Um, and then we've also built a suite of integrations around all the ticketing systems out there. So service now a JIRA, JIRA service desk, um, a remedy as well. Uh, we also now have built a suite of integrations around the customer support side of things. So there'll be Zendesk and Salesforce. That's interesting. Jen. Megan had a good example in the keynote and kind of in this multi system world, you know, where's the system of record? Cause he used to be, you want it, everybody wanted it to be that system of record. >>They wanted to be the single player in the class. But it turns out that's not really the answer. There's different places for different solutions to add value within the journey within those other applications. Yeah, absolutely. I, I think the single pane of glass vision is something that a lot of companies have been chasing, but it's, it's, it's really hard to do because like for example, NewRelic, they started an APM and they got really good at that and that's kind of their specialty. Datadog's really good at metrics and they're all trying to converge and do everything and become the one monitoring solution to the Rooney rule them all right. But they're still the strongest in one area. Like Splunk for logs, new Relic and AppDynamics for APM and Datadog for metrics. And, um, I don't know where the world's going to take us. Like, are they, is there going to be one single monitoring tool or are, are you going to use four or five different tools? >>Right. My best guess is your, we're going to live in a world where you're still gonna use multiple tools. They each can do something really well, but it's about the integration. It's about building, bringing all that data together, right? That's from early days. We've called pager duty, the Switzerland monitoring, cause we're friends with everyone when we're partners with everyone and we sit on top right a work with all of these different roles. I thought her example, she gave him the keynote was pretty, it's kind of illustrative to me. She's talking about, you know, say your cables down and you know, you call Comcast and it's a Zendesk ticket. But >>you know then that integrates potentially with the PagerDuty piece that says, Hey we're, you know, we're working on a problem, you know, a backhoe clipped the cable down your street. 
And so to take kind of that triage and fix information and still pump that through to the Zen desk person who's engaging with the customer to actually give them a lot more information. So the two are different tracks, but they're really complimentary. >>Absolutely. And that's part of the incident life cycle is, is letting your customers know and helping them through customer support so that the support reps understand what's going on with the systems and can have an intelligent conversation with the customer. So that they're not surprised like a customer calls and says you're down. Oh, good to know. No, you want to know about that urge, which I think most people find out. Oh yeah. Another thing >>that struck me was this, this study that you guys have put together about unplanned work, the human impact of always on world. And you know, we talk a lot in tech about unplanned maintenance and unplanned downtime of machines, whether it's a, a computer or a military jet, you know, unplanned maintenance as a really destructive thing. I don't think I've ever heard anyone frame it for people and really to think about kind of the unplanned work that gets caused by an alert and notification that is so disruptive. And I thought that was a really interesting way to frame the problem and thinking of it from an employee centric point of view to, to reduce the nastiness of unplanned work. >>Absolutely. And that's, that V is very related to that journey of going from being reactive and just reacting to these situations to becoming proactive and being able to predict and, and, uh, address things before they impact the customer. Uh, I would say it's anywhere between 20 to 40 or even higher percent of your time. Maybe looking at software engineers is spent on the some plant work. So what you want to do is you want to minimize that. You want to make sure that, uh, there's a lot of automation in the process that you know what's going on, that you have visibility and that the easy things, the, the repetitive things are easy to automate and the system could just do it for you so that you, you focus on innovating and not on fixing fires. Right? Or if you did to fix the fire, you at least >>to get the fire to the right person who's got the right tone to fix the type. So why don't we just, you know, we see that all the time in incidents, especially at early days for triage. You know, what's happening? Who did it, you know, who's the right people to work on this problem. And you guys are putting a lot of the effort into AI and modeling and your 10 years of data history to get ahead of the curve in assigning that alerted that triage when it comes across the, the, uh, the trans though. >>Yeah, absolutely. And that's, that's another issues. Uh, not having the right ownership, get it, getting people, um, notified when they don't own it and there's nothing they can do about it. Like the old ways of, of uh, sending the alert to everyone and having a hundred people on a call bridge that just doesn't work anymore because they're just sitting there and they're not going to be productive the next day I work cause they're sitting there all night just kind of waiting for, for something to happen. And uh, that's kinda the, the old way of lack of ownership just blasted out to everyone and we have to be a lot more target and understand who owns what and what's, what, which systems are being impacted and they only let getting the right people on the auto call as quickly as possible. 
The other thing that came up, which I thought, you know, probably a lot of people are thinking of, they only think of the fixing guy that has to wear the pager. >>Sure. But there's a whole lot of other people that might need to be informed, be informed. We talked about in the Comcast example that people interacting with the customer, ABC senior executives need to be in for maybe people that are, you know, on the hook for the SLA on some of the softer things. So the assembly that team goes in need, who needs to know what goes well beyond just the two or three people that are the fixing people? Right. And that's, that's actually tied to one of our announcements, uh, at summit a business, our business response product. So it's all about, um, yes, we notify the people who are on call and are responsible for fixing the problem. You know, the hands on keyboard folks, the technical folks. But we've expanded our workflow solution to also Lupin stakeholders. So think like executives, business owners, people who, um, maybe they run a division but they're not going to go on call to fix the problem themselves, but they need to know what's going on. >>They need to know what the impact is. They need to know is there a revenue impact? Is there a customer impact? Is there a reputational brand impact to, to the business they're running. Which is another thing you guys have brought up, which is so important. It's not just about fixing, fixing the stuck server, it's, it is what is the brand impact, what is the business impact is a much broader conversation, which is interesting to pull it out of just the, just the poor guy in the pager waiting for it to buzz versus now the whole company really being engaged to what's going on. Absolutely. Like connecting the technical, what's happening with the technical services and, and uh, infrastructure to what is the, the impact on the business if something goes wrong. And how much, like are you actually losing revenue? There's certain businesses like e-commerce where you could actually measure your revenue loss on a per minute or per five minute basis. >>Right. And pretty important. Yeah. All right Alex. So you talked about the IPO is a milestone. It's, it's fading, it's fading in the rear view mirror. Now you're on the 90 day shot clock. So right. You gotta keep moving forward. So as you look forward now for your CTO role, what are some of your priorities over the next year or so that you kind of want to drive this shit? Absolutely. So, um, I think just focusing on making the system smarter and make it, uh, so that you can get to that predictive Holy grail where we can know that you're going to have a big incident before it impacts our customers. So you can actually prevent it and get ahead of it based on the leading indicators. So if we've seen this pattern before and last time it causes like an hour of downtime, let's try to catch it early this time and so that you can address it before it impacts for customers. >>So that's one big area of investment for us. And the other one I would say is more on the, uh, the realtime work outside of managing software systems. So, uh, security, customer support. There's all of these other use cases where people need to know, like, signals are, are being generated by machines. People need to know what's going on with those signals. And you want to be proactive and preventative around there. Like think a, a factory with lots and lots of sensors. You don't want to be surprised by something breaking. 
You want to like get proactive about the maintenance of those systems. If they don't have that, uh, you know, like say a multi-day outage in a factory, it can cost maybe millions of dollars. Right. >>All right, well, Alex, thanks a lot. Again, congratulations on the journey. We, uh, we're enjoying watching it and we'll continue to watch it evolve. So thank you for coming on. Alright, he's Alex. I'm Jeff. You're watching the cube. We're at PagerDuty summit 2019 in downtown San Francisco. Thanks for watching. We'll see you next time.
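As a small illustration of the integration point discussed throughout this interview — a monitoring check emitting an alert that PagerDuty can route and group — a sketch against PagerDuty's public Events API v2 might look like this; the routing key and service details are placeholders, not any particular vendor's integration.

```python
# Sketch: send a trigger event to PagerDuty's Events API v2 from a monitoring
# check. The routing key and service values are placeholders.
import requests

ROUTING_KEY = "your-integration-routing-key"  # placeholder


def trigger_alert(summary, source, severity="critical", dedup_key=None):
    """Send a trigger event; return the dedup_key PagerDuty groups on."""
    event = {
        "routing_key": ROUTING_KEY,
        "event_action": "trigger",
        "payload": {"summary": summary, "source": source, "severity": severity},
    }
    if dedup_key:
        event["dedup_key"] = dedup_key  # reuse to update/group an existing alert
    resp = requests.post("https://events.pagerduty.com/v2/enqueue", json=event)
    resp.raise_for_status()
    return resp.json()["dedup_key"]


key = trigger_alert("Checkout latency above SLO", source="checkout-svc-prod")
print("Alert grouped under", key)
```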

Published Date : Sep 24 2019


Jon Hirschtick, Onshape Inc. | Actifio Data Driven 2019


 

>> From Boston, Massachusetts, it's theCUBE, covering Actifio Data Driven 2019. Brought to you by Actifio. >> Welcome back to Boston, everybody. You're watching theCUBE, the leader in on-the-ground tech coverage. I'm Dave Vellante, here with my co-host Stu Miniman; John Furrier is also in the house. This is Actifio's Data Driven '19 conference, their second year. Jon Hirschtick is here — he's the co-founder and CEO of Onshape. Jon, thanks for coming on theCUBE. >> Great to have you. >> Great to be here. >> So I love asking a co-founder: why did you start the company? >> Well, we founded Onshape because we saw an opportunity to improve how every product on Earth gets developed — to let people who develop products do it faster, be more innovative, and do it through a new-generation software platform based in the cloud. That's our vision for Onshape. That's why.

>> Okay, so that's great — you started with the why. And the what is a new generation of software capabilities to build great products, to visualize, to actually create. >> We took the power of cloud, web, and mobile and used it to re-implement a lot of the classic tools for product development: 3D CAD, data management, workflow, bill of materials. These may not mean anything to you, but they mean a lot to product developers, and we believe that by moving to the cloud, by rethinking them for the cloud, we can give people capabilities they've never had before. >> Jon, bring us in tight a little bit. So, you know, I think I've heard a lot the last few years — it's like, well, I can just do everything in simulation, computer simulation; we can have all these models; 3D printing is changing the way I build prototypes. So what's kind of the state of the state in your field? >> So the state of the art in our field is to model a product in three dimensions in the computer before you build it, for lots of reasons. For simulation, for 3D printing, you have to have a CAD model to do it — to see how it'll look, how parts fit together, how much it will cost. Really, every product today is built twice. First it's built in the computer, in three dimensions, as a digital model; then it's built in the real world. And what we're trying to do is take those 3D modeling, data management, and collaboration tools to a whole other level — to turbocharge them, if you will — so that teams can work together even if they're distributed around the world, they work faster, and they don't have to pay a tax to install and care for and feed these systems, which are very complicated — and a whole bunch of other benefits. >> So when we talk about the cloud model, you're talking about a SaaS model, a subscription model, a different customer experience — all of the above? >> All of the above. Yeah, it's definitely a SaaS model — we do only SaaS. We're hosted in Amazon; AWS — we're all in with Amazon. It's a subscription model, and we provide a much better, much more modern, more productive experience for the user. >> So you're disrupting the traditional CAD business — is that right? >> I mean, more than CAD — CAD-plus — because there's no such thing as a CAD company anymore. We're essentially disrupting the systems that we built, because I've been in this business 38 years now. I've been doing this, and I feel like I'm about half done, really. >> Really? Talk about your career — the way you got started. >> Well, I grew up in Chicago. I went to MIT and majored in mechanical engineering and knew how to program computers.

And I go to get an internship in 1981, and they say: computers, mechanical engineering — you need to work on CAD. And I haven't stopped since, you know, because we're not done. You know, I'm still working here. >> Before we get off the MIT thing — you were part of, you know, a quite well-known group there. Tell us a little bit about that. >> What are you talking about — the American Society of Mechanical Engineers? ASME? I was actually an officer in ASME. >> I know those are great events, but the number 21 comes to mind. >> You're talking about the MIT blackjack team? Yes, I was a player on the MIT blackjack team — the team featured in movies, TV shows, and all that. Yeah, a very exciting thing to be doing while I was working at the CAD lab as a grad student, you know, pursuing my legitimate career. There was also, uh, playing blackjack. >> Okay, so you've got to add some color to that. What was the goal of the MIT blackjack team? What did you guys do? >> The goal of the MIT blackjack team was, honestly, to make money using legal means of skill to obtain an edge playing blackjack. And that's what we did, using — guess what — the theme of data, which ties into this Data Driven conference and what Actifio is doing. I wish we had some of the data tools of today. I wish we had those 30 years ago; we could have done even more. But it really was to win money through skill. >> Okay, so you weren't wired — is that right? I mean, it was all sort of— >> No. At the time, you could not use a computer in the casino — legally, it was illegal to use a computer — so we didn't use it. We used the computer to train ourselves, to analyze data, to develop our systems, which is very common. But in the casino itself, we were just operating with good old — you know — this computer. >> Okay. And this computer would — what, you would count cards, you would try to predict— >> Yeah, count cards and predict. Very good observation there. Card counting is really essentially prediction. In a sense, it's knowing when the remaining cards to be dealt are favorable to the player. That's the goal of card counting and the other systems we used. We had some proprietary systems, too, that were not very well known. But it was all about knowing when you had an edge, and when you did, betting a lot of money, and when you didn't, betting less; doubling down on high-probability situations, and so on. >> So did that precede, or did that catalyze, like, you know, four decks, eight decks, 12 decks? Or were there already multiple decks? >> So I don't think we drove them to have more decks. But some of the systems our team pioneered did drive some changes in the game, which are somewhat subtle — I could get into it; I don't know how much time we have — but they were minor changes that our team drove. The multiple decks were already well established by the time my team came up. >> How did you guys do? You know, what was your record? >> I like to say we won millions of dollars during the time I was associated with the team, and we pretty consistently won. We didn't win every day or every weekend, but we'd run a project for, say, six months at a time — we called it a bank, kind of like a fund, if you will — and in those six-month periods we never lost.

We always won something, sometimes quite a bit. >> Was it part of your data model — understanding certain casinos? Were there certain casinos that were more friendly to your methodology? >> Yes, certain casinos have either differences in rules or, more commonly, differences in what I'd just call conditions. Like, for instance, if there are a lot of people betting a lot of money, it's easier to blend in, and that's a good thing for us. It could be their aggressiveness about trying to find card counters, which would vary from casino to casino — those kinds of factors, and occasionally minor rule variations to help us out. >> So were you very welcome at the casinos — is that—? >> Well, I wasn't that welcome. I've actually been barred at many facilities. >> Tell us about that. >> Well, you get barred — you usually get quite politely asked to leave by some big guy, sometimes a big person, but sometimes just, honestly, polite people who will come over and say, hey, John, we'd rather you not play blackjack here. You know, we only played in very upstanding, professional kinds of facilities, but still, the message was clear: you're not welcome here. In Las Vegas, they're allowed to bar you from the premises with no reason given — it's just the law there. In Atlantic City, that was not the law. But in Vegas they could bar you and just say, you're not welcome; if you come back, we'll arrest you for trespassing. >> Yeah. And you really think — you said everything you did was legal. >> You know, we were kind of gaming the system, I guess, through, you know, playing the probabilities and playing well. >> But it's interesting — the casinos can rig the system, right? They can never lose, but the player never gets to bet against the house. How did you — did you at all apply that experience, your affinity to data — you know, let's fast forward to where you are now. >> So I think I learned a lot of lessons playing blackjack that apply to my career in design software tools — at SolidWorks, my old company, and then Dassault Systèmes, who acquired SolidWorks, and now at Onshape. I learned that data and rigor can be very powerful tools to win. I learned that even when everyone you know will tell you you can't win, you still can win. You know, a lot of people told me blackjack would never work. A lot of people told me SolidWorks would never work. A lot of people told me Onshape would be impossible to build. And you learn that you can win even when other people tell you you can't. You learn that the long run is a long time — people, you know, with blackjack, you have to play thousands of hands to really see the edge come out. So I've learned that in business: sometimes you'll see something happen and you just say, just stay the course, everything's going to work out right. I've seen that happen.

>> Well, they say in business, oftentimes, if people tell you it's impossible, you're probably looking at a good thing to work on. Yeah. So what made it ostensibly impossible? How did you overcome that challenge? >> You mean with Onshape? >> With Onshape. >> A lot of people thought that using cloud-based tools to build all the product development tools people need would be impossible. Our software tools in product development model 3D objects to the precision of the real world — you know, a laptop computer, a wristwatch, a chair — it has to be perfect. It's an incredibly hard problem. We work with large amounts of data, we work with really complex mathematics, huge computing loads, huge graphics loads, interactive response times. All these things add up to people feeling, oh, well, that would never be possible in the cloud. But we believe the opposite is true. We believe we're going to show the world, and in the future people will say, you know, we don't understand how you'd do it without the cloud, because there's so much computing required. >> Yeah, right. It seems — you know, we're heavy in the cloud space, and if you were talking about this 10 years ago I could understand some skepticism. In 2019, all of those things that you mentioned — I can spin it up, I can do it faster, I can get the resources I need when I need them, at good economics. That's what the cloud's built for, as opposed to having to build out all of these resources yourself. So what was the big technical challenge? Was it latency? Was it tooling? >> So performance is one of the big technical challenges, as you'd imagine. You know, with Onshape we deliver a full set of tools, including CAD, formal release management with workflow — if that makes sense to you — bills of materials, configurations: industrial grade, used by professional companies, thousands of companies around the world. We do that all in a web browser, on any Mac, Windows machine, Chromebook, Linux computer, iPad — you name it. We run on all these devices; we're the only tools in our industry that will run on all these devices, and we do that kind of magic. There's nothing to install. I could go and run Onshape right here in your browser. >> You don't need a 40-pound laptop. >> No, you don't need a 40-pound laptop, and you don't need to install anything. We took our inspiration from tools like Workday and Salesforce and Zendesk and NetSuite. It's just that we have to do 3D graphics and heavy-duty release management — all these complexities that they didn't necessarily have to deal with. The other thing that was hard was not only a technical challenge like that, but we had to rethink how workflow would happen, how the tools could be better. We didn't just take the old tools and throw them up in a cloud window. We said: how could we make a better way of doing workflow, release management, and collaboration than it's ever been done before? So we had to rethink the user experience and the paradigms of the systems. >> Well, you know, there's a lot of talk about the edge and whether it's relevant for your business, and there's a lot of concern about the cloud being able to support the edge. But just listening to you, Jon, it's like — well, everybody says it's impossible. Maybe it's not impossible, but you can't solve the speed-of-light problem. Any thoughts on that? >> Well, I think all cloud solutions use the edge to some degree. If you look at any of the systems I just mentioned — Salesforce, Workday, Google Maps — they're using these devices. I mean, it's important that you have a good client device; you have a better experience. They don't just do everything in the cloud. To me, they're like a carefully orchestrated symphony that says: we'll do these things in the core of the cloud, these things nearer the user, and then these things we'll do right in the client device. So when you're moving around your Google map, or when you're looking at a big report in Salesforce, you're using the client to do this. We have some amazing people on our team — like, we have the fellow who was CTO of BladeLogic, Robbie Ready, and he explains these concepts to me; and John Russo, who came to us from Verizon. These are people who know about big systems, and they helped me understand how we would distribute these workloads. So there's no such thing as something that runs completely in the cloud — it has to send something down.

>> So talk about the company — where you're at. You guys have done several raises, you've got thousands of customers; you maybe want to add a couple of zeros to that over time. What are the aspirations? >> Yeah, correct. The good news is we have thousands of customer companies designing everything you could imagine — some things you never would have thought of — everything from drones to, we have a company doing nuclear counterterrorism equipment. Amazing stuff. We have people doing special-purpose electric vehicles, we have toys, we have furniture — everything you'd imagine. So that's very gratifying for us. But thousands of companies is still a small part of the world. This is a $10 billion a year market, with $100 billion in market cap and literally millions of users. So we have great aspirations to grow our number of users and to grow our tool set capability. >> So let's talk about TAM for a second. So, $10 billion current TAM — are there adjacencies emerging, with all these things like 3D printing and machine intelligence, that actually could significantly increase the TAM when you break out your binoculars, or even your telescope? >> Yes, there are adjacencies increasing the TAM. Like you say, new areas drive us. So obviously if someone is doing more additive manufacturing, more generative design, they're going to have more use for tools like ours. The other thing that I've observed, if I can add one of my own observations: I think design is becoming a greater component of GDP, if you will. Like, if you look at how much of the goods in the world are driven by design value, versus a decade or two ago, or when I was a child — you know, I just see this as an incredible amount. Products are distinguished by design more and more, and so I think we'll see growth also through the growth in design as an element of GDP. >> Jon, I love that observation. Actually, you know, my traditional engineering education didn't include much — a lot of design thinking. It wasn't until I was in industry for years that I had a lot of exposure to that. And it's something where we've seen a huge explosion in the last 10 years. And if you talk about automation versus people, it's like the people — that design, that creativity — is what's going to drive it. >> Absolutely. You know, we just surveyed almost 1,000 professionals — product development leaders. Honestly, we haven't published our results yet, so you're getting a preview; we're about to publish it online. And we found that top of mind is design process improvements, over any particular technology — be it machine learning or, you know, additive manufacturing. Those are tools to develop new products, but ultimately you have to have a great process to be competitive in today's very competitive markets. >> Well, you've seen the effect, the impact that Apple has had on sort of awakening people to the value of great design. >> Absolutely. You have to go back to the Sony Walkman — you know, I remember when I first saw one, right? That was very interesting design. >> And then, you know, the Dark Ages compared to today. I hate to say it — and it's not a shot at Sony, but Sony was the Apple of that era. And what happened? Did they drop the ball on manufacturing? Was it cost? >> No. They lost the design leadership pole position. They lost that ability to create products with that kind of impact. Now it's Apple — and it's not just Apple. You've got Tesla, who has lit up the world with exciting design. You've got Dyson. You know, you've got a lot of companies that are saying it's all about design. For those companies it's not that they're cheaper products — they're certainly rethinking things, pushing, yeah, the way you feel when you use these products, the senses. >> So that's what the brand experience is becoming. All right, Jon, thanks so much for coming on theCUBE and sharing your experiences with our audience. >> Well, thank you for having me. It's been a pleasure, really. >> Our pleasure. All right, keep it right there, everybody. Stu Miniman, Dave Vellante, John Furrier — we're at Actifio Data Driven '19 from Boston. You're watching theCUBE. Thanks for watching.
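The bet-sizing idea Hirschtick describes earlier in this interview — betting more only when the cards remaining to be dealt favor the player — can be sketched with the standard, publicly documented hi-lo count. This is a simplified illustration, not the MIT team's proprietary methodology, and the bet unit and thresholds are arbitrary.

```python
# Simplified hi-lo card-counting sketch: keep a running count, normalize it by
# decks remaining ("true count"), and scale bets only when the count is
# favorable. Illustrative only.
HI_LO = {**{r: +1 for r in "23456"},
         **{r: 0 for r in "789"},
         **{r: -1 for r in ["10", "J", "Q", "K", "A"]}}


class HiLoCounter:
    def __init__(self, decks=6):
        self.decks = decks
        self.running = 0
        self.seen = 0

    def observe(self, rank):
        """Update the running count for one dealt card."""
        self.running += HI_LO[rank]
        self.seen += 1

    def true_count(self):
        """Running count divided by the (approximate) number of unseen decks."""
        decks_left = max(self.decks - self.seen / 52, 0.5)
        return self.running / decks_left

    def bet(self, unit=25.0, max_units=8):
        """Minimum bet with no edge; ramp up as the true count rises."""
        tc = self.true_count()
        units = 1 if tc < 2 else min(int(tc) - 1, max_units)
        return unit * units


counter = HiLoCounter(decks=6)
for card in ["5", "K", "2", "3", "A", "6", "4"]:
    counter.observe(card)
print(round(counter.true_count(), 2), counter.bet())
```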

Published Date : Jun 18 2019


Rowan Trollope, Five9 | CUBEConversation, January 2019


 

>> Welcome to this special CUBE Conversation. I'm John Furrier in the Palo Alto studios of theCUBE, here with a special guest, Rowan Trollope, CEO of Five9, formerly of Cisco and a CUBE alumni. Great to see you. Thanks for joining me today. >> Great to see you, John. >> So let's talk about the future of the contact center. You've got a new role, CEO of Five9. Looks like a great opportunity. Tell us about it. >> Well, the contact center is really where it's at right now in the UC space and in the collaboration space. And frankly, in the digitization trend, most companies are realizing that the experience they give to their customers has got to transform. Customers are telling them that if they don't fix the experience they deliver, they're going to leave the businesses they're doing business with. So I think it's really emerging as this really hot, interesting space, and a place where businesses recognize they have to spend money and do a much better job. >> One of the things we've talked about in the past is that you're always on the wave of cloud and data; you've always had that vision in our previous conversations. Now Five9 is in the contact center, kind of an old, legacy way of doing things: voice over IP, managing customer relationships, whether it's support or outbound. That seems to be changing with cloud computing and the role of data, and now machine learning and AI have really been an accelerant. So what's your vision over the next five years as this starts to transform and people reimagine what it's going to look like for their businesses? Because certainly customer relationships are changing. People have multiple devices, they're on any platform, they're moving horizontally around different websites and different places, they're on the go. A lot of change happening. What's your vision? >> There is a lot of change happening, and that change is primarily driven by consumer behavior and enabled by technology. The biggest factor, in my opinion, affecting businesses is that we're in the age of the empowered consumer. Ten years ago, for example, my wife was bugging the crap out of me about cleaning up my garage, and at the time, the way I did that was I ran down to Home Depot, I looked at what they had on the shelf, I picked a shelving system, I brought it home, and I set it up. Ten years later, and this is just about a year ago, we had moved since then, the garage was yet again a mess, and I'd been getting a hard time about it. So finally I said, okay, okay, I'll organize the garage. And what do I do this time? I get on my phone and I search for garage organizing systems, and I find lots of different forums and people talking about things, and I read customer reviews and so on and so forth. I do a whole bunch of research, I actually call a couple of the companies, I make three different calls just to get some details about their products that I couldn't get online, and ultimately I order one, and it shows up at my house. So ten years ago, you have a not very empowered consumer: I took what was on the shelf, and that's what I got. Ten years later, you have zero trips to retail brick and mortar, and you have a very empowered consumer: me. That consumer has lots of options, lots of choices, and made three calls to a contact center, all in the span of ten years, powered by the internet, powered by my mobile phone, powered by connectivity and so on and so forth. So every business, essentially, is dealing with this challenge, and my expectation in terms of who I'm going to do business with is heavily influenced by the quality of their website, the quality of the experience they offer, the quality of their community and the user reviews that come back. Some of the commentary is like: I got this thing and it was missing some stuff, I couldn't get hold of them, they were super hard to deal with; I'm not going to do business with that company. So part of that transformation over the last ten, twenty, thirty years has been that the empowered consumer gets to make a choice, and they don't have to do business with you if you don't deliver a great experience. That's moving the contact center industry from being sort of an extension of the phone system that we really don't want to think about very often into something that's really, really important for businesses, and I was seeing that left and right coming from my previous job. >> It's interesting. It's an opportunity and a challenge. On one hand, for a company dealing with the old way of doing it, it becomes an opportunity when user expectations and experiences are impacted, because that's a buying decision or a relationship, an emotional decision. What does this opportunity mean for companies? Because this now flips to the potential sellers of services and products: they now have an opportunity to take advantage of this new dynamic where users are in charge, are empowered. What's the opportunity for companies? >> So it's two things. One, if you're a disruptive company coming out, or starting up a new company and going after this, you can look at the user experience as part of your differentiation and value proposition: I'm not only going to have a great product, but I'm going to wrap it in a great experience. And that's the expectation today for any new company. Take a company like Square, for example. Yes, they have a beautiful little card swipe reader, and they have a nice industrial design, but that's not all you get. You get a team behind that, a company that provides great support and a great experience. When you sign up for Square, the first thing you get is an email from their CEO sort of welcoming you to the community, and you see that with a lot of modern companies. Tesla is another great example, where you see a really tremendous experience built around what is fundamentally a great product. And that's not something you would see with the incumbents. I think if you're a disrupter or a new company, or you're looking at transforming an industry, then the opportunity is to think about the holistic customer experience. If you're an incumbent, or you've been in the business for a while and you're facing one of these digital disruptors, if you want to call them that, then your opportunity is to reimagine your customer experience end to end and put some time and effort into it. You know, the reality is still, and I was in the call center almost thirty years ago, that the call center in most businesses, most incumbent businesses today, is a cost center, because it's something you essentially have to deal with once the product has been sold. And it's not a place that most executives in most businesses want to go. In fact, in many cases it's been sent to other countries; your contact center is somewhere you don't even know. It's in the Philippines, or some other country, or it's in India, or it's in a less expensive state, which is all fine. But it's not fine that executives and companies don't want to go and see where the front line of their business is, which is the place where that experience meets the customer. So if you're an incumbent, you really have to think about putting your contact center as a priority for your business and reimagining the experience. And look, go walk a day in their shoes and experience what it's like. >> One of the things we've been reporting on over the years, and you've been following theCUBE and SiliconANGLE, is that the talk of CX, or customer experience, has been going on for many, many years, somewhat aspirationally, outside of the corner cases of companies that actually specialize in differentiating on customer satisfaction and user experience; that's obvious when you check the box there. But as the market changes, it's now attainable, and we're seeing real, actionable execution for companies to modernize what was once a call center, as you pointed out. How do they do that? What's happening? Certainly cloud computing helps, and data and AI are kind of at the table. How does a company that wants to modernize, gain a real advantage, and change their business approach do that? What do they do? What's the plan? You guys seem to be positioned for that. What do I do? What's the playbook? >> Go to Five9.com. No. The reality is that the first thing you have to do is really believe that this is an important aspect of delivering your business to your end consumer, and look at what makes up your competitors' offer, not just their product but their offer, and sort of internalize and get the idea that, okay, yes, it turns out this is important and I care about it, and I'm going to go spend time on it. Because, look, the reality is we know how to deliver this in any business. You don't have to be a genius to figure out how to deliver great customer service. What customers want is actually really simple. When I call, you answer the phone. Don't send me through some rigmarole of IVRs and other technology hurdles. Don't hide your phone number; when I want to get hold of you, make it easy for me to contact you. And when I contact you, I want someone who understands me, who knows the problem that I have, who's an expert who can help me, and who has empathy, who can really connect with me and relate to me. And if there's a problem, it's not just "I'm going to solve the problem," it's "we understand and we're sorry, and we're going to make this better for you, and we're going to follow up with you." So that's a big part of what you have to do, and it turns out doing that is not hard. You don't have to be a genius to figure out how to do it. Now, there are lots of technology companies out there today that make that easy. And the history of the contact center, essentially over the last twenty-five years, has been kind of stuck in a phone closet somewhere, with technology that has actually hindered what smart people knew. We knew how to do this. We knew how to deliver a great experience. The problem was you had this legacy technology, and you had to call somebody in a data center somewhere else, and they were like, that's going to be hard, it's going to cost millions of dollars, and our system doesn't support that. And so there were technology shackles on customer service experts, and executives in businesses were like, wow, that sounds like it's going to be expensive and take a long time. Now we're in a world with the cloud where, within a few clicks and a few minutes, you can deploy a contact center. You go to our site or other sites, and you can instantly, very, very quickly, have a contact center that is modern, that is flexible, that has all the latest features and functionality. And so technology is no longer the hindrance; that has been taken off the table. Our company was born in the cloud, and there are other companies out there people can use. The bottom line is this is not really a technology problem anymore. >> So people have multiple devices and a lot of different channels through which they engage; that's the expectation. On the company side there's a variety of sets of resources that could be deployed at any given time. So you have this now integrated kind of philosophy with cloud. What do cloud and data, and now AI, do to the contact center? How does the contact center change? What does it look like? >> Yes. The most important thing that has happened with the cloud computing wave is, first, that it made technology easy to consume. It used to be really hard and expensive, like we just talked about, just to get technology, and then once you got it, you were stuck with it and it didn't change, ever. Okay, we're kind of beyond that now with the cloud, and those were the table stakes. But something else happened when we started moving technology to the cloud that was more important, and that was that we started collecting data. And as we started to collect data, that became really interesting because of one other thing that happened, which was the revolution in machine learning. It started about ten years ago with some very big scientific breakthroughs, on deep learning more specifically, and what that deep learning approach needed was lots and lots and lots of data in order to work. It was a great scientific breakthrough, but it kind of stalled a little bit at the beginning because there wasn't a lot of data out there from which you could actually get the benefits. Well, as companies have more and more been moving to the cloud, what that's creating is centers of data, and not just data for your own company, because lots of businesses don't have enough data on their own to power machine learning algorithms. Machine learning algorithms are famously data hungry. There's a famous saying from a bunch of folks in the AI industry that more data is better data: the more you have, the better you are. In fact, you can also say that having more data is better than having a great algorithm. More data will always win. So what the cloud has unlocked is massive amounts of data, and that data, married with the breakthroughs in machine learning, is important for actually getting at the root cause of the problem of bad customer service and support. And that data, in our industry, is customer conversations: what your customers are actually telling you, either by text or by voice or by email. That information is really interesting and can be married with machine learning technology to provide automation. >> It's interesting you mention customers; I think that's a key point. As we look at the data world, people certainly look at it from a tech perspective, at the technology they can apply to data to assist things. But what customers tell you, when you're in business to serve customers, is probably the most valuable data. So, as you said earlier, do people hide the phone number because they want to shy away from engaging with customers, to not support them, or hope they go away? They might be indifferent to serving them. You're saying the reverse: be proactive, engage the customer, get that data so you can iterate on it. So I get that, and I think that's real innovation in terms of the direction. But dealing with customers is also the human side of it. Customers want to know that there's someone on the other side; you bought your garage organizing system partly because of that component. How is the role of humans and machines impacting this new transformation from call center to contact center to, essentially, customer center? What is that human piece? Super important? >> Yeah, we don't see technology replacing all the humans, actually, and this goes back to my experience in the contact center many years ago. In my first job I sat in between two different agents, one named Dave and one named Ken. Ken was really warm and effusive, and I remember he used to get gifts on his desk from customers; they would send him flowers and chocolates and things like their products and so on. And he could tell a customer to shut up in a nice way, and they would love him for it. I mean, it was amazing that he could do this. It was all about empathy. He didn't actually know all the answers to all of the questions, but he created these incredible fans amongst the customers. The guy to my right, Dave, was super smart. He just had about as much empathy as a rock. And he could answer all the questions really fast. I would learn things from him, but customers didn't like him. What I saw in those two folks was that you can't do one or the other; you need both. And what computers, and machine learning specifically, are able to do, now that we're getting all this data through the cloud, is predict the answers to customer questions really quickly. So that's a sort of mastery: machines can help with mastery. They can help with answering every question instantly, or knowing the best thing to say to a customer at any given time. But what machines can't do is empathy. Humans are the ones that have to bring the heart. So what we're working on at Five9 is using machines to help agents, to give human agents mastery, and we're letting the humans focus on what they do really well, which is bringing the heart to the customer. And that creates a bond between a brand and a customer that is, like, unbreakable. >> I think you're onto something big here, because if we look at digital, the impact of digital technologies, and you can look at a variety of examples, from mainstream media to technology companies to any kind of industry or vertical, there's a lack of emotional IQ, or emotional quotient, and this seems to be what people are looking at. I'm just looking at some of the polarization with digital, in terms of media coverage, politics, or whatnot. You've started to see this focus on how to bring more empathy and more emotion to these systems, and I think users are responding to that. Can you comment on your reaction to that? >> Yeah, part of this starts with a confusion that is rampant in the contact center industry, which is the idea that people don't really want to talk anymore. This has been observed because we have new generations entering the workforce, like millennials, and we all have kids out there who would often prefer to text us than talk to us. But the reality is, and we surveyed this, that even millennials still prefer voice as the primary form of communication. So what is the error that people made? The error is conflating a bad voice experience with the idea that voice is bad, and that's just not true; it's observable, and we've gone and actually proven this. So what we've realized is that what you need to fix is the bad voice experience. What is that? It's, like, going into an IVR. Okay, that's frustrating. >> What's an IVR, real quick, to define it? >> The interactive voice response. It's the push-one-for-this, push-two-for-that. Everybody hates it, every company uses it, and it's like a stain on humanity. We need to get rid of those things because they're just awful. So you go into this tree and all that; okay, so get rid of it. By the way, everybody five years ago said, oh, we can fix that problem with bots, and that actually is almost worse. I've been trying to use bots for the last three months, doing my own little test on this and communicating only by text, and whenever I hit a bot, the last thing I want to do is talk to a computer; I want to get to a human. So my first question now is "Are you human?", which is my version of pushing zero to get through the IVR and get to an agent. Okay, so there's been a confusion about this. And when you go back to what you said earlier, this notion that empathy is what has tended to be lost: well, it turns out it's much harder to make an emotional connection in text than it is with voice, and people in general are not as good at communicating emotional content in text, because they're generally not very good writers and they don't have time, whereas they're excellent at doing that with their voice. Think of "I'm not happy" said flatly versus "I'm not happy" said with feeling; there's a huge range of emotion that can be communicated with the human voice, which is extremely powerful. So if we can fix the bad voice experience, take away all that crap, so that when you get someone, they know who you are, they understand you, and they can get to the root cause of your problem very quickly, then it turns out that the human voice is extremely useful. And we're now entering an era where we can use the computer to talk to humans in unique and interesting ways, though I believe that's actually still a little further out, for a variety of reasons. But in the meantime, computers and AI can help agents master their craft and let them focus on the empathy side of things. >> So in terms of Five9, the core problem that you're solving is what? >> We provide a flexible, easy to configure, easy to deploy, cloud-based contact center, and it's minutes or hours before you can have this technology deployed. You don't need to have a phone system. If you look at a call center from the old days, it's lots of phones on desks; in our world, you sweep those away. You have a computer and a web browser, you plug in a headset, and your agent could be sitting anywhere in the world. They get a beautiful web UI that's deeply integrated into Salesforce or Zendesk or ServiceNow, or Oracle, or any CRM system that you have, and we give you this really, really tightly integrated end-to-end experience. We just make all of that easy, and it handles any kind of contact, whether it's voice or text or email; it all goes through our system. It's all in the cloud, it's really easy, and it's affordable. >> And the data management is pretty straightforward? Is it going to be flexible and agile enough to use with other things as people start having different touchpoints? >> Absolutely. In fact, with our system, all your calls are recorded into the cloud, as are all of your contacts. All of that is stored securely on our servers and is accessible to you. There's a whole range of apps in the contact center you can plug in on top of our platform, including things like Verint and Calabrio and this whole area of workforce optimization and so on, so lots and lots of technologies are actually built on Five9. When you buy our technology, you really get a technology platform with a rich ecosystem of apps that plug in on top of it. Where we sit in that value chain is as the core platform that delivers the data and the pipes, and we also provide the intelligence that runs on top of that data, and that's where we're heading. >> And that's your core innovation: get that cloud base in, get it up fast. >> That's part of it, and I'd say that's the product and platform part. The second part is really the offer. It turns out that if you go to most companies, the things that make their customer experience poor, that they want to fix, are solvable through capabilities that are already available in the platforms they generally already have. What they're missing is a partner who can help them make that happen, because it turns out it's not easy. We've got a very flexible platform; it's been built over more than a decade, so it's really rich in features. But more and more, what we see our customers wanting from us is a complete offer, and that includes professional services, on-site support, people to help you and hand-hold you and walk you through that process. So we'll kind of go the extra mile for our customers and give them an end-to-end solution to their problem, not just a piece of technology. Now, if just technology is what you want, our technology works for businesses with two support center reps, so we scale all the way down to small shops. But we also have contact centers running that have four thousand reps, so we cover that entire spectrum. The small customers want something easy, preconfigured, off the shelf, just go; there's nobody coming on site for those customers. If you have four thousand reps, we've got people on site, we darken the skies with our support people and our engineers and everyone else, and we provide a complete solution to our customers. >> That's great. Well, congratulations. I think having that innovation and the cloud approach gets it up fast and gets the value delivered, and then as they grow, you can flex with the size of the organization; you're not limited. So I want to get to the panel discussion you're doing at Enterprise Connect coming up in Orlando; that's where we first met. This has been a show that's been talking to the enterprise customers who have been evolving from voice over IP to integrated communications to unified communications, from that world of voice, data, and systems to now an open, cloud-based, data and AI world. So it should be exciting. On the panel, I don't want to give it away, but what are you talking about? The title is "Why customer engagement is leading the enterprise communications conversation." Give us a quick teaser. >> I'm going to be focused on what's coming next. One of the big reasons that drove me to this company, and that's attracted some top talent in the industry, is that many of us see that the era of the cloud has actually opened these golden doors to a new land, which is powered by artificial intelligence and machine learning, and we see that solving some of the root-cause problems we talked about earlier, the bad customer service and experience that has been talked about for a long time but hasn't been solved. Finally, the technology has actually caught up to the problem. And so our big play at Five9 is to become the world's best self-learning, intelligent contact center platform. We see that the contact center is shifting from being less a contact center and more a center of customer data, and that is the key insight we had: wow, this is a lot of really interesting data. It turns out what your customers say to you is really, really important. And today, in almost all contact centers, almost everywhere, that data goes nowhere. It goes away, because in its raw form it's not very useful. Most of what customers are telling you is actually voice traffic, and that sits in wave files, if you record it at all, which many customers don't, and then they're not very useful, so they get thrown away. We figured out that that information is ridiculously valuable, but it's only become valuable recently because of advances in machine learning that allow us to do speech to text reliably, as good as humans. Speech to text has been around for a while; it's just been really crappy. Now it's really good. And now that it's gotten really good and affordable, every customer can take advantage of it. Because all of our customers have all of their data stored in our cloud and all calls get recorded, we can now start to translate those voice wave files into text and provide that as insight back to the customer. We signed a partnership with Google to leverage their technology to help us make sense of all of those spoken conversations, and then, ultimately, all of the text. So we believe the next generation of the contact center is going to be less about a contact center and more about a center of customer data, which can be used to drive automation and insight back into the business. That's the big transformation for the next decade in the contact center. >> Taking the contact center and making it a customer center. This is kind of compatible with >> instead of a data center, it's a center of customer data. >> I mean, it's really in line with how DevOps changed cloud computing, where you had dev and ops coming together, and you're taking that concept, that ethos, to the contact center. >> Look, I'm not sure it's exactly like DevOps, but I guess you could draw that correlation. I think what you do see in businesses is that there are new functions popping up all the time. A recent function that's popped up is customer success. And what is customer success? It's all about reaching out to your customer to help make them successful. The insight that led to customer success is that when you have a services business, if you engage with your customer proactively, you can actually make more money and drive higher value, both for the customer and for the business. And I relate this back to my first experience in business. I was in support, and we were on the twelfth floor; we had a whole floor of people. I remember our boss came down one day and said something really interesting. They said, every time you guys pick up the phone, we lose money. If you can believe it, that's how it was; it sounds crazy, but that's what happened. I felt kind of bad about that. I was like, wow, I don't want to answer the phone, but it's ringing all the time. So what am I going to do? Well, the answer was we hired someone, not me, but the team hired someone, to hide the phone number, which is sort of logical if you're told that when you pick up the phone you're going to lose money. You want fewer phone calls. Well, how are you going to do that? Make it so the company's customers can't find the number, and guess what, tons of customers still did. The other thing we did was implement an IVR to try to get customers to serve themselves. So really the motivation >> was hiding from the customer experience. >> We were running away from the customer experience, and I see this in hindsight. Right on the floor above me, it wasn't the thirteenth, it was the fourteenth floor, was a sales floor, and they were doing everything they could to proactively reach out and contact customers, who didn't really want to hear from the salespeople. So you had this situation where we had a floor of people, my floor, who were sort of running away from customers, and a floor of people who were trying to run towards customers, and we were both missing them. It was insane. What's now transpired is that businesses get this and go, wow, if I can deliver a great experience, it actually increases loyalty. It increases the amount of services that my customer will get. They get more value, I get more value. We want to run towards customers. We want to reduce the distance between a business and their customer to zero. We want that connection to be tight, and we want our businesses' customers to love them. And the way you get that love often comes through the contact center. So it's becoming much more >> strategic, connecting in and engaging with customers. >> And it's only going to be powered by machine learning, because you can't do this otherwise. I mean, you could do it by hiring lots and lots of humans, but it's really expensive and it does not scale. So the only answer to this problem, which we know how to solve, is to leverage technology, and it starts in the cloud. >> Great stuff. We'll see you at Enterprise Connect; theCUBE will be there. Great to see you, and thanks for coming on. This has been a special CUBE Conversation here in Palo Alto with Rowan Trollope, CEO of Five9, on solving the contact center problem: bringing it in, modernizing it, running towards customers, customer engagement, and a big panel coming up at Enterprise Connect. I'm John Furrier here in Palo Alto. Thanks for watching.

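As a rough illustration of the transcription step described in the interview above, turning recorded call audio into text that downstream analytics can use, here is a minimal sketch using Google Cloud Speech-to-Text, the kind of Google technology the partnership points to. The bucket URI, audio encoding, and sample rate are placeholder assumptions; this shows the general pattern only, not Five9's actual pipeline.

```python
# Illustrative sketch only: transcribe a recorded call stored in a cloud bucket.
# Assumes the google-cloud-speech client library is installed and credentials are
# configured; the URI, encoding, and sample rate are placeholder values.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=8000,                 # typical telephony audio
    language_code="en-US",
    enable_automatic_punctuation=True,
)
audio = speech.RecognitionAudio(uri="gs://example-bucket/call-recording.wav")

# Long-running recognition suits call recordings longer than about a minute.
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=300)

transcript = " ".join(r.alternatives[0].transcript for r in response.results)
print(transcript)
```

From there, the transcript text is what agent-assist or analytics models would consume; the recording itself stays in storage.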
Published Date : Jan 25 2019

Mike Ferris, Red Hat | AWS re:Invent 2018


 

>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018, brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Hey, welcome back everyone. We're live here in Las Vegas for AWS re:Invent 2018, and all the action is happening for Amazon Web Services. I'm John Furrier with Dave Vellante; Dave, six years covering Amazon. Great opportunity, a lot of news, and Red Hat is a big part of it. Mike Ferris is here, Vice President, Technical Business Development for Red Hat. Welcome back, good to see you. >> Likewise. >> A lot's going on with you guys since our Red Hat Summit days in San Francisco just a few months ago. >> Yeah. >> Big news hit. >> Yeah. >> The bomb heard around the world, the rock that hit the ground really hard, shook everyone up, surprised everyone including me. I'm like, "Wow, IBM and Red Hat." What an interesting relationship; obviously the history with IBM has been good. Talk about the announcement with IBM, because this is huge. Of course, big numbers, but impact-wise pretty big too. >> Yeah, it's exciting times, right? If you look at it from the perspective of Red Hat, this will allow us to really scale and accelerate what we've already been doing since really the 1994 era when Red Hat was founded, and it kind of validates a lot of what we've put into open source and enterprise customers since then. We really see a couple of key takeaways from this. One is that it's certainly going to give us the resources to really grow with the scale that we need. It's also going to allow us to invest more in open source in emerging areas, and to bring the value of scale, and certainly choice and flexibility, to more customers. And then there's the global advantage of hybrid and multi-cloud: we'll be able to reach more partners and customers everywhere. It puts us several years ahead of where we have been, and where we would have been, frankly, and ultimately our intent is that with IBM we'll become the leading hybrid and multi-cloud provider overall. >> Yeah, Ginni Rometty and Jim Whitehurst kind of ruined our Sunday; we were sitting down to watch football and here comes the announcement. And then Ginni kept saying, "It's not backend loaded, it's not backend loaded," and then you start to realize, wow, IBM has an enormous business of managing applications that need to be modernized, and OpenShift is obviously a great place to do that. So it's got to be super exciting for you guys to have that giant new opportunity to go after, as well as global scale that you didn't have before. >> And this extends the work we announced in May at Red Hat Summit with IBM, where we really focused on how we take WebSphere, DB2, and MQ, running on IBM Cloud Private, running on OpenShift, and make that the hybrid choice. So it's a natural extension of what we've already been doing, and it gives us a lot more resources than we would have otherwise. >> This is good. Coming into the next segment, what I want to chat about is RHEL, and what people might not understand from the announcement is the synergy you guys have with IBM. Being a student of Red Hat, having been in the industry when you guys were the rebels, open source, a second-tier citizen in the enterprise, the adoption then made you a tier one service. I mean, you guys have a level of service, 17 years or something, huge numbers, but remember where it all started. And then you became a tier one supplier to almost all the enterprises, so you're actually a product company as well as a huge open source player. That's powerful and unique. >> Absolutely. Even if you look at what Amazon is doing this week, and has been doing over the years, they're a huge value-add provider of open source technology as well, and one of the statements we've always made is that the public cloud would not exist if not for Linux and open source; everything has been based upon that. There's one provider that doesn't use Linux as the base of their platform, but certainly as we've made inroads into the enterprise, and I was there when it started, it went from just turning Red Hat Enterprise Linux on, to bringing it from the edge of the network into the data center, to talking about major providers like Oracle, HP, Dell, and IBM as part of that. Now we're looking at, is it a de facto standard? And everyone, including Amazon and all of its competitors, is really invested heavily in the open source world. >> And so, let's talk about the impact to the products. One of the things that has come up, at least on my Twitter feed and in conversations, is: okay, it's going to take some time to close the deal, you're still Red Hat, you're still doing your thing. What's the impact to the customers and to the ecosystem, in your mind? How are you guys talking about that right now? Obviously it's more of the same, keep Red Hat the same, unique, independent, but what new thing is going to come out of it? >> So, to be clear, the deal has not closed, right, so there's not a lot we're going to say otherwise. >> A year away; you've got a lot of work to do. >> Our focus is what it always has been: let's build the best enterprise products using the open source development model and make those available across all public and hybrid cloud environments. >> At a certain level, that's enterprise, multi-year, old Red Hat, same Red Hat model, alright. >> But let me follow up on that, because you're a believer in multi-cloud; we're believers in, whatever you call it, multiple clouds, and customers are going to use multiple clouds. We believe that, you believe that. It seems like Amazon has a slightly different perspective on that, >> because they're one cloud, >> in that there's greater value, right, because they're one cloud. But it seems like the reality, when you talk to customers, is: we're not just one company, we've got different divisions, and eventually we've got to bring those together with some kind of abstraction layer. That's what you guys want to be, right? So, your perspective on multi-cloud? >> Absolutely. Each individual department, each project, each developer in all of these major enterprises has a different vantage point, and yes, there are corporate standards, golden masters of RHEL that get produced and that everybody is supposed to be using. But the practicality of how you develop software, especially in the age of DevOps and containers and beyond, is that you have to have the choice necessary to meet your specific needs. And while we will absolutely do everything we can to make sure that things are consistent, and we started this with RHEL consistency, on and off premise, when we did the original Amazon relationship, the point is you need to be able to give people the flexibility and choice they desire, regardless of what area of the company they're in. And that's going to be the focus regardless of whether it's Microsoft, Amazon, Google, the IBM clouds, or international clouds with Alibaba; it's all the same to us, and we have to make sure it's there. >> What's always great about the cloud shows, and especially this one, one of my favorites, is that it really is DevOps deep in the mindset and culture. As you see AI and machine learning start to get powered by all these great resources, compute, et cetera, the developer is going crazy; there's going to be another renaissance in software development. And then you've got things like Kubernetes and containers now mainstream, Kubernetes almost, I'd say, the de facto standard. >> Yeah. >> That absolutely happened, and you guys had a big part in making it happen. People are now agreeing on things, so the formation is coming together pretty quickly, you're seeing the growth, and we're hearing terms like "co-creation" and "co-opetition"; those are signals for a large rising tide. Your thoughts? >> So, it's interesting. We were an early investor in Kubernetes. We actually launched OpenShift prior to Kubernetes, and then we adopted it and made a shift of our platform before it was too late. We did the same thing with hypervisors when we moved from Xen to KVM. The overall approach is: once we see the energy happening both in the community and with early customers, then you see the partners start to come on board and it becomes the de facto standard. It's really crucial for us as an open source company to make sure we follow those trends and then help mature them across the business ecosystem, and that's something we've loved being able to engage with. I mean, Google certainly instigated the Kubernetes movement, but then it starts to propagate, just like on the OpenStack side it came out of Rackspace and NASA and then moved on to different areas. So our focus is: how do we continue that choice and that evolution overall? >> How would you talk about the impact of Kubernetes, if someone says, "Hey Mike, what's the real impact, what is it going to accomplish at the end of the day?" What's your view of that? >> It will have the same impact that standardizing on Linux has had, but in this case for microservices and application packaging, and for being able to do DevOps much more efficiently across heterogeneous platforms. >> Does it make things easier or less painful, or do they go away? Is it automated under the covers? I mean, this is a big, awesome opportunity. >> The orchestration capabilities of Kubernetes, combined with all the other tools that surround key container platforms like OpenShift, really give the developer the full life cycle environment to take something from concept through deployment and on to the maintenance phases. And what we end up doing is looking at, okay, the technologies are there; what value-adds do we have around that to make sure that a customer and a developer can actually maintain this thing long term and keep their enterprise applications up? >> So, security, for example. >> Security is a great example, right? How do we make sure that every container that gets deployed on or by Kubernetes platforms is secure? Keep in mind, every container that's deployed has an operating system in the container itself; how is that kept up to date? How do you make sure that when the next security erratum is released, from us or possibly a different vendor, that container is secure? We've done a lot in our registry as well as our catalog to make sure that all of our partners and customers can see their containers, know what grade they have in a security context, and be able to grow that. That's one of the core things we see adding to this Kubernetes value at the orchestration level. >> It's not a trivial technical problem either. >> No. >> Sometimes microservices aren't so micro. >> It's been part of what we've done for RHEL from the start: how do we bring that enterprise value into technology that is maturing out of the open source community and make it available to customers? >> Yeah, one of the key things, first of all, is that OpenShift has been phenomenal; you guys did a great job with that, and I've been watching it grow. But I think a real seminal moment was the CoreOS acquisition. >> Sure. >> That was a real turbo boost for you guys, a great acquisition that fits in with the culture, and then Kubernetes just lifted from that. At the timing of all this, Kubernetes gets mainstream lift, people recognize that the standardization is a good thing, and then, boom, developers are getting engaged. >> Yeah, and you see what the CoreOS environment has brought us, from over-the-air updates for our platforms to being able to talk about a registry in the environment. That is additive to this overall messaging, it really rounds out the offering for us, and it allows us to participate even more deeply in the communities as well. >> Well, we're looking forward to keeping you covered. We love Kubernetes; we've got a special report on siliconangle.com called "The Rise Of Kubernetes," a dedicated set of content, and we're publishing a lot on Kubernetes. Final question I want to get to, because I think it's super important: what's the relationship you have with AWS? Take some time to explain the partnership, how many years, what you guys are doing together. I know you're actively involved, so take a minute. >> It is somewhat blurry; it's been a long time. The 2007 era is when we started in depth with them, and I can remember the early days, actually during the development of S3, prior to EC2, being able to say, alright, what is this thing and how does Red Hat participate in it? I think yesterday Terry Wise even mentioned that we were one of the first partners to actually engage in the consumption model, claiming partial credit for our $34 billion valuation that we just had announced. But overall the relationship really spawned out of that: how do we help build a cloud, and how do we help offer our products to our customers in a more flexible way? And that snowballed over the years, from early adopters just being able to play with it, to now, where many, many millions of dollars are being generated with customers, and, I think, hundreds of millions of hours of our products being consumed, at least within a month if not shorter timeframes, every time period we have. >> You know, that's an unsung benefit that people might not know about with Red Hat: you guys are in early markets because, one, everyone uses Linux pretty much these days for anything core and meaningful, and two, you listen to the community, so you guys are always involved in the big moving things: cloud, Amazon, 2007, when it was command line back then. >> Yeah. >> It wasn't even... I think RightScale had just come online that year, so you remember. You guys are always in all these markets, so it's a good indicator; you guys are a bellwether, and I think it's a good beacon to look at. >> And we do this certainly in the container space, the middleware space, and the storage space; we replicate this model, including in management: how do we actually invest in the right places where we see the industry and communities going, so we can actually help those? >> And you're very partner-friendly, you bring a lot to the table. I love the open source ethos; I think that's the future. That ethos of contributing to get value downstream is going to be a business practice, not just software, so you guys are a big part of the industry on that, and I want to give you guys props for it. Okay, more CUBE coverage here in Las Vegas at AWS re:Invent after this short break. More live coverage. I'm John Furrier with Dave Vellante; we'll be right back. (electronic music)

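Ferris's point about keeping every deployed container image patched starts, in practice, with knowing what is actually running. As a minimal sketch of that inventory step, assuming the official kubernetes Python client and a working kubeconfig, the snippet below lists the container images in use across a cluster so they can be checked against a registry's security grades. It is a generic Kubernetes example, not Red Hat's registry or catalog tooling.

```python
# Illustrative sketch only: inventory the container images running in a cluster
# so they can be cross-checked against registry security data or errata.
# Assumes the `kubernetes` client library is installed and a kubeconfig is available.
from collections import Counter
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in a pod
core = client.CoreV1Api()

images = Counter()
for pod in core.list_pod_for_all_namespaces(watch=False).items:
    for container in pod.spec.containers:
        images[container.image] += 1

# Images referenced only by a floating tag such as :latest are the first
# candidates to re-check whenever a new security erratum ships.
for image, count in sorted(images.items()):
    print(f"{count:3d}x {image}")
```

In practice this list would be compared against errata or registry health data to flag images that need a rebuild and redeploy.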
Published Date : Nov 28 2018

Paul Cormier, Red Hat | Red Hat Summit 2018


 

live from San Francisco it's the cube covering Red Hat summit 2018 brought to you by Red Hat hey welcome back everyone we're here live in San Francisco red hat summit 2018 s cubes exclusive coverage we're out in the open in the middle of floor here as open source has always done out in the open it's the cube doing our part extracting the cylinders I'm John for the co-host of the cube with John Troy you might coast analyst this week he's the co-founder of a firm advisory firm our guest case is Paul Comey a president and products on technology of Red Hat architecting the future of red hat and products and technologies all open source great to see you again major see you so thank you coming on so great keynote today you guys have done a great job here I thought the messaging was great but the excitement was strong we just came back off of a week in Copenhagen coop con where kubernetes clearly sees the de facto standard around kubernetes the core kubernetes with a lot of room to differentiate around you got sto service meshes a lot of exciting things for application developers and then under the hood and the new life being brought into OpenStack so there's clear visibility now into what's going on swim lanes whatever we call it people kind of see it so congratulations thank you magical moment Lucky Strike all on the cards give us some color you guys been working on this for a while go back and where did it all start and when did things start clicking together for you guys well I know I sometimes sound like a broken record here but I mean the key to our success is the commercialization of Linux I mean you know Linux we started Linux as a commodity play you know it was cheaper cheaper almost as good et cetera but it became such a powerful platform all the innovation you just talked about is built around Linux it's all tied into Linux so once we lay down the Linux base and the customer and the customer data centers which is such the logical extension to go to these new technologies because it really you really need to be a Linux vendor in order to be able to do a kubernetes to release to be able to support our containers release any of these things it's all just intertwining the Linux and your model is working honestly the open source is no secret that that's open open it's over proprietary and closed but you also have a community model that's feeding into the price of technologies Jim Weider zzyx you know went into detail on hey you don't you know you have a crystal ball and technology because you're smart guys but ultimately the users in the communities give you direct feedback of what's relevant and cool at the right time this is really where kubernetes Lucky Strike for you guys was really there you saw it so the commitment you jumped in can you explain that dynamic of how the products get fed in from the communities I'll give you actually a better example of OpenShift itself so we originally started OpenShift back in 2011 and we started it as a marketing project we started it as a as a cloud-based platform to get developers out there building to our platform and a lot of our customer base saw it and came to us and said I want this as a product this is really really powerful so we made a product out of it first one kubernetes wasn't around containers weren't around we'd built it on virtual machines we had we had what we called gears to lock in and and then containers started to morph in and read by release three we transformed it to containers then we brought in kubernetes because we had 
So we really listened to our customers. We started it as something we thought was going to be an expense, and it turns out to be one of our hottest platforms right now, based on what our customers and the community told us. >> Timing is everything, too, and the timing was good: as the clouds took off, scale started also becoming relevant. You see Amazon's success, now you've got Azure, IBM, and everyone's kind of seeing that opportunity. How are you guys looking at the container piece? Because we can look at the history — Docker, you know, trying to monetize too early; we've documented that well in theCUBE many times. CoreOS, a recent acquisition, a big one for you guys and strategic, but also a great team. Containers are super important. Talk about the role of containers specifically, not so much as a business model but as a lynchpin between how orchestration is moving and how these service meshes are coming out. >> Think about what containers are, first: containers are just Linux, carved up in a different way. You still have a kernel, you still have user space; the difference is you take just the user space you want with the application and you run it that way. So all the same lifecycle and security issues you have to fix on a standard Linux, you have to do in containers too. Containers have been around forever — they were in Unix, if we all remember — but the killer app for containers was that now, when I can bring just enough of the OS with the application, I can run that out to the cloud. That's how we get the app out to the cloud, that's how we get it onto the private cloud and out to any of the public clouds, how we traverse the clouds. So even though they've been around for a while, that's the killer app for containers.
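Cormier's framing — containers are "just Linux carved up in a different way," the same kernel with a sliced-up user space — maps directly onto kernel namespaces, the primitive container runtimes build on. A minimal sketch of that idea, assuming a Linux host, Python 3, and root privileges; this illustrates the namespace primitive in general, not how Podman, CRI-O, or any Red Hat tooling is actually implemented:

```python
import ctypes
import socket

CLONE_NEWUTS = 0x04000000  # flag from <linux/sched.h>: give the caller its own UTS (hostname) namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare() failed; this needs root/CAP_SYS_ADMIN on Linux")

# Same kernel, same machine -- but this process now has a private view of the hostname.
socket.sethostname("carved-up-linux")
print("hostname inside the new namespace:", socket.gethostname())

# A shell on the same host still reports the original hostname. A container runtime
# stacks several such namespaces (PID, mount, network, ...) plus cgroups and an image
# that supplies "just enough of the OS" as the user space.
```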
>> So you mentioned hybrid cloud. Hybrid cloud, multi-cloud — those are the terms we hear a lot this week, and they've been up on stage. One way of putting it is thinking about different places of deploying, but you're really saying it doesn't matter where you deploy; there are layers, and especially OpenShift can take you to different clouds. Location doesn't matter anymore. Can you drill down on that a little bit? >> Absolutely. We took a bet — I mean, it sounds obvious now, it always does, right? We took a bet on hybrid cloud. I've been talking about it for six or seven years, and what it means is customers are going to have applications that run on bare metal, they're going to have some running as virtual machines, probably on VMware, they're going to maybe run their private clouds, maybe containers, maybe across multiple clouds. At the end of the day, it's Linux underneath all that. What customers don't want is five different operating environments, because every Linux is slightly different — they want one. So what we do with RHEL and with OpenShift is give you that abstraction layer for your application: code once, and you can move that app anywhere. I mean, the public clouds have brought a tremendous amount of innovation, and I don't want to say this in a derogatory way, but in some sense they're like a mainframe, because they have their stack all the way up — their products are their services. So you start up a serverless service, a Lambda — that's never leaving Amazon, never. And that's great in many cases, if that's okay for that app, but there are a lot of cases where you might want to run the app here one day and there the next day, so you really need an abstraction layer to ensure you have that portability, and that's why OpenShift and containers are so important. >> Right. When I hear things like de facto standard and abstraction layers, the bells go off: opportunity. Because that's where complexity can be reduced down, when you have good abstraction layers. We've been interviewing folks here, and some themes have come up about the sea change we're facing — this cloud-scale, new internet infrastructure going on globally — and two points: the TCP/IP moment, when that was networking and it disrupted DECnet and others, and then HTTP, which was all new capability — the web disrupted direct mail and other things, analog leaving — but it created the internetworking basis, right, Cisco and everything else. What's interesting with containers, and I want to get your reaction to this, is that I don't have to kill the old to bring in the new. I can do the new and then let the lifecycle of those workloads take a natural course. This is a good thing for enterprises: they don't have to rush in and do a rip and replace, they don't have to re-architect and hire new people at massive scale. Talk about that dynamic, because that seems to be what's happening. >> It's exactly what's happening. You know, we did a bunch of demos on stage this week, I think nine of them live. The coolest demo was the one where we showed — we actually took a Windows virtual machine, a Windows SQL-based virtual machine, from VMware, and with tools we brought that over to a KVM environment, which is a different format for the VM. We then used tools to slice it up into two containers, one being the app itself, the other being the SQL, and we deployed it out to OpenShift — and we could eventually have deployed it out to any public cloud. That's significant for two reasons. First of all, you're now seeing Kubernetes orchestrating VMs right beside containers, so you can kind of see where that's going. That's really interesting for the operators, because now they can whittle down some of that complexity. It's really interesting for the developers, because from their perspective they're going to be asked to bring these traditional virtual machines into containers. In the old world, they had to go to a VMware front end to do that, then come over here to an RHV or RHEL front end to do it. Now they can just bring their VM over with tools, work on it, split it up into containers, and deploy it. It's efficiency at its best. >> And the shift happens without any effort. >> Without any effort, really.
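The demo Cormier describes ends with the two containers carved out of that VM being handed to OpenShift to run. For a rough sense of what that last step looks like programmatically, here is a hedged sketch using the official Kubernetes Python client — a generic illustration, not the actual demo tooling; the image reference, port, and namespace are placeholders. In practice this is usually a `kubectl`/`oc apply` of a YAML manifest, which the client mirrors one-to-one:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at an OpenShift/Kubernetes cluster
apps = client.AppsV1Api()

# Declare one of the containers produced by splitting the VM as a Deployment.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="sql-tier"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "sql-tier"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "sql-tier"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="db",
                    image="registry.example.com/legacy/sql-tier:latest",  # placeholder image
                    ports=[client.V1ContainerPort(container_port=1433)],
                )
            ]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="demo", body=deployment)
print("Deployment 'sql-tier' created; the scheduler treats it like any other workload")
```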
>> How about the impact on the customers? Because this, to me, is the big money moment: it means an enterprise can actually progress and accelerate their digital transformation, or whatever they've got going on, to a new architecture, a new internet infrastructure. We hear things like network effects, decentralized storage with blockchain — new capabilities that aren't measured by the traditional, older stacks we've seen in e-commerce, DNS, and other things. So a shift's happening, and the shift is at cloud scale, a whole new way. What does that mean for customers? >> What it means for customers is two things that are important. The shift is happening, and you're getting tools and platforms to make that shift more seamless. And you know, I'd love to say it's all Red Hat engineers that are giving you this, but the reason why it's moving so fast is because it's open, so the innovation comes from anywhere. It's way too big a problem for any one customer to solve; we're just helping our customers consume it. That's one thing. But I think the other thing that's important is that not every application is going to be suited to go to a container-based application. So because it's all on that RHEL common layer, our customers can still have one operating environment and have compatibility as they do the shift, but still keep their business going over here — maybe forever; those apps may never come off bare metal, for example. >> Paul, I wanted to talk a little bit about Red Hat's scope inside IT. I love the connection between the container layer, which is just Linux, and also the standards layer. But now that we're up at that level — with OpenShift and with multi-cloud, global, huge-scale operations — there's a lot more involved, right? Cloud-level ops. You, now, at Red Hat are involved with process and culture, and you have a lot more that you're involved in helping IT with than just Linux and a connection back to the machine. So can you talk a little bit about what you're trying to do with the customer? >> It's a great point. When I started with the company 17 years ago, we weren't talking to CIOs. In fact, we were coming in the back door — the operations people were bringing Linux in the back door, and the CIOs didn't even know it was running in there. But now, as you said, CIOs are trying to figure out: how does public cloud fit into my IT environment, how do multiple public clouds fit in, how do containers fit in, what do I do with my older applications, where do I re-architect? That's at the CIO level now; they're having to re-architect for the next generation of computing. So we've had to build services around that. We have innovation labs where we bring our customers in, work with them, and help them map out where they're going. And for the first time — I've had many customers tell me this — with OpenShift it's the first time they've got their developers and their ops people in the same room, and we've facilitated that discussion, because no one's right; it's got to be one motion. So that's the interesting part for us: we've really moved up the chain in our customer base, because we're almost a consultative sale now, to help them get to the next generation. >> Talk about the enabling aspect of this, because I referenced TCP/IP and HTTP, but now if you go forward and say, okay, we're going to have this new environment, it's not just about Red Hat's Linux — it's about the operating system, which you guys obviously offer for free and then have services and software around. How is Linux, with the new capability of OpenShift and standards like Kubernetes with containers — how, in your opinion, is that enabling an opportunity for the ecosystem, new startups, and enterprises themselves? Because if this happens and continues to happen, new names are going to come out of the woodwork, new startups are going to happen. >> You see it every day. I mean, you wouldn't do a startup today, software-wise, that wasn't based on Linux — and that's why all the innovation today is based on Linux.
You know, one of the things we released last week at KubeCon — I don't know if you saw it or not — is a Kubernetes SDK; it came out of work with the CoreOS guys, and we put it out into the community. It's really an SDK for ISVs and software vendors to build into the APIs of Kubernetes in an open way, so that once they get out into the commercial world, they're ready. That's how significant we all think Kubernetes is going to be — we think that's where the services are going to hang in the infrastructure. But having said that, I think it also tells you the impact these open technologies are having on the future. >> I want to get into CoreOS in a minute, but first I want to ask you about the white spaces. For someone who's in charge of the troops inside Red Hat products and technologies, where are the white-space opportunities where people can dig in and build out innovation around this major shift, this wave you guys are on? Where's the opportunity for the channel partners, the integrators, the global SIs, developers, anyone — where are the key areas? >> I mean, with our platforms of OpenShift and OpenStack, we have certified entry points via APIs in storage, networking, and management. We've got hybrid management, but we certainly don't think we're going to do everything in management by any stretch, so we have a set of APIs for management partners to plug in — and by the way, what I tell my management R&D folks is: no hidden APIs; the same APIs we use, they use. So storage is another area — new storage solutions — networking, and certainly AI is one of the areas. One of the things we showcased here was AI permeating through our entire product line; I don't know if you saw the face-recognition demo out there, but it was pretty cool. And even if you want to consume that AI through one of the cloud providers, we can pass you straight through from OpenShift to consume it that way as well. >> On automation, I want to get your thoughts on something we talked about a few days ago here on theCUBE. Automation is great, so let's give an example: I'm automating a service — say it's Kube, with Kubernetes and containers — and there's a memory leak, right, and it reboots, but it automates, so I don't know. You've got to have a new level of instrumentation down at the code level. How do you see that playing out? Because now we've got to be smarter about what's working and not working — I might never know; it just reboots intermittently and gives me some mystery. Was it a memory leak? Could be something else. >> That's one of the places where we're using AI. Our first stint with AI came out of our support group. We've been supporting Linux and open source for 25 years, so we've got a massive database of what the failures were and what the fixes were. We started using AI in the support group to point our reps at a particular article based on the symptoms they were hearing from a customer, and we realized we had about an 80% hit rate on getting our reps to the right article. So now we've built that into the products. We use that AI, for example, in OpenShift.io, one of our developer platforms: a developer trying to link in a library — we can tell them, you know what, there's a newer version of that library, or that library has a security flaw at this line of code, maybe you want to consider using another one. But it's from our years and years of doing this that we're building that database. AI is only as good as the data that you feed it, so you have to have a certain level of granularity in how you do it.
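Cormier's support example — matching the symptoms a customer reports against years of known failures and fixes — is, at its simplest, a text-similarity problem. A generic sketch of that idea with scikit-learn, using toy knowledge-base articles; this illustrates symptom-to-article matching in general, not Red Hat's actual system or data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy knowledge base; the real thing would be decades of support cases and their fixes.
articles = {
    "KB-101": "pod restarts intermittently, OOMKilled, container memory limit exceeded",
    "KB-202": "TLS handshake failures after certificate rotation on the router",
    "KB-303": "etcd disk latency causing apiserver request timeouts",
}

symptom = "my container keeps rebooting and the node reports out of memory"

# Vectorize the articles, then project the reported symptom into the same space.
vectorizer = TfidfVectorizer()
article_vecs = vectorizer.fit_transform(list(articles.values()))
symptom_vec = vectorizer.transform([symptom])

# Rank articles by cosine similarity to the symptom text.
scores = cosine_similarity(symptom_vec, article_vecs).ravel()
for kb_id, score in sorted(zip(articles, scores), key=lambda pair: -pair[1]):
    print(f"{kb_id}: similarity {score:.2f}")
```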
And then AI is also a reason why all our services are now on OpenShift, because you're absolutely right: if I've got a raw JBoss service running on raw Amazon, I can't instrument underneath it, because Amazon's got that layer closed. If I have OpenShift there — and the infrastructure is OpenShift, even running in Amazon or anywhere — we can now instrument it to look at the things we need to look at to recognize an event, a leak, or whatever. >> Paul, talk about the journey with CoreOS. Obviously we've been super excited by that; we've been following CoreOS from the beginning — great technical team, pure open-source guys. And in that container part of the evolution, everyone at the time was trying to force a business model, and it's really hard to force a business model on something that's too early or might not even be relevant to build a business around — it might be a feature, not a company, kind of thing. So you guys put a big price tag on them, a sizable chunk of cash. How did it all play out? Was it just, hey, wow, we like these guys, they're super technical, a meeting of the minds — and how has it fit in from a product and technology standpoint? >> A little of all of that. Of course, the benefit of having open source development in your DNA is that we knew them all, right? We knew how good they were, because our guys work with them every day. When they decided early on, like us, to go to Kubernetes, they became a big part of the Kubernetes community. In our model, from day one, you can't be an open-source provider if you're not strong in the upstream community, because how can you affect what your customers are asking you to do if you can't affect upstream? They were big in the upstream, big in Kubernetes. And they had done some interesting things we hadn't gotten to: they did a lot of the automation, they were doing over-the-air updates of the container platform, which we hadn't gotten to yet, and they had a really good following in the community. So we decided — you know, we paid a hefty price, but at this stage of the game we really feel we took an early bet on Kubernetes, we really feel that's going to be the future in containers, and if there's going to be a place where you pay maybe a little more, this is the place. >> Well, Paul, I think another example is Ansible, a year or two back, right? That's remained a huge success, and I can say you haven't messed it up, right? It's been powerful. Most acquisitions, you know, end in tears, so it seems like Red Hat is good at this kind of open-source acquisition. >> We get to interview them for two years before we bring them in, based on how we work together in the community. But, you know, we're very careful about where we're bringing in people. I hate to say the word M&A, or acquisition — I just hate that word — because we're just joining forces here. It just took a big check to do it. >> Yeah, and you guys have the business model nailed down, which was good for CoreOS at the time: they didn't have to worry about having to figure out a go-to-market and monetize an upstream presence, which was very valuable, and then try to shoehorn a business model around it, which is difficult.
Companies have died doing it. >> Yeah, I mean, I can't think of many that have been that successful at it. It's a hard thing to do. Look, we've had a great advantage: we've had RHEL in the market for 16 years, and it built a base for us — I'm not going to try to kid you on that — and it's the Linux base that everything's getting built around. So we just keep those principles we've used for the last 16 years; we stay true to them. We could not do a proprietary piece of software now if our lives depended on it. That's the DNA. >> Well, how do you handle the growth? Hiring new people — that's a challenge. We've been talking to folks on your team and across Red Hat about hiring people, and you've got to maintain that ecosystem, you have to maintain that DNA. How do you guys do that? Is there, like, a special three-day, you know, hypnotic class — this is how we do it? >> I have to tell you, it's a bit easier on the engineering side, because it's typically engineers that have been working in the community, et cetera. But on our business unit side and other pieces, where people have been coming out of big companies and are used to a hierarchical environment, we really take that into account in the interview process. I'll be frank, not everyone makes it through. I mean, at Red Hat, titles really don't matter. >> Totally — as engineering all should be, by the way; biased opinion. Okay, so great to have you on, thanks for spending the time, I know you're super busy. A couple of questions before we wrap up. What are you most proud of as you look back now? Again, it's almost hindsight 20/20 — these calls look obvious now — but you know, I interviewed Diane at OpenStack many years ago; there was a lot of heat taken for that Kubernetes move, and it wasn't obvious to a lot of people at that time, the Kubernetes bet. You guys make good bets. Looking back, what are you most proud of, what's most significant, or what do you think people should know? Those were seminal moments in Red Hat history, those decisions. Take us through some of the key milestones, in your opinion. >> There are probably three or four. The first one was going to RHEL, because you have to understand what we did: we were a completely retail company when I joined, with 50 million dollars in revenue, losing two hundred. We had a retail product, and we stopped it to go to RHEL — literally stopped the product, bet the company, and moved. The second one was JBoss: we were about 300 million in revenue, and we paid 425 million for JBoss; that was a big one. The third one you might not recognize: moving from Xen to KVM. Xen was going off down the VC world, trying to figure out how to monetize as a company; somebody in Israel came up with a better model with KVM. The rest of the industry was on Xen, and we said, as a single player, we're going this way. That was a big bet that I don't even know we recognized the significance of at the time. And then Kubernetes — as I said, we pivoted on that in 2012 or so and put a lot of R&D money into it. >> What made you go to Kubernetes, just curious? Was it the success of Borg, how software was being done at Google? Was it the role of containers? Did you guys have the foresight at that time, saying containers are going to have a critical role, we don't want to screw that up, we can bring this in — were you looking at it from a stack perspective, or was it more of a future scenario?
>> It was a lot of its heritage out of Borg, and knowing the talent at Google in engineering. We had many, many discussions, as we continually do with those guys, so I think it was mostly a technical decision, and what we said at that point, putting our weight behind it, was: we just need to make the community successful. We quickly figured that with us and Google, it was a fairly good bet — not a sure bet, but a good bet — and that's what made us go there. It was really a technology decision. >> Paul, final question as we wrap up, for the folks watching who couldn't make it here to San Francisco for Red Hat Summit 2018: what's the big takeaway? As president of Products and Technologies, what's the North Star for you and your team, what are you guys putting as a priority, what's the focus? >> I think the takeaway from here is a couple of things that are really solid. The future is going to be open source, period, end of story, especially in the infrastructure and application development world. Hybrid cloud is the model — it's the only practical way; not every application is moving to one public cloud tomorrow. And the third thing is, for Red Hat, that's the architecture we build around every day. It guides what products we build, what M&A we do — everything we do is around that model, and OpenShift is a centerpiece of all the pieces. >> Well, thank you for coming on — Paul Cormier, President of Products and Technologies at Red Hat. I'm John Furrier with John Troyer. Stay with us for more live coverage; it's our third day of three days of live coverage here, out in the open, like open source — we're doing our share, bringing you the content you need. We'll be right back with more after this short break.

Published Date : May 12 2018

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

| Entity | Category | Confidence |
| --- | --- | --- |
| Paul Comey | PERSON | 0.99+ |
| 2011 | DATE | 0.99+ |
| 2012 | DATE | 0.99+ |
| Paul Cormier | PERSON | 0.99+ |
| Red Hat | ORGANIZATION | 0.99+ |
| Diane | PERSON | 0.99+ |
| 25 years | QUANTITY | 0.99+ |
| Israel | LOCATION | 0.99+ |
| Paul | PERSON | 0.99+ |
| 425 million | QUANTITY | 0.99+ |
| two years | QUANTITY | 0.99+ |
| John Moyer | PERSON | 0.99+ |
| Amazon | ORGANIZATION | 0.99+ |
| three days | QUANTITY | 0.99+ |
| 16 years | QUANTITY | 0.99+ |
| six | QUANTITY | 0.99+ |
| Linux | TITLE | 0.99+ |
| John Troy | PERSON | 0.99+ |
| John | PERSON | 0.99+ |
| seven years | QUANTITY | 0.99+ |
| San Francisco | LOCATION | 0.99+ |
| 50 million dollars | QUANTITY | 0.99+ |
| last week | DATE | 0.99+ |
| two hundred | QUANTITY | 0.99+ |
| IBM | ORGANIZATION | 0.99+ |
| Windows | TITLE | 0.99+ |
| three | QUANTITY | 0.99+ |
| two things | QUANTITY | 0.98+ |
| first time | QUANTITY | 0.98+ |
| Cisco | ORGANIZATION | 0.98+ |
| first time | QUANTITY | 0.98+ |
| Google | ORGANIZATION | 0.98+ |
| SQL | TITLE | 0.98+ |
| one | QUANTITY | 0.98+ |
| third one | QUANTITY | 0.98+ |
| one thing | QUANTITY | 0.98+ |
| about 300 million | QUANTITY | 0.97+ |
| first | QUANTITY | 0.97+ |
| first one | QUANTITY | 0.97+ |
| last night | DATE | 0.97+ |
| OpenShift | TITLE | 0.97+ |
| two | QUANTITY | 0.97+ |
| single player | QUANTITY | 0.97+ |
| a year | QUANTITY | 0.96+ |
| Red Hat | TITLE | 0.96+ |
| today | DATE | 0.96+ |
| this week | DATE | 0.96+ |
| first stint | QUANTITY | 0.96+ |
| two reasons | QUANTITY | 0.96+ |
| 80% | QUANTITY | 0.96+ |
| second one | QUANTITY | 0.96+ |
| Copenhagen | LOCATION | 0.96+ |
| third thing | QUANTITY | 0.95+ |
| 17 years ago | DATE | 0.95+ |
| this week | DATE | 0.95+ |
| four | QUANTITY | 0.95+ |
| Red Hat summit 2018 | EVENT | 0.95+ |
| JBoss | TITLE | 0.94+ |
| two points | QUANTITY | 0.94+ |
| San Francisco | LOCATION | 0.94+ |
| OpenStack | TITLE | 0.94+ |
| third day | QUANTITY | 0.94+ |
| red hat | ORGANIZATION | 0.94+ |

Jody Rebak & Alex Shih, CryptoKitties | Polycon 2018


 

(upbeat music) >> Announcer: Live from Nassau in the Bahamas, it's the Cube, covering Polycon '18, brought to you by Polymath. >> Welcome back to exclusive Cube coverage in the Bahamas for Polycon '18. This is the show about cryptocurrency, token economics, the future of work, and the economies of digital nations. And the Cube's here for two days of wall-to-wall coverage. And we're excited to have the CryptoKitties team here. The phenomenon that took over the blockchain really started to show the value of smart contracts in a really cool, playful, fun way — a really important story. Jody Rebak and Alex Shih, welcome to the Cube. >> Hello, welcome! >> Thanks for having us. >> So, love the shirts — I'd love to get one for myself someday, so if you've got any extra shirts, I'll take one. >> We do, we do. >> Okay, great. >> We'll give you one. >> Love to have one. Okay, so CryptoKitties, for the folks that have been living under a rock for the past year, has been a real phenomenon where people were actually, you know, creating--well, you describe it. >> Okay, so CryptoKitties: the purpose is to bring the first billion people to the blockchain, and CryptoKitties was the first endeavor in order to do that. So it's really a gaming and collectible game on the blockchain. It's kind of straightforward. You buy a kitty, and with your kitty you can sell it, breed it — there's a whole marketplace. You know, since the initial launch, we've had a bunch of special kitties get released. We recently launched in China, so there's a ton of kitties coming down the pipeline, so breed your kitties. >> So the fun game turned into quite an interesting experiment, because people love fun around tech, when tech is kind of boring sometimes. You know, blockchain — what does that actually mean? What happens under the hood? So you guys kind of brought a fun piece of it. Was it by design, the growth? Was it more like, was it what you expected? Take us through some of the inside-the-ropes at the company — like, was there a moment where you said, "Oh my god, can you believe what's happening? Like, this is really taking off." Or was it planned? Take us through that. >> Sure, so I think inherently, the blockchain technology-- like, blockchain is not something that's inherently easy to explain, so we wanted to do that in a fun, simple way, so that people could learn about smart contracts, and they could learn about all the benefits of being decentralized and sort of putting trust on the network. So that was our initial, sort of, goal. We have an amazing team behind us, so the creative team just said, like, "We want to bring kitties to the blockchain." So the group with Axiom Zen and CryptoKitties has been working on blockchain technology since 2014. CryptoKitties was our first public project. And I think, you know, the team came together very quickly. I think we built this in four to six months. You know, I think we were all surprised with the success of it, and with bringing down the Ethereum network, slowing it down. So I remember the team launched, and I woke up one morning with hundreds of emails from media outlets just saying, "we want to do a story on you." It was really, really exciting, and the team worked really hard, so we're really proud of it. >> It's one of those, it is kind of a pinch-me moment, because you're like, "oh my god, this is like highly successful."
And that's really fun, and I think it's a great example of how you can use this fun technology in a way that people can relate to. But it also brought up some technical challenges, because, I think, at the time, and even now, it takes a lot — it's probably the number one use case on the Ethereum blockchain. I mean, I don't know. Is there another use case that's actually as pervasive as CryptoKitties? >> You know what? I'm not sure, but I think one of the really interesting things for us was we learned a lot about scalability, and it's been interesting to see, sort of, other teams reach out to us, and to share our learnings, because, I think, in order to continue sort of building the ecosystem, we really need to share learnings and, you know, not hoard information. So, you know, we're definitely looking at scaling. I think one way we've addressed it is sort of building a lot off-chain to speed up transactions. But I think there's a lot to learn, and it's going to take, sort of, the ecosystem working together and sharing ideas and knowledge to-- >> What have you learned? What are some of the things you guys learned from the experience, both on the kind of business side, integration side, developer side, to some of the really hard-core, you know, tech infrastructure pieces? What are the learnings? >> I think the reality is neither of us are technical people, so it would probably do a disservice to try and speak to that. >> Is there one, is there a couple of things that jumped out at you — was it performance, was it just--? >> I think because, you know, we're only starting to see applications built on the blockchain, you don't know what you don't know. And the team behind CryptoKitties and Axiom Zen — we've built a number of products, we have a bunch of projects that we've worked on, and we have sort of our developer process. But when you're working with a new technology, and you don't know what you're dealing with, it's hard to anticipate. And I think following best practices, leveraging, you know, other teams, and working with the community — that's, I think, what we learned most: you need to, you know, rely on the community and share learnings. >> Take a minute to talk about what you guys do there, each of you, what your role is at CryptoKitties, what you're focused on, and what's going on in the company--without giving away all of the trade secrets, of course. >> Yeah, I know, there's a lot we can't talk about. >> I mean, what's your role? What are you guys overseeing? What are you building? >> Perhaps we should explain a bit about Axiom Zen first, and kind of the setup with CryptoKitties, if you want to take a stab at that. >> Sure, so Axiom Zen is a venture studio. We've been around for five years. So, we have a part of the business that's focused on consumer blockchain technologies. We have quite a big enterprise SaaS business. So one of our companies is called ZenHub, if you've ever heard of them — ZenHub — and then we have sort of a joint-venture part of the business where we work with companies to build and launch amazing and impactful tech companies. So CryptoKitties was born out of our consumer blockchain, and specifically our foundry. So my role with Axiom Zen is, I'm Chief of Staff, and I work with the executive team on strategy, I work with the operations team, a lot of special projects — and, you know, with tech companies there are always special initiatives coming up, and I really love to focus on that. >> And you've got to be always learning, right?
I mean, like you said, you're trying new things. >> Yeah. >> So it's kind of like a studio, meets venture incubator, meets R&D, meets builder culture. >> Everything, yeah. And I think, sort of — I'll let Alex speak to his role, but one of the reasons — our team is 80 people. CryptoKitties' team is 30, but everyone who comes and joins the team is an entrepreneur at heart, so I think that's why we've been able to accomplish so much. >> Alex, your role. >> So, I recently joined the company, and I came on as CFO, so-- >> So you run all the numbers, man — keeping those gas prices under twenty. So, I mean, you've got to keep the trains running, making sure the lights are on. >> Sure. >> 80 people total, including the 30 in CryptoKitties? >> Yeah, inclusive of the 30. >> Okay, so, what's the outlook? What's the objective of the firm? You guys continue to experiment? Do new, more projects? >> Yeah, I think — so, one of ours: we wrote and published the ERC721 token standard. Well, we didn't do it, but someone did — Dieter Shirley did, within the team — so I think we want to continue to focus on that. You know, we have a number of projects in the pipeline that we're not able to talk about publicly, but there are definitely exciting things happening. Continuing to build CryptoKitties — I mentioned we did a China launch — so we're continuing to expand and just build. So yeah, we've got a lot of exciting things to do. >> China must be really big. (Alex laughs) >> Yeah. >> Must be huge. >> Yeah, one of-- >> Asia must be hot. >> Yeah, well, I think sort of-- >> Mobile phone usage, just incredible. >> They're leaders in the collectible gaming space, so it naturally made sense for us to go there. >> Great, well, hey, congratulations. What's going to be the outlook for CryptoKitties? You mentioned the marketplace. What are some of the cool things that are going on, that you can point out, that people might not know? >> Do you want to talk about our China launch, and the special kitties, or? >> I think you're doing a great job, keep going. >> (laughs) Okay, so you know, we recently launched in China. It's Chinese New Year, so we have a bunch of sort of special kitties being released. We just had a company call earlier today, where we saw a preview of some of the kitties that are being bred. There's a dragon kitty, there's a dog kitty — so, I guess, look out for the cool kitties, and build a story with your kitties. >> I think it's just been great stuff. Again, like I said, at Sundance we saw the VR/AR world starting to really go down this road of new creative digital work, and I think this is a great example of where I see it going, which is, you know, taking this new decentralized infrastructure and creating new experiences. So congratulations, CryptoKitties. >> Thank you. >> Thank you. >> Alright, the Cube, bringing new crypto experiences to our footage. Again, we're going to do a lot of blockchain and cryptocurrency shows. This is the first one of the year — we've been covering bitcoin since 2010, so we're proud of that — and look for the Cube at events. But here at the Bahamas, we'll be here for a whole other day tomorrow, so keep on watching. We've got a few more interviews to wrap up day one and some big guests coming. Be right back after this short break. (techno music)
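Earlier in the segment Alex and Jody mention that the team wrote and published ERC-721, the non-fungible token standard each CryptoKitty lives on, with Dieter Shirley as an author. For a concrete sense of what on-chain ownership means, here is a minimal read-only sketch using web3.py — a generic ERC-721 lookup, not CryptoKitties' own code, assuming web3.py v6; the RPC endpoint, contract address, and token ID are placeholders:

```python
from web3 import Web3

RPC_URL = "https://example-ethereum-rpc.invalid"          # placeholder JSON-RPC endpoint
CONTRACT = "0x0000000000000000000000000000000000000000"   # placeholder ERC-721 contract address
TOKEN_ID = 1                                               # placeholder token id

# ownerOf() is part of the standard ERC-721 interface, so this minimal ABI
# works against any compliant contract.
ERC721_ABI = [{
    "name": "ownerOf", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "owner", "type": "address"}],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
nft = w3.eth.contract(address=Web3.to_checksum_address(CONTRACT), abi=ERC721_ABI)

# The contract itself is the registry: whoever this call returns owns the kitty/token.
print(f"token {TOKEN_ID} is owned by {nft.functions.ownerOf(TOKEN_ID).call()}")
```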

Published Date : Mar 2 2018

SUMMARY :

John Furrier interviews Jody Rebak and Alex Shih of CryptoKitties at Polycon '18 in the Bahamas. They describe CryptoKitties as a collectible game built to bring the first billion people to the blockchain by teaching smart contracts in a fun, simple way, and recall a launch that slowed the Ethereum network and drew hundreds of media requests. Born out of venture studio Axiom Zen, whose roughly 80-person team includes about 30 people on CryptoKitties, the project is expanding with a China launch, special kitties for Chinese New Year, and continued work around the ERC-721 standard the team published. Rebak, Axiom Zen's Chief of Staff, and Shih, CryptoKitties' CFO, stress sharing scalability learnings with the broader ecosystem rather than hoarding them.

SENTIMENT ANALYSIS :

ENTITIES

| Entity | Category | Confidence |
| --- | --- | --- |
| Alex Shih | PERSON | 0.99+ |
| Jody Rebak | PERSON | 0.99+ |
| China | LOCATION | 0.99+ |
| Axiom Zen | ORGANIZATION | 0.99+ |
| Bahamas | LOCATION | 0.99+ |
| 30 | QUANTITY | 0.99+ |
| Alex | PERSON | 0.99+ |
| ZenHub | ORGANIZATION | 0.99+ |
| four | QUANTITY | 0.99+ |
| two days | QUANTITY | 0.99+ |
| CryptoKitties | ORGANIZATION | 0.99+ |
| CryptoKitties' | ORGANIZATION | 0.99+ |
| 80 people | QUANTITY | 0.99+ |
| Dieter Shirley | PERSON | 0.99+ |
| six months | QUANTITY | 0.99+ |
| tomorrow | DATE | 0.99+ |
| less than twenty | QUANTITY | 0.99+ |
| 2014 | DATE | 0.99+ |
| one | QUANTITY | 0.99+ |
| first endeavor | QUANTITY | 0.99+ |
| day one | QUANTITY | 0.98+ |
| Polymath | ORGANIZATION | 0.98+ |
| five years | QUANTITY | 0.98+ |
| Nassau | LOCATION | 0.98+ |
| one morning | QUANTITY | 0.98+ |
| first billion people | QUANTITY | 0.98+ |
| hundreds of emails | QUANTITY | 0.97+ |
| each | QUANTITY | 0.97+ |
| 2010 | DATE | 0.97+ |
| first | QUANTITY | 0.95+ |
| first one | QUANTITY | 0.94+ |
| Polycon | EVENT | 0.92+ |
| Asia | LOCATION | 0.91+ |
| both | QUANTITY | 0.9+ |
| first public project | QUANTITY | 0.89+ |
| Chinese New Year | EVENT | 0.86+ |
| Cube | ORGANIZATION | 0.86+ |
| earlier today | DATE | 0.85+ |
| past year | DATE | 0.83+ |
| ERC721 | OTHER | 0.81+ |
| a ton of kitties | QUANTITY | 0.78+ |
| Sundance | EVENT | 0.7+ |
| Cube | COMMERCIAL_ITEM | 0.68+ |
| more interviews | QUANTITY | 0.66+ |
| couple | QUANTITY | 0.54+ |
| 2018 | DATE | 0.54+ |
| Ethereum | TITLE | 0.53+ |
| SAAS | ORGANIZATION | 0.51+ |
| Ethereum | ORGANIZATION | 0.45+ |
| Polycon | ORGANIZATION | 0.44+ |
| Cube | TITLE | 0.42+ |
| '18 | EVENT | 0.34+ |
| '18 | TITLE | 0.33+ |