
Search Results for HPA:

Patrick Bergstrom & Yasmin Rajabi | KubeCon + CloudNativeCon NA 2022


 

>>Good morning and welcome back to theCUBE, where we are excited to be broadcasting live all week from Detroit, Michigan at KubeCon + CloudNativeCon. Depending on who you're asking, Lisa, it's day two and things are buzzing. How are you feeling? >>Good, excited. Ready for day two, ready to have more great conversations, to see how this community is expanding, how it's evolving, and how it's really supporting itself. >>Yeah. This is a very supportive community, something we talked a lot about. And speaking of community, we've got some very bold and brave folks over here. We've got the CTO and the head of product from StormForge, and they are on a mission to automate Kubernetes. Now, automatic and Kubernetes are not words that go in the same sentence very often, so please welcome Patrick and Yasmin. Thank you both for being here. Hello. How you doing? >>Thanks for having us. >>Thanks for having us. >>Talk about what you guys are doing, 'cause as you said, Kubernetes autoscaling is anything but automatic. >>Yeah. >>What are some of the challenges? How do you help >>Eliminate this? Yeah, so the mission at StormForge is primarily automatic resource configuration and optimization, essentially. We started as a machine learning company first, and it's kind of an interesting story 'cause we're one of those startups that has pivoted a few times. And so we were running our machine learning workloads... >>Most have, I think. >>Right? Yeah. We started out running our machine learning workloads and moving them into Kubernetes, and then we weren't quite sure how to correctly adjust and size our containers. And so our ML team, we've got three PhDs in applied mathematics, they said, Well, hang on, we could write an algorithm for that. And so they did. >>Oh, I love this. >>Yeah. And then we said, Well, holy cow, that's actually really useful. I wonder if other people would like that. And that's kind of where we got our start. >>You solved your own problem and then you built a business >>Around it. Yeah, exactly. >>That is fantastic. Is that driving product development at StormForge still, that kind of attitude? >>I mean, that kind of attitude definitely drives product development, but we're balancing that with what the users are, the challenges that they have, especially at large scale. We deal with a lot of large enterprises, and for us as a startup we can relate to the problems that come with Kubernetes when you're trying to scale it. But when you're talking about the scale of some of these larger enterprises, it's just a different mentality. So we're trying to balance how we take that input into how we build our product. >>Talk about that, the end user input and how you're taking that in, because of course it's only going to be more of a symbiotic relationship when that customer feedback is taken and >>Acted on. Yeah, totally. And for us, because we use machine learning, it's a lot of building confidence with our users. So making sure that they understand how we look at the data, how we come up with the recommendations, and actually deploy those changes in their environment. There's a lot of trust that needs to be built there. So being able to go back to our users and say, Okay, we're presenting you this type of data, give us your feedback, and building it alongside them has helped a lot in these >>Relationships. Absolutely. You said the word trust, and that's something that we talk about at every >>Show.
I was gonna jump on that too. It's >>Not, Yeah, it's not a buzzword. It shouldn't be. It really should be, I wanna say, lived and breathed, but that's probably grammatically incorrect. >>We're not a grammar show. It's okay, darling. Yeah, thank >>You. It should be truly embodied. >>Yeah. And I think it's not even unique to just what we do, but across tech in general, right? Like when I talk about SRE and building SRE teams, one of the things I mention is you have to build that trust first. And with machine learning, I think it can be really difficult too, for a couple different reasons. Like one, it tends to be a black box if it's actually true machine learning. >>Totally. >>Which ours is. And the other piece we run into is, I was an executive at UnitedHealth Group before I joined StormForge, and I would get companies that would come to me and try to sell me machine learning, and I would kind of look at it and say, Well, no, that's just a basic decision tree. Or, that's a super basic Holt-Winters forecast, right? That's not actually machine learning. And that's one of the things that we actually find ourselves kind of battling a little bit when we talk about what we do in building that trust. >>Talk a little bit about the latest release, as you guys had a very active September. Here we are, and towards the, I think, end of October. What are some of the new things that have come out? New integrations, new partnerships. Give us a scoop on that. >>Yeah, well, I guess I'll start and then I'll probably hand it over to you. But the big thing for us is, we talked about automating Kubernetes in the very beginning, right? Like, Kubernetes has got a VPA... >>A wild sentence anyway. Yeah, yeah. >>It >>Has. We're not gonna get over it the whole show. Yeah. >>It has a VPA built in, it has an HPA built in, and when you look at the data, and even when you read the documentation from Google, it explicitly says never the two should meet, right? Because you'll end up thrashing and they'll fight each other. Well, the big release we just announced is, with our machine learning, we can now do both. And so we vertically scale your pods to the correct size. Yeah. >>(Indistinct) I love that. >>Yeah, we can scale your pods to the correct size and still allow you to enable the HPA, and we'll make recommendations for your scaling points and your thresholds on the HPA as well, so that they can work together to truly maximize your efficiency without sacrificing the performance and reliability of the applications that you're running. That >>Sounds like a massive differentiator for >>StormForge, which I would say it is. Yeah. I think, as far as I know, we're the first in the industry that can do this. Yeah. >>Very singularity vibes too. You know, the machines are learning, teaching themselves and doing it all automatically. Yep. Gets me very >>Excited. >>Yeah, absolutely. And from a customer demand perspective, what's the feedback been? Yeah, it's been a few >>Weeks. Yeah, it's been really great actually. And a lot of why we went down this path was user driven, because they're doing horizontal scale and they want to be able to vertically size as they're scaling. So if you put yourself in the shoes of someone that's configuring Kubernetes, you're usually guessing on what you're setting your CPU requests and limits to. But horizontal scale makes sense.
You're either adding more things or removing more things. And so once they actually are scaled out as a large environment and they have to rethink, how am I gonna resize this now? It's just not possible. It's so many thousands of settings across all the different environments, and you're only thinking about CPU and memory, you're not thinking about a lot of things. But once you scale that out, it's a big challenge. So they came to us, 'cause we were doing vertical scaling before and now we enable vertical and horizontal, and they said, I love what you're doing about right sizing, but we wanna be able to do this while also horizontally scaling. And so the way that our software works is we give you the recommendations for what the settings should be and then allow Kubernetes to continue to add and remove replicas as needed. So it's not like we're going in and making changes to Kubernetes, but we make changes to the configuration settings so that it's the most optimal from a resource perspective. >>Efficiency has been a real big theme of the show. Yeah. And it's clear that that's a focus for you. Everyone here wants to do more, faster, of course. And innovation, that's the thing, and to do that sometimes we need partners. You just announced an integration with Datadog. Tell us about that. Yeah, >>Absolutely. So the way our platform works is we need data, of course, right? So they're a great partner for us, and we use them both as an input and an output. So we pull in metrics from Datadog to provide recommendations, and we'll actually display all those within the Datadog portal, 'cause we have a lot of users that are like, Look, Datadog's my single pane of glass, and I hate using that word, but they get all their insights there. They can see their recommendations and then actually go deploy those, whether they wanna automatically have the recommendations deployed or go in and actually push a button. >>So give me an example of a customer that is using the new release and some of the business outcomes they're achieving. I imagine one of the things that you're enabling is just closing that Kubernetes skills gap. But from a business-level perspective, how are they gaining competitive advantages, to be able to get products to market faster, for example? >>Yeah, so one of the customers that was actually part of our press release and launch and spoke about us at a webinar, they are a SaaS product and deal with really bursty workloads. And so their cloud costs have been growing 40% year over year, and their platform engineering team is basically enabled to provide the automation for developers in their environment, but also to reduce those costs. So it's that trade-off of resiliency and cost performance. And they came to us and said, Look, we know we're over-provisioned, but we don't know how to tackle that problem without throwing tons of humans at the problem. And so we worked with them and, just on a single app, found 60% savings, and we're working now to kind of deploy that across their entire production workload. But that allows them to then go back and get more out of the budget that they already have, and they can kind of reallocate that in other areas. >>Right? So there can be top-line and bottom >>Line impact. Yeah. And I think there's some really direct impact to the carbon emissions of an organization as well. That's a good point. When you can reduce your compute consumption by 60%.
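The mechanics Yasmin describes, applying a right-sizing recommendation to a workload's requests and limits while Kubernetes itself keeps adding and removing replicas, can be pictured with a minimal sketch like the one below. It is illustrative only, not StormForge's actual API: the namespace, deployment, container name, and recommendation values are hypothetical, and it assumes the official Kubernetes Python client, a reachable cluster, and an HPA that shares the deployment's name.

```python
# Illustrative only: applies a (hypothetical) right-sizing recommendation to a
# Deployment's resource requests/limits, leaving replica management to the HPA.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with cluster access

NAMESPACE = "payments"       # hypothetical
DEPLOYMENT = "checkout-api"  # hypothetical
CONTAINER = "app"            # hypothetical

# Pretend this came from an ML recommender (e.g., observed p99 usage plus headroom).
recommendation = {
    "requests": {"cpu": "350m", "memory": "512Mi"},
    "limits":   {"cpu": "700m", "memory": "1Gi"},
}

# Strategic-merge patch: only the named container's resources change.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": CONTAINER, "resources": recommendation}
                ]
            }
        }
    }
}

apps = client.AppsV1Api()
apps.patch_namespaced_deployment(DEPLOYMENT, NAMESPACE, patch)

# The HPA is only read, never modified: horizontal scaling stays in Kubernetes' hands.
# Assumes the HPA is named after the deployment.
hpa = client.AutoscalingV1Api().read_namespaced_horizontal_pod_autoscaler(
    DEPLOYMENT, NAMESPACE
)
print(f"HPA still owns replicas: {hpa.spec.min_replicas}-{hpa.spec.max_replicas}, "
      f"target CPU {hpa.spec.target_cpu_utilization_percentage}%")
```

Because the patch touches the pod template, Kubernetes rolls the pods to pick up the new requests and limits, while the HPA's replica range and utilization target are left untouched; that split of responsibilities is the "both at once" behavior discussed above.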
>>I love this. We haven't talked about this at all during the show. Yeah. And I'm really glad that you brought this up. All of the things that power this use energy. Yeah. >>What is it, like seven to 8% of all electricity in the world is consumed by data centers? Like, it's crazy. Yeah. And so that's wild. So being able to make a reduction in impact there too, especially with organizations that are trying to sign green pledges and everything else. >>It's hard. Yeah. ESG initiatives are huge. >>Absolutely. >>It's >>A whole lot. A lot of companies have ESG initiatives where they can't even go out and do an RFP with a business, right, if they don't have an actual, active, impactful ESG program. Yes. Yeah. >>And the RFPs that we have to fill out, we have to tell them how we'll help. >>Yeah. Yes. I mean, I was really struck when I looked on your website and I saw 54% average cost reduction for your cloud operations. I hadn't even thought about it from a power perspective. Yeah. I mean, imagine if we cut that to 3% of the world's power grid. That is very compelling. Speaking of compelling and exciting future things, talk to us about what's next. What's got you pumped for 2023 and what lies >>Ahead? Oh man. Well, that seems like a product conversation for sure. >>Well, we're super excited about extending what we do to other platforms, other metrics. So we optimize a lot right now around CPU and memory, but we can also give people insights into, you know, limiting OOM kills, limiting CPU throttling, so extending the metrics. And when you look at HPA and horizontal scale today, most of it is done with CPU, but there are some organizations out there that are scaling on custom metrics. So being able to take in more data to provide more recommendations and kind of extend what we can do from an optimization standpoint. >>That's cool. And what has you most excited on the show floor? Anything that you've seen? Any keynotes? >>Well, I haven't had a lot of time to go to the keynotes, unfortunately. >>Well, I'm shocked, you've been busy or something with your time here, right? >>I can't imagine why. But no, it's really interesting to see all the vendors that are popping up around Kubernetes. The focus specifically on security is always something that's really interesting to me. And automating CI/CD, and how they continue to dive into that automation. DevSecOps continues to be a big thing for a lot of organizations. Yeah. >>I do think it's interesting when we marry... Were you guys here last year? >>I was not here. >>No. So at the smaller version of this in Los Angeles, I was really struck because there was still a conversation of whether or not we were all in on Kubernetes as kind of a community and a society. This year, and I'm curious if you feel this way too, everyone feels committed. Yeah. I feel like there's no question that Kubernetes is the tool that we are gonna be using. >>Yeah, I think so. And I think a lot of that is actually being unlocked by some of these vendors that are being partners and helping people get the most outta Kubernetes, you know, especially at the larger enterprise organizations. Like, they want to do it, but the skills gap is a very real problem, right? And so figuring out, like Yasmin talked about, figuring out how do we optimize or set up the correct settings without throwing thousands of humans at it.
Never mind the fact you'll never find a thousand people that wanna do that all day every day. >>I was gonna say, it's a futile endeavor for those >>People, right? Yeah. And being able to close some of those gaps, whether it's optimization, security, DevOps, CI/CD... as we get more of those partners, like I just talked about, on the floor, then you see more and more enterprises being more open to leaning into Kubernetes a little bit. >>Yeah. We've had some great conversations the last day and today as well with organizations that are historic companies, like Ford Motor Company, for >>Example. Yeah. Right. >>Just right behind us, one of their EVs. And they're becoming technology companies that happen to do cars or home >>Improvement. I had a nice chat with 'em this morning. Yes. With that storyline, honestly. >>Yes. We now have such a different lens into these organizations, how they're using technologies, advanced technologies, Kubernetes, et cetera, to really become data companies. Yeah. Because they have to be. The consumers on the other end expect a Home Depot or a Ford or whomever, or your bank, to know who you are. I want the information right here whenever I need it so I can do the transaction I need, and I want you to also deliver me information that is relevant to me. Because there's no patience anymore. Yeah. >>And we partner with a lot of big FinTech companies, and it's very much that. It's like, how do we continue to optimize? But then as they look at transitioning off of older organizations and capabilities, whether that's, they have a physical data center that's racked to the gills and they can't do anything about that, so they wanna move to cloud, or they're just dipping their toe into even private cloud with Kubernetes in their own instances. A lot of it is, how do we do this right? Like, how do we lean in? Yeah. >>Yeah. Well, I think you said it really well, that the debate seems to be over in terms of, do we go in on Kubernetes. That was a theme that I think we felt yesterday, even on day one of the keynotes. The community seems to be just craving more. I think that was another thing that we felt yesterday, all of the contributors and the collaborators. People want to be able to help drive this community forward, because it's a flywheel of symbiosis for all of the vendors here, the maintainers, and really businesses in any industry can benefit. >>Yeah. It's super validating. I mean, if you just look at the floor, there's like 20 different booths that talk about cost reporting for Kubernetes. So not only have people moved, but now they're dealing with those challenges at scale. And I think for us it's very validating, because there are so many vendors that are looking into the reporting of this and showing you the problem that you have. And then where we can help is, okay, now you know you have a problem, here's how we can fix it for you. >>Yeah. That sort of dealing with challenges at scale that you said, I think that's also what we're hearing and seeing and feeling on the show floor. >>Yeah, absolutely. >>What can folks see and touch and feel in your booth? >>We have some demos there, you can play around with the product. We're giving away a Lego set as well, so we've... >>Gotta get >>Some. Right now we're gonna have to get some Lego. We do a swag segment at the end of the day every day. Now we've >>Got some cool socks. >>Yep. Socks are hot.
Let's actually talk about scale internally as our closing question. What's going on at StormForge? If someone's watching right now and they're excited, are you hiring? >>We are hiring. Yeah. >>How can they stalk you? What's the >>Scoop? Absolutely. So you can check us out at stormforge.io. We're certainly hiring across the engineering organization. We're hiring across the UX and product organization. Like I said, we've got some really big customers that we're working through some really fun challenges with, and we're looking to continue to build on what we do and do new, innovative things, especially 'cause, like I said, we are a machine learning organization first. And so for me it's, how do I collect all the data that I can, and then let's find out what's interesting in there that we can help people with. Whether that's CPU, memory, custom metrics, like I said, preventing OOM kills, driving availability, reliability. What can we do to kind of make a little bit more transparent the stuff that's going on underneath the covers in Kubernetes for the decision makers in these organizations? >>Yes. Transparency is a goal of >>Many. >>Yeah, absolutely. Well, and you mentioned fun. If this conversation is any representation, it would be very fun to be working on both of your teams. We have a lot of fun. Yasmin, Patrick, thank you so much for joining. Thanks for having us. Lisa, as usual, thanks for being here with me. My pleasure. And thank you to all of you for tuning into theCUBE's live show from Detroit. My name's Savannah Peterson and we'll be back in a few.
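Patrick and Yasmin also mention HPAs that scale on custom metrics rather than CPU alone. As a hedged illustration of what that looks like, the sketch below builds an autoscaling/v2 HorizontalPodAutoscaler that combines a CPU target with a per-pod custom metric. The workload name, metric name, and thresholds are made up, and it assumes a metrics adapter (for example, a Prometheus adapter) is already serving the custom metric; the script just emits YAML that could be piped to kubectl apply.

```python
# Illustrative only: an autoscaling/v2 HPA that scales on CPU plus a custom
# per-pod metric. Assumes a metrics adapter already serves "requests_in_flight"
# and that PyYAML is installed.
import yaml

hpa_manifest = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "checkout-api", "namespace": "payments"},  # hypothetical
    "spec": {
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "checkout-api",
        },
        "minReplicas": 3,
        "maxReplicas": 30,
        "metrics": [
            {   # classic CPU-utilization scaling
                "type": "Resource",
                "resource": {
                    "name": "cpu",
                    "target": {"type": "Utilization", "averageUtilization": 70},
                },
            },
            {   # custom per-pod metric exposed by the metrics adapter
                "type": "Pods",
                "pods": {
                    "metric": {"name": "requests_in_flight"},
                    "target": {"type": "AverageValue", "averageValue": "100"},
                },
            },
        ],
    },
}

# Emit YAML that could be applied with: kubectl apply -f -
print(yaml.safe_dump(hpa_manifest, sort_keys=False))
```

The target numbers (70% CPU, 100 in-flight requests per pod) are exactly the kind of scaling points and thresholds the recommendations discussed in the interview would tune; the values here are placeholders.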

Published Date : Oct 27 2022


Ricky Cooper & Joseph George | VMware Explore 2022


 

(light corporate music) >> Welcome back, everyone, to VMware Explore 22. I'm John Furrier, host of theCUBE with Dave Vellante. Our 12th year covering VMware's User Conference, formerly known as VMworld, now rebranded as VMware Explore. Two great cube alumnus coming down the cube. Ricky Cooper, SVP, Worldwide Partner Commercials VMware, great to see you. Thanks for coming on. >> Thank you. >> We just had a great chat- >> Good to see you again. >> With the Discovery and, of course, Joseph George, vice president of Compute Industry Alliances. Great to have you on. Great to see you. >> Great to see you, John. >> So guys this year is very curious in VMware. A lot goin' on, the name change, the event. Big, big move. Bold move. And then they changed the name of the event. Then Broadcom buys them. A lot of speculation, but at the end of the day, this conference kind of, people were wondering what would be the barometer of the event. We're reporting this morning on the keynote analysis. Very good mojo in the keynote. Very transparent about the Broadcom relationship. The expo floor last night was buzzing. >> Mhm. >> I mean, this is not a show that's lookin' like it's going to be, ya' know, going down. >> Yeah. >> This is clearly a wave. We're calling it Super Cloud. Multi-Cloud's their theme. Clearly the cloud's happenin'. We not to date ourselves, but 2013 we were discussing on theCUBE- >> We talked about that. Yeah. Yeah. >> Discover about DevOps infrastructure as code- >> Mhm. >> We're full realization now of that. >> Yep. >> This is where we're at. You guys had a great partnership with VMware and HPE. Talk about where you guys see this coming together because customers are refactoring. They are lookin' at Cloud Native. The whole Broadcom visibility to the VMware customer bases activated them. They're here and they're leaning in. >> Yeah. >> What's going on? >> Yeah. Absolutely. We're seeing a renewed interest now as customers are looking at their entire infrastructure, bottoms up, all the way up the stack, and the notion of a hybrid cloud, where you've got some visibility and control of your data and your infrastructure and your applications, customers want to live in that sort of a cloud environment and so we're seeing a renewed interest. A lot of conversations we're having with customers now, a lot of customers committing to that model where they have applications and workloads running at the Edge, in their data center, and in the public cloud in a lot of cases, but having that mobility, having that control, being able to have security in their own, you know, in their control. There's a lot that you can do there and, obviously, partnering with VMware. We've been partners for so long. >> 20 years about. Yeah. Yeah. >> Yeah. At least 20 years, back when they invented stuff, they were inventing way- >> Yeah. Yeah. Yeah. >> VMware's got a very technical culture, but Ricky, I got to say that, you know, we commented earlier when Raghu was on, the CEO, now CEO, I mean, legendary product. I sent the trajectory to VMware. Everyone knows that. VMware, I can't know whether to tell it was VMware or HP, HP before HPE, coined hybrid- >> Yeah. >> 'Cause you guys were both on. I can't recall, Dave, which company coined it first, but it was either one of you guys. Nobody else was there. >> It was the partnership. >> Yes. I- (cross talking) >> They had a big thing with Pat Gelsinger. Dave, remember when he said, you know, he got in my grill on theCUBE live? 
But now you see- >> But if you focus on that Multi-Cloud aspect, right? So you've got a situation where our customers are looking at Multi-Cloud and they're looking at it not just as a flash in the pan. This is here for five years, 10 years, 20 years. Okay. So what does that mean then to our partners and to our distributors? You're seeing a whole seed change. You're seeing partners now looking at this. So, look at the OEMs, you know, the ones that have historically been vSphere customers are now saying, they're coming in droves saying, okay, what is the next step? Well, how can I be a Multi-Cloud partner with you? >> Yep. Right. >> How can I look at other aspects that we're driving here together? So, you know, GreenLake is a great example. We keep going back to GreenLake and we are partaking in GreenLake at the moment. The real big thing for us is going to be, right, let's make sure that we've got the agreements in place that support this SaaS and subscription motion going forward and then the sky's the limit for us. >> You're pluggin' that right into GreenLake, right? >> Well, here's why. Here's why. So customers are loving the fact that they can go to a public cloud and they can get an SLA. They come to a, you know, an On-Premise. You've got the hardware, you've got the software, you've got the, you know, the guys on board to maintain this through its life cycle. >> Right. I mean, this is complicated stuff. >> Yeah. >> Now we've got a situation where you can say, hey, we can get an SLA On-Premise. >> Yeah. And I think what you're seeing is it's very analogous to having a financial advisor just manage your portfolio. You're taking care of just submitting money. That's really a lot of what the customers have done with the public cloud, but now, a lot of these customers are getting savvy and they have been working with VMware Technologies and HPE for so long. They've got expertise. They know how they want their workloads architected. Now, we've given them a model where they can leverage the Cloud platform to be able to do this, whether it's On-Premise, The Edge, or in the public cloud, leveraging HPE GreenLake and VMware. >> Is it predominantly or exclusively a managed service or do you find some customers saying, hey, we want to manage ourself? How, what are you seeing is the mix there? >> It is not predominantly managed services right now. We're actually, as we are growing, last time we talked to HPE Discover we talked about a whole bunch of new services that we've added to our catalog. It's growing by leaps and bounds. A lot of folks are definitely interested in the pay as you go, obviously, the financial model, but are now getting exposed to all the other management that can happen. There are managed services capabilities, but actually running it as a service with your systems On-Prem is a phenomenal idea for all these customers and they're opening their eyes to some new ways to service their customers better. >> And another phenomenon we're seeing there is where partners, such as HPA, using other partners for various areas of their services implementation as well. So that's another phenomenon, you know? You're seeing the resale motion now going into a lot more of the services motion. >> It's interesting too, you know, I mean, the digital modernization that's goin' on. The transformation, whatever you want to call it, is complicated. >> Yeah. >> That's clear. One of the things I liked about the keynote today was the concept of cloud chaos. >> Yeah. 
>> Because we've been saying, you know, quoting Andy Grove at Intel, "Let chaos rain and rain in the chaos." >> Mhm. >> And when you have inflection points, complexity, which is the chaos, needs to be solved and whoever solves it kicks the inflection point, that's up into the right. So- >> Prime idea right here. Yeah. >> So GreenLake is- >> Well, also look at the distribution model and how that's changed. A couple of points on a deal. Now they're saying, "I'll be your aggregator. I'll take the strain and I'll give you scale." You know? "I'll give you VMware Scale for all, you know, for all of the various different partners, et cetera." >> Yeah. So let's break this down because this is, I think, a key point. So complexity is good, but the old model in the Enterprise market was- >> Sure. >> You solve complexity with more complexity. >> Yeah. >> And everybody wins. Oh, yeah! We're locked in! That's not what the market wants. They want some self-service. They want, as a service, they want easy. Developer first security data ops, DevOps, is already in the cycle, so they're going to want simpler. >> Yeah. >> Easier. Faster. >> And this is kind of why I'll say, for the big announcement today here at VMware Explore, around the VMware vSphere Distributed Services Engine, Project Monterey- >> Yeah. >> That we've talked about for so long, HPE and VMware and AMD, with the Pensando DPU, actually work together to engineer a solution for exactly that. The capabilities are fairly straightforward in terms of the technologies, but actually doing the work to do integration, joint engineering, make sure that this is simple and easy and able to be running HPE GreenLake, that's- >> That's invested in Pensando, right? >> We are. >> We're all investors. Yeah. >> What's the benefit of that? What's, that's a great point you made. What's the value to the customer, bottom line? That deep co-engineering, co-partnering, what does it deliver that others don't do? >> Yeah. Well, I think one example would be, you know, a lot of vendors can say we support it. >> Yep. >> That's great. That's actually a really good move, supporting it. It can be resold. That's another great move. I'm not mechanically inclined to where I would go build my own car. I'll go to a dealership and actually buy one that I can press the button and I can start it and I can do what I need to do with my car and that's really what this does is the engineering work that's gone on between our two companies and AMD Pensando, as well as the business work to make that simple and easy, that transaction to work, and then to be able to make it available as a service, is really what made, it's, that's why it's such a winner winner with our- >> But it's also a lower cost out of the box. >> Yep. >> Right. >> So you get in whatever. Let's call it 20%. Okay? But there's, it's nuanced because you're also on a new technology curve- >> Right. >> And you're able to absorb modern apps, like, you know, we use that term as a bromide, but when I say modern apps, I mean data-rich apps, you know, things that are more AI-driven not the conventional, not that people aren't doing, you know, SAP and CRM, they are, but there's a whole slew of new apps that are coming in that, you know, traditional architectures aren't well-suited to handle from a price performance standpoint. This changes that doesn't it? >> Well, you think also of, you know, going to the next stage, which is to go to market between the two organizations that before. 
At the moment, you know, HPE's running off doing various different things. We were running off to it again, it's that chaos that you're talking about. In cloud chaos, you got to go to market chaos. >> Yeah. >> But by simplifying four or five things, what are we going to do really well together? How do we embed those in GreenLake- >> Mhm. >> And be known in the marketplace for these solutions? Then you get a, you know, an organization that's really behind the go to market. You can help with sales activation the enablement, you know, and then we benefit from the scale of HPE. >> Yeah. >> What are those solutions I mean? Is it just, is it I.S.? Is it, you know, compute storage? >> Yeah. >> Is it, you know, specific, you know, SAP? Is it VDI? What are you seeing out there? >> So right now, for this specific technology, we're educating our customers on what that could be and, at its core, this solution allows customers to take services that normally and traditionally run on the compute system and run on a DPU now with Project Monterey, and this is now allowing customers to think about, okay, where are their use cases. So I'm, rather than going and, say, use it for this, we're allowing our customers to explore and say, okay, here's where it makes sense. Where do I have workloads that are using a lot of compute cycles on services at the compute level that could be somewhere else like networking as a great example, right? And allowing more of those compute cycles to be available. So where there are performance requirements for an application, where there is timely response that's needed for, you know, for results to be able to take action on, to be able to get insight from data really quick, those are places where we're starting to see those services moving onto something like a DPU and that's where this makes a whole lot more sense. >> Okay. So, to get this right, you got the hybrid cloud, right? >> [Ricky And Joseph] Yes. >> You got GreenLake and you got the distributed engine. What's that called the- >> For, it's HPE ProLiant- >> ProLiant with- >> The VMware- >> With vSphere. >> That's the compute- >> Distributed. >> Okay. So does the customer, how do you guys implement that with the customer? All three at the same time or they mix and match? What's that? How does that work? >> All three of those components. Yeah. So the beauty of the HP ProLiant with VMware vSphere-distributed services engine- >> Mhm. >> Also known as Project Monterey for those that are keeping notes at home- >> Mhm. >> It's, again, already pre-engineered. So we've already worked through all the mechanics of how you would have to do this. So it's not something you have to go figure out how you build, get deployment, you know, work through those details. That's already done. It is available through HPE GreenLake. So you can go and actually get it as a service in partnership with our customer, our friends here at VMware, and because, if you're familiar and comfortable with all the things that HP ProLiant has done from a security perspective, from a reliability perspective, trusted supply chain, all those sorts of things, you're getting all of that with this particular (indistinct). >> Sumit Dhawan had a great quote on theCUBE just an hour or so ago. He said you have to be early to be first. >> Yeah. (laughing) >> I love that quote. Okay. So you were- >> I fought the urge. >> You were first. You were probably a little early, but do you have a lead? I know you're going to say yes, okay. Let's just- >> Okay. 
>> Let's just assume that. >> Okay. Yeah. >> Relative to the competition, how do you know? How do you determine that? >> If we have a lead or not? >> Yeah. If you lead. If you're the best. >> We go to the source of the truth which is our customers. >> And what do they tell you? What do you look at and say, okay, now, I mean, when you have that honest conversation and say, okay, we are, we're first, we're early. We're keeping our lead. What are the things that you- >> I'll say it this way. I'll say it this way. We've been in a lot of businesses where there, where we do compete head-to-head in a lot of places. >> Mhm. >> And we know how that sales process normally works. We're seeing a different motion from our customers. When we talk about HPE GreenLake, there's not a lot of back and forth on, okay, well, let me go shop around. It is HP Green. Let's talk about how we actually build this solution. >> And I can tell you, from a VMware perspective, our customers are asking us for this the other way around. So that's a great sign is that, hey, we need to see this partnership come together in GreenLake. >> Yeah. >> It's the old adage that Amazon used to coin and Andy Jassy, you know, they do the undifferentiated heavy lifting. >> [Ricky And Joseph] Yeah. >> A lot of that's now Cloud operations. >> Mhm. >> Underneath it is infrastructure's code to the developer. >> That's right. >> That's at scale. >> That's right. >> And so you got a lot of heavy lifting being done with GreenLake- >> Right. >> Which is why there's no objections probably. >> Right. >> What's the choice? What are you going to shop? >> Yeah. >> There's nothing to shop around. >> Yeah, exactly. And then we've got, you know, that is really icing on the cake that we've, you know, that we've been building for quite some time and there is an understanding in the market that what we do with our infrastructure is hardened from a reliability and quality perspective. Like, times are tough right now. Supply chain issues, all that stuff. We've talked, all talked about it, but at HPE, we don't skimp on quality. We're going to spend the dollars and time on making sure we got reliability and security built in. It's really important to us. >> We had a great use case. The storage team, they were provisioning with containers. >> Yes. >> Storage is a service instantly we're seeing with you guys with VMware. Your customers' bringing in a lot of that into the mix as well. I got to ask 'cause every event we talk about AI and machine learning- >> Mhm. >> Automation and DevOps are now infiltrating in with the CICD pipeline. Security and data become a big conversation. >> [Ricky And Joseph] Agreed. >> Okay. So how do you guys look at that? Okay. You sold me on Green. Like, I've been a big fan from day one. Now, it's got maturity on it. I know it's going to get a lot more headroom to do. There's still a lot of work to do, but directionally it's pretty accurate, you know? It's going to be a success. There's still concern about security, the data layer. That's agnostic of environment, private cloud, hybrid, public, and Edge. So that's important and security- >> Great. >> Has got a huge service area. >> Yeah. >> These are on working progress. >> Yeah. Yeah. >> How do you guys view those? >> I think you've just hit the net on the head. I mean, I was in the press and journalist meetings yesterday and our answer was exactly the same. There is still so much work that can be done here and, you know, I don't think anybody is really emerging as a true leader. 
It's just a continuation of, you know, tryin' to get that right because it is what is the most important thing to our customers. >> Right. >> And the industry is really sort of catching up to that. >> And, you know, when you start talking about privacy and when you, it's not just about company information. It's about individuals' information. It's about, you know, information that, if exposed, actually could have real impact on people. >> Mhm. >> So it's more than just an I.T. problem. It is actually, and from HPE's perspective, security starts from when we're picking our suppliers for our components. Like, there are processes that we put into our entire trusted supply chain from the factory on the way up. I liken it to my golf swing. My golf swing. I slice right like you wouldn't believe. (John laughing) But when I go to the golf pros, they start me back at the mechanics, the foundational pieces. Here's where the problems are and start workin' on that. So my view is, our view is, if your infrastructure is not secure, you're goin' to have troubles with security as you go further up. >> Stay in the sandbox. >> Yeah. >> Yeah. So to speak, you know, they're driving range on the golf analogy there. I love that. Talk about supply chain security real quick because you mentioned supply chain on the hardware side. You're seeing a lot of open source and supply chain in software, trusted software. >> Yep. >> How does GreenLake look at that? How do you guys view that piece of it? That's an important part. >> Yeah. Security is one of the key pillars that we're actually driving as a company right now. As I said, it's important to our customers as they're making purchasing decisions and we're looking at it from the infrastructure all the way up to the actual service itself and that's the beauty of having something like HPE GreenLake. We don't have to pick, is the infrastructure or the middle where, or the top of stack application- >> It's (indistinct), right? >> It's all of it. >> Yeah. >> It's all of it. That matters. >> Quick question on the ecosystem posture. So- >> Sure. >> I remember when HP was, you know, one company and then the GSIs were a little weird with HP because of EDS, you know? You had data protector so we weren't really chatting up Veeam at the time, right? And as soon as the split happened, ecosystem exploded. Now you have a situation where you, Broadcom, is acquiring VMware. You guys, big Broadcom customer. Has your attitude changed or has it not because, oh, we meet with the customers already. Well, you've always said that, but have you have leaned in more? I mean, culturally, is HPE now saying, hmm, now we have some real opportunities to partner in new ways that we don't have to sleep with one eye open, maybe. (John laughing) >> So first of all, VMware and HPE, we've got a variety of different partners. We always have. >> Mhm. >> Well before any Broadcom announcement came along. >> Yeah, sure. >> We've been working with a variety of partners. >> And that hasn't changed. >> And that hasn't changed. And, if your question is, has our posture toward VMware changed at all, the answer's absolutely not. We believe in what VMware is doing. We believe in what our customers are doing with VMware and we're going to continue to work with VMware and partner with the (indistinct). >> And of course, you know, we had to spin out ourselves in November of last year, which I worked on, you know, the whole Dell thing. >> Yeah. We still had the same chairman. >> Yeah. 
There- (Dave chuckling) >> Yeah, but since then, I think what's really become very apparent and not, it's not just with HPE, but with many of our partners, many of the OEM partners, the opportunity in front of us is vast and we need to rely on each other to help us as, you know, solve the customer problems that are out there. So there's a willingness to overlook some things that, in the past, may have been, you know, barriers. >> But it's important to note also that it's not that we have not had history- >> Yeah. >> Right? Over, we've got over 200,000 customers join- >> Hundreds of millions of dollars of business- >> 100,000, over 10,000, or 100,000 channel partners that we all have in common. >> Yeah. Yeah. >> Yep. >> There's numerous- >> And independent of the whole Broadcom overhang there. >> Yeah. >> There's the ecosystem floor. >> Yeah. >> The expo floor. >> Right. >> I mean, it's vibrant. I mean, there's clearly a wave coming, Ricky. We talked about this briefly at HPE Discover. I want to get an update from your perspectives, both of you, if you don't mind weighing in on this. Clearly, the wave, we're calling it the Super Cloud, 'cause it's not just Multi-Cloud. It's completely different looking successes- >> Smart Cloud. >> It's not just vendors. It's also the customers turning into clouds themselves. You look at Goldman Sachs and- >> Yep. >> You know, I think every vertical will have its own power law of Cloud players in the future. We believe that to be true. We're still testing that assumption, but it's trending in when you got OPEX- >> [Ricky And Joseph] Right. >> Has to go to in-fund statement- >> Yeah. >> CapEx goes too. Thanks for the Cloud. All that's good, but there's a wave coming- >> Yeah. >> And we're trying to identify it. What do you guys see as this wave 'cause beyond Multi-Cloud and the obvious nature of that will end up happening as a state and what happens beyond that interoperability piece, that's a whole other story, and that's what everyone's fighting for, but everyone out in that ecosystem, it's a big wave coming. They've got their surfboards. They're ready to go. So what do you guys see? What is the next wave that everyone's jacked up about here? >> Well, I think that the Multi-Cloud is obviously at the epicenter. You know, if you look at the results that are coming in, a lot of our customers, this is what's leading the discussion and now we're in a position where, you know, we've brought many companies over the last few years. They're starting to come to fruition. They're starting to play a role in, you know, how we're moving forward. >> Yeah. >> Some of those are a bit more applicable to the commercial space. We're finding commercial customers that never bought from us before. Never. Hundreds and hundreds are coming through our partner networks every single quarter, you know? So brand new to VMware. The trick then is how do you nurture them? How do you encourage them? >> So new logos are comin' in. >> New logos are coming in all the time, all the time, from, you know, from across the ecosystem. It's not just the OEMs. It's all the way back- >> So the ecosystem's back of VMware. >> Unbelievably. So what are we doing to help that? There's two big things that we've announced in the recent weeks is that Partner Connect 2.0. When I talked to you about Multi-Cloud and what the (indistinct), you know, the customers are doing, you see that trend. Four, five different separate clouds that we've got here. 
The next piece is that they're changing their business models with the partners. Their services is becoming more and more apparent, et cetera, you know? And the use of other partners to do other services, deployment, or this stuff is becoming prevalent. Then you've got the distributors that I talked about with their, you know, their, then you route to market, then you route to business. So how do you encapsulate all of that and ensure your rewarding partners on all aspects of that? Whether it's deployment, whether it's test and depth, it's a points-based system we've put in place now- >> It's a big pie that's developing. The market's getting bigger. >> It's getting so much bigger. And then you help- >> I know you agree, obviously, with that. >> Yeah. Absolutely. In fact, I think for a long time we were asking the question of, is it going to be there or is it going to be here? Which was the wrong question. (indistinct cross talking) Now it's everything. >> Yeah. >> And what I think that, what we're seeing in the ecosystem, is that people are finding the spots that, where they're going to play. Am I going to be on the Edge? >> Yeah. >> Am I going to be on Analytics Play? Am I going to be, you know, Cloud Transition Play? There's a lot of players are now emerging and saying, we're- >> Yeah. >> We're, we now have a place, a part to play. And having that industry view not just of, you know, a commercial customer at that level, but the two of us are lookin' at Teleco, are looking at financial services, at healthcare, at manufacturing. How do these new ecosystem players fit into the- >> (indistinct) lifting. Everyone can see their position there. >> Right. >> We're now being asked for simplicity and talk to me about partner profitability. >> Yes. >> How do I know where to focus my efforts? Am I spread too thin? And, you know, that's, and my advice that the partner ecosystem out there is, hey, let's pick out spots together. Let's really go to, and then strategic solutions that we were talking about is a good example of that. >> Yeah. >> Sounds like composability to me, but not to go back- (laughing) Guys, thanks for comin' on. I think there's a big market there. I think the fog is lifted. People seeing their spot. There's value there. Value creation equals reward. >> Yeah. >> Simplicity. Ease of use. This is the new normal. Great job. Thanks for coming on and sharing. (cross talking) Okay. Back to live coverage after this short break with more day one coverage here from the blue set here in Moscone. (light corporate music)

Published Date : Sep 6 2022


Kumaran Siva, AMD | VMware Explore 2022


 

>>Good morning, everyone. Welcome to theCUBE's day two coverage of VMware Explore 2022, live from San Francisco. Lisa Martin here with Dave Nicholson. We're excited to kick off day two of great conversations with VMware partners, customers, its ecosystem. We've got an alumnus back with us: Kumaran Siva, corporate VP of business development from AMD, joins us. Great to have you on the program in person. Great >>To be here. Yes. In person. Indeed. Welcome. >>So the great thing, yesterday, a lot of announcements, and AMD had an announcement with VMware, which we will unpack, but there's about 7,000 to 10,000 people here. People are excited, ready to be back, ready to be hearing from this community, which is so nice. Yesterday AMD announced it is optimizing the AMD Pensando distributed services card to run on VMware vSphere 8. vSphere 8 was announced yesterday. Tell us a little bit about that. Yeah, >>No, absolutely. The Pensando SmartNIC DPU, what it allows you to do is, it provides a whole bunch of capabilities, including offloads, including encryption, decryption. We can even do functions like compression. But with the combination of VMware Project Monterey and Pensando, what we're able to do is even do some of the vSphere actual offloads, integration of the hypervisor into the DPU card. It's pretty interesting and pretty powerful technology. We're pretty excited about it. I think this could potentially, you know, bring some of the cloud value, in terms of manageability, in terms of being able to take care of bare metal servers, and also, you know, better secured infrastructure and cloud-like techniques, into the mainstream on-premises enterprise. >>Okay. Talk a little bit about the DPU, the data processing unit. They talked about it on stage yesterday, but help me understand that versus the CPU and GPU.
So the network has intelligence that is separate from the server. Is that absolutely key? >> Yeah, that's a good way of looking at it. If you look at some of the techniques used in the cloud, this in fact brings some of those technologies into the enterprise. Where you want that level of separation and management, you're able to now utilize the DPU card. So that's a really big part of the value proposition: the manageability, not just offload, but a better network for the enterprise. >> Right. >> Can you expand on that value proposition? If I'm a customer, what's in this for me? How does this help power my multi-cloud organization? >> Yeah. So we actually have a number of these in real customer use cases today. Folks will use, for example, the compression and decompression; that's definitely an application on the storage side. But also, just as a DPU card in the mainstream, general-purpose server fleet, they're able to use the encryption and decryption to make sure their infrastructure is safe from point to point within the network. So every connection is actually encrypted, and managing those policies and orchestrating all of that is done through the DPU card. >> So what you're saying is, if you have a DPU involved, then the server itself and the CPUs become completely irrelevant, and basically it's just a box of sheet metal at that point. >> That's a good way of looking at that. >> That's my segue to talking about the value proposition of the actual AMD CPU. >> No, absolutely. I think the CPUs are always going to be central in this. Having the DPU is extremely powerful, and it does allow you to have better infrastructure, but the key to having better infrastructure is to have the best CPU. >> Well, tell us about that. >> So this is where a lot of the great value proposition between VMware and AMD comes together. VMware really allows enterprises to take advantage of these high-core-count, really modern CPUs, our EPYC line, especially our Milan 7003 series. To be able to take advantage of 64 cores, VMware is critical. What they've been able to do is, for example, if you have workloads running on legacy five-year-old servers, you're able to take a whole bunch of those servers and consolidate down into a single node. And the power that VMware gives you is the manageability and the reliability; it brings all of those factors and allows you to take advantage of the latest-generation CPUs. We've actually done some TCO modeling where we can show, even if you have fully depreciated hardware, say five years old or more,
that the actual cost still works out. The hardware has already been written off, but just the cost of running it, in terms of the power and the administration, the OPEX costs associated with it, is greater than the cost of acquiring a smaller set of new AMD servers. Being able to consolidate those workloads and run VMware to provide that great user experience, especially with vSphere 8.0 and the hooks VMware has built in for AMD processors, you actually see a really good user experience. It's also more efficient; it's just better for the planet, and it's also better on the pocketbook, which is a really cool thing these days, because our value in TCO translates directly into a value in terms of sustainability. From energy consumption to just the cost of having that infrastructure there, it's a whole lot better. >> Talk about the sustainability front, how AMD is helping its customers achieve their sustainability goals. And are you seeing more and more customers coming to you saying, we want to understand what AMD is doing for sustainability, because it's important for us to work with vendors who have a core focus on it? >> Yeah, absolutely. Look, I'll be perfectly honest: when we first designed our CPU, we were just trying to build the biggest, baddest thing out there, in terms of having the largest number of cores and the best TCO for our customers. But it's turned out that TCO involves energy consumption, and it involves the whole process of bringing down a whole bunch of nodes, a whole bunch of servers. For example, we have one calculation where we showed that something like 27 five-year-old servers can be consolidated down into five AMD servers. From that ratio you can already see huge gains in terms of sustainability. Now, you asked about the sustainability conversation. I'd say not a week goes by where I'm not having a conversation with a CTO or CIO who has that as part of their corporate brand, and they want to find out how to make their infrastructure, their data center, more green. And that's where we come in. >> Yeah. And it's interesting because, at least in the US, money is also green. So when you talk about the cost of power, especially in places like California, there's a natural incentive to drive in that direction. >> Let's talk about security. The threat landscape has changed so dramatically in the last couple of years. Ransomware is a household word; ransomware attacks happen about every 11 seconds, and older technology is a little more vulnerable to internal and external threats. How is AMD helping customers address the security front, which is the board-level conversation? >> That's a great question. Look, I look at security as being a layered thing. If you talk to any security expert, security doesn't come down to one component, and we are an ingredient within the greater scheme of things. A few things. One is we have partnered very closely with VMware.
They have enabled our SEV technology, secure encrypted virtualization, in vSphere, such that all of the memory transactions are protected. So you have security when you store to disk, you have security over the network, and you also have security in the compute: when you go out to memory, that's what this SEV technology gives you. It gives you that security in your actual virtual machine as it's running. We take security extremely seriously. Every generation that you see from AMD, and you have seen us hit our cadence, we upgrade all of the security features and we address all of the known threats that are out there. Obviously new threats keep coming at us all the time, but our CPUs just get better and better from a security stance. >> So, shifting gears for a minute, obviously we know the pending acquisition, the announced acquisition of VMware by Broadcom. AMD has got a relationship with Broadcom independently, right? >> No, of course. >> How's that relationship? >> Oh, it's a great relationship. They have certified their NIC products and their HBA products, which are utilized for storage systems, SAN systems, those types of architectures, the hardcore storage architectures. We work with them very closely, so they've been a great partner with us for years. >> And I know we're talking about the current generation, available on the shelf, Milan-based architecture, is that right? >> That's right. Yeah. >> But if I understand correctly, maybe sometime this year you're going to be rolling out the new stuff. >> Yeah, absolutely. So later this year, and we've already talked about this publicly, we have a 96-core next-gen platform, up to 96 cores per socket. So we're taking that TCO value to the next level, increasing performance, with DDR5 and CXL with memory expansion capability. Very neat, leading-edge technology. So that's going to be available. >> Is that next-gen PCIe, or has that shift already been made? >> It's next-gen, PCIe Gen 5. So we'll have that capability, and that'll be out by the end of this year. >> Okay. So those components you talk about in the Broadcom and VMware universe, the ones going into those new slots, are also factors in performance. >> Yeah, absolutely. You need the balance, right? You need to have networking, storage and the CPU. We're very cognizant of how to make sure that these cores are fed appropriately, because if you just put out a lot of cores and you don't have enough memory or enough I/O, that's a problem. That's the key to our approach to enabling performance in the enterprise: make sure the systems are balanced. So you get the experience that you've had with, let's say, your 12-core or your 16-core system, and you can have that same experience with 96 cores in a node, or a 96-core socket, so maybe 192 cores total in a two-socket node, in a much denser package server today. Or, using Milan technology, 128 cores, with super good performance.
It's a super good experience, and it's designed to scale, especially with VMware as the infrastructure. It works great. >> Lisa's got a question to ask, I know, but bear with me. >> Yes, sir. >> We've actually initiated coverage of this question of whether hardware even matters anymore. So I put the question to you: do you think hardware still matters? >> Oh, I think it's going to matter even more and more going forward. >> But it's all cloud, who cares? Just in this conversation today, right? >> Who cares, it's all cloud. Yeah. >> So there are definitely workloads moving to the cloud, and we love our cloud partners, don't get me wrong. But I've had so many conversations at this show this week about customers who cannot move to the cloud because of regulatory reasons. The other thing that you don't realize, which was new to me, is that people have depreciated their data centers, so the cost for them to just go put in new AMD servers is actually very low compared to the cost of having to go buy public cloud services. They still want to buy public cloud services, and by the way, we have great AMD instances on AWS, on Google, on Azure, on Oracle; all of the major cloud providers support AMD and have good TCO instances that they've put out there with good performance. >> What are some of the key use cases that customers are coming to AMD for? And what have you seen change in the last couple of years, with every customer needing to become a data company, needing to really be data driven? >> That's also a great question. So I used to get this question a lot. >> She only asks great questions. I go all around in the weeds and get excited about the bits and the bytes; she asks the big ones. >> A few years ago I used to get this question all the time: what workloads run best on AMD? My answer today is, unequivocally, all the workloads. We have processors that run at the highest performance per thread, per core, that you can get, and then we have processors that have the highest throughput, and sometimes they're one and the same. Milan 64, configured the right way using VMware vSphere, can give you extremely good per-core performance and extremely good throughput performance. It works well across, just as you said, databases, data management, all of those kinds of capabilities, DevOps, ERP; there's just been a whole slew of applications and use cases. We have design wins with major customers in every single industry, and these are the big guys, right? They're using AMD and successfully moving over their workloads without issue, for the most part. In some cases, customers tell us they just move the workload over, turn it on, it runs great, and they're fully happy with it. There are other cases where we've actually gotten involved and figured out this configuration or that configuration, but it's typically not a huge lift to move to AMD. And that, I think, is a key point.
And we're working together with almost all of the major ISV partners, just to make sure that they have run, tested and certified on AMD. I think we have over 250 world-record benchmarks, running all sorts of things like Oracle Database and SAP business suite; all of those types of applications run extremely well on AMD. >> Is there a particular customer story that you think really articulates the value of running on AMD in terms of enabling big business outcomes, say for a financial services organization or a healthcare organization? >> Yeah, there certainly have been, across the board. In healthcare, we've seen customers do the server consolidation very effectively and then take advantage of the lower cost of operation, because in some cases they're trying to run servers on each floor of a hospital. We've had use cases where customers have been able to do that because of the density that we provide, and to take their compute more to the edge rather than have it centralized. Another interesting case is FSI, financial services. We have customers that use us for general-purpose IT, and we have customers that use us for the high-performance side, what we call grid computing. So you have traders who do all this trading during the day, they collect tons and tons of data, and then they use our CPUs to crunch that data overnight. It's like this big supercomputer that just crunches; it's pretty incredible. There the density of the CPUs and the value that we bring really shine, but it shows up in their general-purpose fleet as well. There are a lot of VMware customers in that space; we love our VMware customers, and they're able to utilize this with HCI, hyperconverged infrastructure with vSAN, and that works extremely well. Our enterprise customers are extremely happy with that. >> Talk about, as we wrap things up here, what's next for AMD, especially as VMware undergoes its potential change. >> Yeah, so there's a lot that we have going on. I've got to say, VMware is one of the premier companies in terms of being innovative and being able to drive new, interesting pieces of technology, and they're very experimental. So we have a ton of things going with them, but certainly driving Pensando is very, very important to us. I also think we're just on the cusp of server consolidation becoming a big thing for us. So driving that together with VMware into some of these enterprises, where we can help save the earth by reducing power, and save money in terms of TCO, but also enable new capabilities. The other part of it is that this new infrastructure enables new workloads: things like machine learning, more data analytics, more sophisticated processing. That is all enabled by this new infrastructure, so we're excited.
We think that we're on the precipice of a lot of industries moving forward to the next level of IT. It's no longer about just payroll or enterprise business management. It's about how you make your knowledge workers more productive and how you give them more capabilities, and that is really what's exciting for us. >> Awesome. Kumaran, thank you so much for joining Dave and me on the program today, talking about what AMD is doing to supercharge customers, your partnership with VMware, and what's exciting on the forefront, the frontier. We appreciate your time and your insights. >> Great. Thank you very much for having me. >> Thank you to our guest and to Dave Nicholson. I'm Lisa Martin. You're watching theCUBE live from VMware Explore '22 from San Francisco. Don't go anywhere; Dave and I will be right back with our next guest.
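The server-consolidation math Kumaran refers to in this interview (27 five-year-old servers collapsing into five current AMD servers, with power and administration OPEX outweighing the price of new hardware) can be roughed out in a few lines. The sketch below is illustrative only: the power draws, energy price, admin cost and server price are assumed figures for the sake of the example, not AMD's or VMware's published TCO model.

```python
# Rough server-consolidation TCO sketch. All figures are illustrative
# assumptions, not vendor-published numbers.

OLD_SERVERS = 27          # five-year-old boxes, per the ratio cited in the interview
NEW_SERVERS = 5           # modern high-core-count replacements
OLD_POWER_KW = 0.45       # assumed average draw per legacy server, kW
NEW_POWER_KW = 0.60       # assumed average draw per new server, kW
POWER_COST_KWH = 0.20     # assumed electricity price, $/kWh
ADMIN_COST_PER_SERVER = 1200.0   # assumed yearly admin/maintenance per box, $
NEW_SERVER_PRICE = 18000.0       # assumed acquisition cost per new server, $
HOURS_PER_YEAR = 24 * 365

def yearly_opex(count, power_kw):
    """Power plus administration cost for a fleet of `count` servers."""
    power = count * power_kw * HOURS_PER_YEAR * POWER_COST_KWH
    return power + count * ADMIN_COST_PER_SERVER

old_opex = yearly_opex(OLD_SERVERS, OLD_POWER_KW)
new_opex = yearly_opex(NEW_SERVERS, NEW_POWER_KW)
capex = NEW_SERVERS * NEW_SERVER_PRICE
savings_per_year = old_opex - new_opex
payback_years = capex / savings_per_year if savings_per_year > 0 else float("inf")

print(f"legacy fleet OPEX/yr : ${old_opex:,.0f}")
print(f"new fleet OPEX/yr    : ${new_opex:,.0f}")
print(f"consolidation CAPEX  : ${capex:,.0f}")
print(f"payback              : {payback_years:.1f} years")
```

With these assumed inputs the payback works out to roughly two years, which is the shape of the argument made above: the OPEX of keeping depreciated hardware running can exceed the cost of replacing it.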
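The SEV capability discussed in the security exchange is something an operator can check for before enabling encrypted virtual machines. The read-only sketch below targets a generic Linux/KVM host (vSphere surfaces the equivalent setting through its own UI and APIs, which are not shown here); the flag names and sysfs paths are the commonly used Linux interfaces and may vary by kernel version, so treat this as a sketch rather than a definitive probe.

```python
# Read-only check for AMD SEV / SEV-ES support on a Linux KVM host.
from pathlib import Path

def cpu_flags():
    cpuinfo = Path("/proc/cpuinfo")
    if not cpuinfo.exists():
        return set()
    for line in cpuinfo.read_text().splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def kvm_amd_param(name):
    # Exposed by the kvm_amd module on kernels where SEV is wired up.
    p = Path(f"/sys/module/kvm_amd/parameters/{name}")
    return p.read_text().strip() if p.exists() else "kvm_amd not loaded"

flags = cpu_flags()
print("CPU advertises SEV    :", "sev" in flags)
print("CPU advertises SEV-ES :", "sev_es" in flags)
print("kvm_amd 'sev' param   :", kvm_amd_param("sev"))
print("kvm_amd 'sev_es' param:", kvm_amd_param("sev_es"))
```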

Published Date : Aug 31 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity (Category, Confidence)
Lisa Martin (PERSON, 0.99+)
Dave Nicholson (PERSON, 0.99+)
Broadcom (ORGANIZATION, 0.99+)
AMD (ORGANIZATION, 0.99+)
Dave (PERSON, 0.99+)
San Francisco (LOCATION, 0.99+)
Kumaran Siva (PERSON, 0.99+)
five year (QUANTITY, 0.99+)
12 core (QUANTITY, 0.99+)
VMware (ORGANIZATION, 0.99+)
192 cores (QUANTITY, 0.99+)
16 core (QUANTITY, 0.99+)
96 core (QUANTITY, 0.99+)
California (LOCATION, 0.99+)
five years (QUANTITY, 0.99+)
Cooper (PERSON, 0.99+)
iOS (TITLE, 0.99+)
7,003 (QUANTITY, 0.99+)
Oracle (ORGANIZATION, 0.99+)
Lisa (PERSON, 0.99+)
128 cores (QUANTITY, 0.99+)
yesterday (DATE, 0.99+)
AWS (ORGANIZATION, 0.99+)
Milan (LOCATION, 0.99+)
today (DATE, 0.99+)
Google (ORGANIZATION, 0.99+)
this year (DATE, 0.98+)
Yesterday am (DATE, 0.98+)
five (QUANTITY, 0.98+)
one component (QUANTITY, 0.98+)
eight (QUANTITY, 0.98+)
HPA (ORGANIZATION, 0.98+)
each floor (QUANTITY, 0.98+)
one (QUANTITY, 0.97+)
this week (DATE, 0.97+)
vSphere 8.0 (TITLE, 0.97+)
later this year (DATE, 0.97+)
day two (QUANTITY, 0.97+)
10,000 people (QUANTITY, 0.96+)
96 core (QUANTITY, 0.95+)
TCO (ORGANIZATION, 0.95+)
2022 (DATE, 0.95+)
One (QUANTITY, 0.95+)
27 (QUANTITY, 0.94+)
64 course (QUANTITY, 0.94+)
Sando (ORGANIZATION, 0.94+)
one calculation (QUANTITY, 0.94+)
end of this year (DATE, 0.93+)
VMwares (ORGANIZATION, 0.93+)

Ricky Cooper & Joseph George | VMware Explore 2022


 

(bright intro music) >> Welcome back everyone to VMware Explore '22. I'm John Furrier, host of the key with David Lante, our 12th year covering VMware's user conference, formerly known as VM-World now rebranded as VMware Explore. You got two great Cube alumni coming on the Cube. Ricky Cooper, SVP worldwide partner commercial VMware. Great to see you, thanks for coming on. >> Thank you. >> We just had a great chat-- >> Good to see you again. >> At HPE discover. And of course, Joseph George, Vice President of Compute Industry Alliances. Great to have you on. Great to see you. >> Great to see you, John. >> So guys, this year is very curious, VMware, a lot going on. The name change of the event. Big move, Bold move. And then they changed the name of the event. Then Broadcom buys them. A lot of speculation, but at the end of the day, this conference... Kind of people were wondering what would be the barometer of the event. We were reporting this morning on the keynote analysis. Very good mojo in the keynote. Very transparent about the Broadcom relationship. The expo floor last night was buzzing. I mean, this is not a show that's looking like it's going to be, you know, going down. This is clearly a wave. We're calling it super cloud, multi-cloud's their theme. Clearly the cloud's happening. Not to date ourselves, but 2013 we were discussing on the-- >> We talked about that, yeah. >> HPE Discover about DevOps infrastructure as code. We're full realization now of that. This is where we're at. You guys had a great partnership with VMware and HPE. Talk about where you guys see this coming together because the customers are refactoring, they are looking at cloud native, the whole Broadcom visibility to the VMware customer bases activated them. They're here and they're leaning in. What's going on? >> Yeah absolutely, we're seeing a renewed interest now as customers are looking at their entire infrastructure, bottoms up all the way up the stack and the notion of a hybrid cloud, where you've got some visibility and control of your data and your infrastructure and applications. Customers want to live in that sort of a cloud environment. And so we're seeing a renewed interest, a lot of conversations we're having with customers now, a lot of customers committing to that model, where they have applications and workloads running at the edge in their data center and in the public cloud in a lot of cases. But having that mobility, having that control, being able to have security in their own control. There's a lot that you can do there. And obviously partnering with VMware. We've been partners for so long. >> 20 years, at least. >> At least 20 years. Back when they invented stuff. They were inventing way-- >> VMware's got a very technical culture, but Ricky, I got to say that we commented earlier when Ragu was on the CEO now CEO, I mean legendary product guy, set the trajectory to VMware, everyone knows that. I can't know whether it was VMware or HP, HP before HPE coined Hybrid. Cause you guys were both on, I can't recall Dave, which company coined it first, but it was either one of you guys. Nobody else was there. >> It was the partnership. (men chuckle) >> Hybrid Cloud I had a big thing with Pat Gelsinger, Dave. Remember when he said he got in my grill on theCube, live, but now you see. >> You focus on that multi-cloud aspect. So you've got a situation where our customers are looking at multi-cloud and they're looking at it, not just as a flash in the pan. This is here for five years, 10 years, 20 years. 
Okay. So what does that mean then to our partners and to our distributors? You're seeing a whole sea change. You're seeing partners now looking at this. Look at the OEMs, the ones that have historically been vSphere customers, now coming in and saying, okay, what is the next step? How can I be a multi-cloud partner with you? How can I look at other aspects that we're driving here together? GreenLake is a great example. We keep going back to GreenLake, and we are partaking in GreenLake at the moment. The real big thing for us is going to be making sure that we've got the agreements in place that support this SaaS and subscription motion going forward. And then the sky's the limit for us. >> You're plugging that right into. >> Well, here's why: customers are loving the fact that they can go to a public cloud and they can get an SLA. They come to on-premise, you've got the hardware, you've got the software, you've got the people on board to maintain this through its life cycle. I mean, this is complicated stuff. Now we've got a situation where you can say, hey, we can get an SLA on premise. >> And I think what you're seeing is very analogous to having a financial advisor just manage your portfolio; you're taking care of just submitting money. That's really a lot of what customers have done with the public cloud. But now a lot of these customers are getting savvy. They have been working with VMware technologies and HPE for so long, they've got expertise. They know how they want their workloads architected. Now we've given them a model where they can leverage the cloud platform to be able to do this, whether it's on premise, at the edge or in the public cloud, leveraging HPE GreenLake and VMware. >> Is it predominantly or exclusively a managed service, or do you find some customers saying, hey, we want to manage it ourselves? What are you seeing as the mix there? >> It is not predominantly managed services right now. As we're growing, last time we talked at HPE Discover, we talked about a whole bunch of new services that we've added to our catalog. It's growing by leaps and bounds. A lot of folks are definitely interested in the pay-as-you-go, obviously the financial model, but they are now getting exposed to all the other management that can happen. There are managed services capabilities, but actually running it as a service with your systems on-prem is a phenomenal idea for all these customers, and they're opening their eyes to some new ways to service their customers better. >> And another phenomenon we're seeing there is partners, such as HPE, using other partners for various areas of the services implementation as well. So that's another phenomenon: you're seeing the resale motion now going into a lot more of the services motion. >> It's interesting too. I mean, the digital modernization that's going on, the transformation, whatever you want to call it, is complicated, that's clear. One of the things I liked about the keynote today was the concept of cloud chaos, because we've been quoting Andy Grove, formerly of Intel: let chaos reign, and then rein in the chaos. And when you have inflection points, complexity, which is the chaos, needs to be solved, and whoever solves it and kicks the inflection point, that's up and to the right. >> So, prime idea right here. >> GreenLake is, well... >> Also look at the distribution model and how that's changed, a couple of points on a deal.
Now they're saying I'll be your aggregator. I'll take the strain and I'll give you scale. I'll give you VMware scale for all of the various different partners, et cetera. >> Yeah. So let's break this down because this is, I think a key point. So complexity is good, but the old model in the enterprise market was, you solve complexity with more complexity and everybody wins. Oh yeah, we're locked in. That's not what the market wants. They want self- service, they want as a service, they want easy, developer first security data ops. DevOps is already in the cycle. So they're going to want simpler, easier, faster. >> And this is kind of why I I'll say for the big announcement today here at VMware Explorer around the VMware vSphere distributed services engine, project Monterey that we've talked about for so long, HPE and VMware and AMD with the Pensando DPU actually work together to engineer a solution for exactly that. The capabilities are fairly straightforward in terms of the technologies, but actually doing the work to do integration, joint engineering, make sure that this is simple and easy and able to be running HPE GreenLake. >> We invested in Pensando right, we are investors. >> What's the benefit of that. That's a great point. You made what's the value to the customer bottom line, that deep, co-engineering, co-partnering, what is it deliver that others don't do? >> Yeah. Well, I think one example would be a lot of vendors can say we support it. >> Yep. That's great. That's actually a really good move, supporting it. It can be resold. That's another great move. I'm not mechanically inclined to where I would go build my own car. I'll go to a dealership and actually buy one that I can press the button and I can start it and I can do what I need to do with my car. And that's really what this does is the engineering work that's gone on between our two companies and AMD Pensando as well as the business work to make that simple and easy that transaction to work. And then to be able to make it available as a service is really what made, that's why it's such a winner here... >> But, it's also a lower cost out of the box. Yes. So you get in whatever it's called a 20%. Okay. But there's nuance because you're also on a new technology curve and you're able to absorb modern apps. We use that term as a promo, but when I say modern apps, I mean data, rich apps, things that are more AI driven. Not the conventional, not that people aren't doing, you know, SAP and CRM, they are. But, there's a whole slew of new apps that are coming in that traditional architectures aren't well suited to handle from a price performance standpoint. This changes that doesn't it? >> Well, you think also of going to the next stage, which is the go to market between the two organizations that before at the moment, HPE is running off doing various different things. We were running off to. Again, that chaos that you're talking about in cloud chaos, you got to go to market chaos, but by simplifying four or five things, what are we going to do really well together? How do we embed those in GreenLake and be known in the marketplace for these solutions? Then you get an organization that's really behind the go to market. You can help with sales, activation, the enablement. And then we benefit from the scale of HPE. >> Yeah. What are those solutions, I mean... Is it just, is it IS? Is it compute storage? Is it specific SAP? Is it VDI? What are you seeing out there? 
>> So right now for this specific technology, we're educating our customers on what that could be. And at its core, this solution allows customers to take services that normally and traditionally run on the compute system and run on a DPU now with project Monterey. And this is now allowing customers to think about where are their use cases. So I'm rather than going and say, use it for this. We're allowing our customers to explore and say, okay, here's where it makes sense. Where do I have workloads that are using a lot of compute cycles on services at the compute level? That could be somewhere else like networking as a great example, and allowing more of those compute cycles to be available. So where there are performance requirements for an application where there are timely response that's needed for results to be able to take action on, to be able to get insight from data really quick. Those are places where we're starting to see the services moving onto something like a DPU. And that's where this makes a whole lot more sense. >> Okay, so to get this right? You got the hybrid cloud, right? You got GreenLake and you got the distributed engine. What's that called? >> It's HPE Proliant Proliant with the VMware, VSphere. >> VSphere. That's the compute distributed. Okay. So does the customer, how do you guys implement that with the customer all three at the same time or they mix and match? How's that work? >> All three of those components. So the beauty of the HP Proliant with VMware vSphere distributed services engine also now is project Monterey for those that are keeping notes at home. Again already pre-engineered so we've already worked through all the mechanics of how you would have to do this. So it's not something you have to go figure out how you build, get deployment, work through those details. That's already done. It is available through HPE GreenLake. So you can go and actually get it as a service in partnership with our customer, our friends here at VMware. And because if you're familiar and comfortable with all the things that HP Proliant has done from a security perspective, from a reliability perspective, trusted supply chain, all those sorts of things, you're getting all of that with this particular solution. >> Sumit Dhawan had a great quote on theCube just a hour or so ago. He said you have to be early to be first. Love that quote. Okay. So you were first, you were probably a little early, but do you have a lead? I know you're going to say yes. Okay. Let's just assume that okay. Relative to the competition, how do you know? How do you determine that? >> If we have a lead or not? >> Yeah, if you lead, if you're the best. >> We go to the source of the truth, which is our customers. >> And what do they tell you? What do you look at and say, okay, now, I mean, when you have that honest conversation and say, okay, we are, we're first, we're early, we're keeping our lead. What are the things that you look at, as indicators? >> I'll say it this way. We've been in a lot of businesses where we do compete head-to-head in a lot of places and we know how that sales process normally works. We're seeing a different motion from our customers. When we talk about HPE GreenLake, there's not a lot of back and forth on, okay, well let me go shop around. It is HP GreenLake, let's talk about how we actually build this solution. >> And I can tell you from a VMware perspective, our customers are asking us for this the other way around. So that's a great sign. 
Is that, Hey, we need to see this partnership come together in GreenLake. >> Yeah. Okay. So you would concur with that? >> Absolutely. So third party validation. >> From Switzerland. Yeah. >> Bring it with you over here. >> We're talking about this earlier on, I mean, of course with I mentioned earlier on there's some contractual things that you've got to get in place as you are going through this migration into Sasson subscription, et cetera. And so we are working as hard as we can to make sure, Hey, let's really get this contract in place as quickly as possible, it's what the customers are asking us. >> We've been talking about this for years, you know, see containers being so popular. Now, Kubernetes becoming that layer of bringing people to bringing things together. It's the old adage that Amazon used to coin and Andy Jassy, they do the undifferentiated, heavy lifting. A lot of that's now that's now cloud operations. Underneath is infrastructure's code to the developer, right. That's at scale. >> That's right. >> And so you got a lot of heavy lifting being done with GreenLake. Which is why there's no objections probably. >> Right absolutely. >> What's the choice. What do you even shop? >> Yeah. There's nothing to shop around. >> Yeah, exactly. And then we've, that is really icing on the cake that we've, we've been building for quite some time. There is an understanding in the market that what we do with our infrastructure is hardened from a reliability and quality perspective. Times are tough right now, supply chain issues, all that stuff, we've talked about it. But at HPE, we don't skimp on quality. We're going to spend the dollars and time on making sure we got reliability and security built in. It's really important to us. >> We get a great use case, the storage team, they were provisioning with containers. Storage is a service, instantly. We're seeing with you guys with VMware, your customers bringing in a lot of that into the mix as well. I got to ask. Cause every event we talk about AI and machine learning, automation and DevOps are now infiltrating in with the Ci/CD pipeline security and data become a big conversation. >> Agreed. >> Okay. So how do you guys look at that? Okay. You sold me on green. I've been a big fan from day one. Now it's got maturity on it. I know it's going to get a lot more headroom to do there. It's still a lot of work to do, but directionally it's pretty accurate. It's going to be going to be success. There's still concerns about security, the data layer. That's agnostic of environment, private cloud hybrid, public and edge. So that's important and security has got a huge service area. These are a work in progress. How do you guys view those? >> I think you've just hit the nail on the head. I mean, I was in the press and journalist meetings yesterday and our answer was exactly the same. There is still so much work that can be done here. And I don't think anybody is really emerging as a true leader. It's just a continuation of trying to get that right. Because it is what is the most important thing to our customers. And the industry is really sort of catching up to that. >> And when you start talking about privacy and when you... It's not just about company information, it's about individuals information. It's about information that if exposed actually could have real impact on people. So it's more than just an IT problem. It is actually, and from HP's perspective, security starts from when we're picking our suppliers for our components. 
There are processes that we put into our entire trusted supply chain from the factory on the way up. I liken it to my golf swing, my golf swinging. I slice, right lik you wouldn't believe. But when I go to the golf pros, they start me back at the mechanics, the foundational pieces, here's where the problems are and start working on that. So my view is our view is if your infrastructure is not secure, you're going to have troubles with security as you go further up. >> Stay in the sandbox, so to speak, they're driving range on the golf analogy there. I love that. Talk about supply chain security real quick. Because you mentioned supply chain on the hardware side, you're seeing a lot of open source and supply chain in software trusted software. How does GreenLake look at that? How do you guys view that piece of it? That's an important part. >> Yeah, security is one of the key pillars that we're actually driving as a company right now. As I said, it's important to our customers as they're making purchasing decisions. And we're looking at it from the infrastructure all the way up to the actual service itself. And that's the beauty of having something like HP GreenLake, we don't have to pick is the infrastructure or the middle where, or the top of stack application, we can look at all of it. Yeah. It's all of it. That matters. >> Question on the ecosystem posture, so, I remember when HP was one company and then the GSIs were a little weird with HP because of EDS, you know, had data protector. So we weren't really chatting up Veeam at the time. And as soon as the split happened, ecosystem exploded. Now you have a situation where your Broadcom is acquiring VMware. You guys big Broadcom customer, has your attitude changed or has it not because, oh, we meet where the customers are. You've always said that, but have you have leaned in more? I mean, culturally is HPE, HPE now saying, hmm, now we have some real opportunities to partner in new ways that we don't have to sleep with one eye open, maybe. >> So I would some first of all, VMware and HPE, we've got a variety of different partners, we always have. If well, before any Broadcom announcement came along. We've been working with a variety of partners and that hasn't changed and that hasn't changed. And if your question is, has our posture toward VMware changed that all the answers absolutely not. We believe in what VMware is doing. We believe in what our customers are doing with VMware, and we're going to continue to work with VMware and partner with you. >> And of course we had to spin out ourselves in November of last year, which I worked on the whole Dell, whole Dell piece. >> But, you still had the same chairman. >> But since then, I think what's really become very apparent. And it's not just with HPE, but with many of our partners, many of the OEM partners, the opportunity in front of us is vast. And we need to rely on each other to help us solve the customer problems that are out there. So there's a willingness to overlook some things that in the past may have been barriers. >> But it's important to note also that it's not that we have not had history, right? Over... We've got over 200,000 customers join. >> Hundreds of millions of dollars of business. >> 100,000, over 10,000 or a 100,000 channel partners that we have in common. Numerous , numerous... >> And independent of the whole Broadcom overhang there, there's the ecosystem floor. Yeah, the expo floor. I mean, it's vibrant. I mean, there's clearly a wave coming. 
Ricky, we talked about this briefly at HPE Discover. I want to get an update from your perspective, both of you, if you don't mind weighing in on this. Clearly the wave, we're calling it supercloud, because it's not just multi-cloud; it's completely different-looking successes. >> Smart Cloud. >> It's not just vendors; it's also the customers turning into clouds themselves. You look at Goldman Sachs. I think every vertical will have its own power law of cloud players in the future. We believe that to be true; we're still testing that assumption, but it's trending that way. When OPEX has to go into the income statement and CapEx goes away thanks to the cloud, all that's good, but there's a wave coming and we're trying to identify it. What do you guys see as this wave, beyond multi-cloud and the obvious nature of that ending up happening as a state? And what happens beyond that interoperability piece? That's a whole other story, and that's what everyone's fighting for. But everyone out in that ecosystem sees a big wave coming; they've got their surfboards, they're ready to go. So what do you guys see? What is the next wave that everyone's jacked up about here? >> Well, I think multi-cloud is obviously at the epicenter. If you look at the results that are coming in, for a lot of our customers this is what's leading the discussion. And now we're in a position where we've bought many companies over the last few years; they're starting to come to fruition, they're starting to play a role in how we're moving forward. Some of those are a bit more applicable to the commercial space. We're finding commercial customers who have never bought from us before; hundreds and hundreds are coming through our partner networks every single quarter. So, brand new to VMware. The trick then is, how do you nurture them? How do you encourage them? >> So new logos are coming in? >> New logos are coming in all the time, all the time, from across the ecosystem. It's not just the OEMs, it's all the way back. >> So the ecosystem's back for VMware. >> Unbelievably. So what are we doing to help that? There are two big things that we've announced in recent weeks. One is Partner Connect 2.0. When I talk to you about multi-cloud and the multiple clouds the customers are doing, you see that trend: four or five different, separate clouds that we've got here. The next piece is that they're changing their business models with the partners. Services is becoming more and more apparent, and the use of other partners to do services deployment and that kind of work is becoming prevalent. Then you've got the distributors that I talked about. Then you route to market, then you route to business. So how do you encapsulate all of that and ensure you're rewarding partners on all aspects of it, whether it's deployment, whether it's test and dev? It's a points-based system we've put in place now. >> It's a big pie that's developing; the market's getting bigger. >> It's getting so much bigger.
And having that industry view, not just of a commercial customer at that level, but the two of us are looking at Telco, are looking at financial services, at healthcare, at manufacturing. How do these new ecosystem players fit into it? >> ... is lifting, everyone can see their position there. >> We're now being asked for simplicity and talk to me about partner profitability. How do I know where to focus my efforts? Am I've spread too thin? And my advice that a partner ecosystem out there is, Hey, let's pick out spots together. Let's really go to, and then strategic solutions that we were talking about is good example of that. >> Sounds like composability to me, but not to go back guys. Thanks for coming on. I think there's a big market there. I think the fog is lifted, people seeing their spot there's value there. Value creation equals reward. Yeah. Simplicity, ease of use. This is the new normal great job. Thanks for coming on sharing. Okay. Back live coverage after this short break with more day one coverage here from the blue set here in Moscone.

Published Date : Aug 31 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity (Category, Confidence)
Ricky Cooper (PERSON, 0.99+)
Joseph George (PERSON, 0.99+)
Telco (ORGANIZATION, 0.99+)
HP (ORGANIZATION, 0.99+)
GreenLake (ORGANIZATION, 0.99+)
Dell (ORGANIZATION, 0.99+)
five years (QUANTITY, 0.99+)
David Lante (PERSON, 0.99+)
VMware (ORGANIZATION, 0.99+)
Pat Gelsinger (PERSON, 0.99+)
AMD (ORGANIZATION, 0.99+)
OPEX (ORGANIZATION, 0.99+)
2013 (DATE, 0.99+)
Goldman Sachs (ORGANIZATION, 0.99+)
Amazon (ORGANIZATION, 0.99+)
Ricky (PERSON, 0.99+)
Four (QUANTITY, 0.99+)
20% (QUANTITY, 0.99+)
Andy Grove (PERSON, 0.99+)
HPE (ORGANIZATION, 0.99+)
CapEx (ORGANIZATION, 0.99+)
two companies (QUANTITY, 0.99+)
Broadcom (ORGANIZATION, 0.99+)
Dave (PERSON, 0.99+)
John Furrier (PERSON, 0.99+)
10 years (QUANTITY, 0.99+)
20 years (QUANTITY, 0.99+)
John (PERSON, 0.99+)
four (QUANTITY, 0.99+)
two (QUANTITY, 0.99+)
yesterday (DATE, 0.99+)
Andy Jassy (PERSON, 0.99+)
Sumit Dhawan (PERSON, 0.99+)
Moscone (LOCATION, 0.99+)
five things (QUANTITY, 0.99+)
HPA (ORGANIZATION, 0.99+)
two organizations (QUANTITY, 0.99+)
both (QUANTITY, 0.99+)
hundreds (QUANTITY, 0.99+)
Joseph George (PERSON, 0.99+)
Switzerland (LOCATION, 0.99+)
AMD Pensando (ORGANIZATION, 0.99+)
first (QUANTITY, 0.98+)
one (QUANTITY, 0.98+)
Pensando (ORGANIZATION, 0.98+)
one example (QUANTITY, 0.98+)
HPE Discover (ORGANIZATION, 0.98+)
12th year (QUANTITY, 0.98+)
today (DATE, 0.98+)
One (QUANTITY, 0.98+)
over 10,000 (QUANTITY, 0.98+)
Ragu (PERSON, 0.98+)
over 200,000 customers (QUANTITY, 0.98+)
two big things (QUANTITY, 0.97+)
last night (DATE, 0.96+)
VSphere (TITLE, 0.96+)
this year (DATE, 0.96+)

Haseeb Budhani, Rafay & Adnan Khan, MoneyGram | Kubecon + Cloudnativecon Europe 2022


 

>> Announcer: theCUBE presents "Kubecon and Cloudnativecon Europe 2022", brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to theCUBE's coverage of Kubecon 2022 EU. I'm here with my cohost, Paul Gillin. >> Pleased to work with you, Keith. >> Nice to work with you, Paul. And we have our first two guests. theCUBE is hot; I'm telling you, we are having interviews before the show floor even opens. I have with me, and we've got to start with the customers first, Enterprise Architect Adnan Khan. Welcome to the show. >> Thank you so much. >> First time on theCUBE, and now you're a CUBE alumni. >> Yup. >> And Haseeb Budhani, CEO of Rafay, welcome back. >> Nice to talk to you again today. >> So we're talking all things Kubernetes, and we're super excited to talk to MoneyGram about their journey to Kubernetes. First question I have for Adnan: talk to us about what your pre-Kubernetes landscape looked like. >> Yeah, certainly, Keith. So we had a traditional mix of legacy applications and modern applications. A few years ago we made the decision to move to a microservices architecture, and this was all happening while we were still on-prem, on traditional VMs. We started with 20 or 30 microservices, but with the microservices pattern you quickly expand to hundreds of microservices. And we started getting to the stage where managing them without an orchestration platform, just as traditional VMs, was getting to be really challenging, especially from a day-two operational standpoint. You can manage 10 or 15 microservices, but when you start having 50 and more, all those concerns around high availability and operational performance kick in. So we started looking at some open-source projects. We are predominantly a Java shop, so we looked at the Spring Cloud projects; they give you a number of tools for doing some of that management. And what we realized, again, was that managing those components without a platform was really challenging. So that kind of led us to Kubernetes, where, along with our journey to cloud, it was the platform that could help us with a lot of those management and operational concerns. >> As you talk about some of those challenges pre-Kubernetes, what were some of the operational issues that you folks experienced? >> Yeah, certain things like auto-scaling are number one. That's a fundamental concept of cloud native, right? How do you auto-scale VMs? You can put some methods in place yourself, but it was really hard to do that automatically. Kubernetes, with the HPA, the Horizontal Pod Autoscaler, gives you that out of the box. Provided you set the right policies, you can have auto-scaling where it scales up and scales back, and we were doing that manually before. At MoneyGram, obviously during the holiday season, or Mother's Day, people are sending more money. Our ops team would go and basically manually scale VMs, so we'd go from four instances to maybe eight instances, but that entailed outages, and just planning to do that manually, and then scaling back, was a lot of administration overhead. So we wanted something that could help us do that automatically, in an efficient and unintrusive way. That was one of the things; monitoring and management operations, just visibility into how those applications were doing and what the status of your workloads was, was also a challenge. >> So, Haseeb, I've got to ask the question.
If someone had come to me with that problem, I'd just say, you know what, go to the public cloud. How does your group help solve some of these challenges? What do you guys do? >> Yeah, what do we do? Here's my perspective on the market as it's playing out. I see a bifurcation happening in the Kubernetes space. There's the Kubernetes runtime, so Amazon has EKS, Azure has AKS. There are enough of these available as managed services, and they're actually really good, frankly. If you're on Amazon, why would you spin up your own? Just use EKS, it's awesome. But then there's an operational layer that is needed to run Kubernetes. My perspective is that 50,000 enterprises are adopting Kubernetes over the next five to ten years, and they're all going to go through the same exact journey, and they're all going to end up potentially making the same mistake, which is that they're going to assume Kubernetes is easy. They're going to say, well, this is not hard, I got this up and running on my laptop, this is so easy, no worries, I can do EKS. But then, okay, can you consistently spin these things up? Can you scale them consistently? Do you have the right blueprints in place? Do you have the right access management in place? Do you have the right policies in place? Can you deploy applications consistently? Do you have monitoring and visibility into those things? Do your developers have access when they need it? Do you have the right networking layer in place? Do you have the right chargebacks in place? Remember, you have multiple teams. And by the way, nobody has a single cluster, so you've got to do this across multiple clusters, and some of them have multiple clouds. Not because they want to be multi-cloud; sometimes you buy a company and they happen to be in Azure. How many dashboards do you have now across all the open-source technologies that you have identified to solve these problems? This is where the pain lies. So I think that Kubernetes itself is fundamentally a solved problem. Our friends at AWS and Azure have solved it with EKS, AKS, and GKE for that matter. They're great, and you should use them; don't even think about spinning up your own Kubernetes clusters. Don't do it; use the platforms that exist. And commensurately, on-premises, OpenShift is pretty awesome. If you like it, use it. But when it comes to the operations layer, that's where today we end up investing in a DevOps team, and then an SRE organization, that need to become experts in Kubernetes, and that is not tenable. Let's say you have unlimited capital, unlimited budgets: can you hire 20 people to do Kubernetes today? >> If you could find them. >> If you can find them, right? So even if you could, the point is that five years ago, when your competitors were not doing Kubernetes, it was a competitive advantage to go build a team to do Kubernetes so you could move faster. Today, there's a high chance that your competitors are already buying from a Rafay or somebody like Rafay. So now it's better to take these really, really sharp engineers and have them work on things that make the company money. Writing operations for Kubernetes is a commodity now. >> How confident are you that the cloud providers won't get in and do what you do and put you out of business? >> Yeah, I mean, absolutely. In fact, I had a conversation with somebody from HBS this morning, and I was telling them, I don't think you have a choice; you have to do this.
Competition is not a bad thing. If we are the only company in a space, this is not a space, right? The bet we are making is that every enterprise has an on-prem strategy, and everybody's got at least two clouds that they're thinking about. Everybody starts with one cloud, and then they have some other cloud that they're also thinking about. For them to rely only on one cloud's tools to solve for on-prem, plus that second cloud they may potentially have, that's a tough thing to do. And at the same time, we as a vendor, I mean, the only real reason why startups survive is because you have technology that is truly a differentiator. Otherwise, I mean, you've got to build something that is materially interesting, right? We seem to have- >> Keith: Now. Sorry, go ahead. >> No, I was going to say, you actually have me thinking about something. Adnan? >> Yes. >> MoneyGram, a big, well-known company, and Rafay, a startup, working in a space with Google, VMware, all the biggest names. What brought you to Rafay to solve this operational challenge? >> Yeah. A good question. So, when we started out on our Kubernetes journey, we had heard about EKS, and we are an AWS shop, so that was the most natural path. And we looked at EKS and used that to create our clusters. But then we realized very quickly that, yes, to Haseeb's point, AWS manages the control plane for you and gives you the high availability, so you're not managing those components, which is some really heavy lifting. But then what about all the other things, like a centralized dashboard? What about needing to provision Kubernetes clusters across multiple clouds, right? We have other clouds that we use, and also on-prem, right? How do you do some of that stuff? At that time we were also looking at other tools. And I remember coming up with an MVP list that we needed to have in place for day-one or day-two operations before we even launched a single application into production. And my Ops team looked at that list and literally, there were only one or two items that they could check off with EKS. They've got the control plane, they've got the cluster provisioning, but what about all those other components? And some of that kind of led us down the path of, you know, looking at, "Hey, what's out there in this space?" And we realized pretty quickly that there weren't too many. There were some large providers and capabilities like Anthos, but we felt that it was a little too much for what we were trying to do at that point in time. We wanted to scale slowly. We wanted to minimize our footprint, and Rafay was a nice mix from all those different angles. >> How was the situation affecting your developer experience? >> So, that's a really good question also. So, operations was one aspect of it. The other part is the application development. MoneyGram, like a lot of organizations, has a plethora of technologies, from Java to .NET to Node.js, what have you, right? Now, as you start saying, okay, we're going cloud native and we're going to start deploying to Kubernetes, there's a fair amount of overhead, because the tech stack all of a sudden goes from just being Java or just being .NET to things like Docker. All these container orchestration and deployment concerns, Kubernetes deployment artifacts. (chuckles) I've got to write all this YAML, as my developers say, "YAML hell." (panel laughing) I've got to learn Dockerfiles.
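To make the "YAML hell" Adnan describes concrete, here is a minimal sketch of the kind of deployment artifact every service team suddenly owns. The service name, image, and resource numbers are hypothetical, purely for illustration; a real manifest would also carry probes, secrets, service accounts, and network policy.

```yaml
# Minimal Deployment for one hypothetical microservice (names and values are illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: transfer-service
  labels:
    app: transfer-service
spec:
  replicas: 4
  selector:
    matchLabels:
      app: transfer-service
  template:
    metadata:
      labels:
        app: transfer-service
    spec:
      containers:
        - name: transfer-service
          image: registry.example.com/transfer-service:1.4.2  # hypothetical image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 1Gi
```

Every service needs something like this, plus a Service, ingress rules, and usually a Helm chart wrapping it all, which is the learning curve Adnan goes on to describe.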
I need to figure out a package manager like Helm on top of learning all the Kubernetes artifacts. So, initially, we went with, okay, you know, we can just train our developers. And that was wrong. I mean, you can't assume that everyone is going to learn all these deployment concerns and adopt them. There's a lot of stuff that's outside of their core dev domain, and you're putting all this burden on them. So, we could not rely on them to be kubectl experts, right? That's a fair amount of overhead and learning curve there. So, Rafay again, from their dashboard perspective, with the managed kubectl, gives you that easy access for devs, where they can go and monitor the status of their workloads. They don't have to figure out configuring all these tools locally just to get it to work. We did some things from a DevOps perspective to basically streamline and automate that process. But then Rafay also came in and helped us out by providing that dashboard. The developers don't have to do any of that; they can basically get on through single sign-on and have visibility into the status of their deployments. They can do troubleshooting and diagnostics all through a single pane of glass, which was a key item. Initially, before Rafay, we were doing that at the command line. And again, just getting some of the tools configured was huge, it took us days just to get that. And then there's the learning curve for development teams: "Oh, now you've got the tools, now you've got to figure out how to use them." >> So, Haseeb, talk to me about the cloud native infrastructure landscape. When I look at that entire landscape, the sheer number of projects, I'm just overwhelmed by it. As a customer, I look at it, I'm like, "I don't know where to start." I'm sure, Adnan, you folks looked at it and said, "Wow, there's so many solutions." How do you engage with the ecosystem? You have to be at some level opinionated but flexible enough to meet every customer's needs. How do you approach that? >> So, it's a really tough problem to solve because... So, the thing about abstraction layers, we all know how that plays out, right? Abstraction layers are fundamentally never the right answer because they will never catch up, because you're trying to write a layer on top. So, then we had to solve the problem, which was, well, we can't be an abstraction layer, but at the same time, we need to provide some sort of centralization and standardization. So, we have the following dissonance in our platform, which is actually really important to solving the problem. We think of the stack as four things. There's the Kubernetes layer, the infrastructure layer, and EKS is different from AKS, and that's okay. If we try to bring them all together and make them behave as one, our customers are going to suffer. Because there are features in EKS that I really want, but if you write an abstraction, then I'm not going to get them, so that's not okay. So, treat them as individual things that we now curate. Every time EKS, for example, goes from 1.22 to 1.23, we build that into the product, just so my customer can press a button and upgrade these clusters. Similarly, we do this for AKS, we do this for GKE. It's a really, really hard job, but that's the job, we've got to do it. On top of that, you have these things called add-ons, like my network policy, my access management policy, et cetera. These things are all actually the same. So, whether I'm on EKS or AKS, I want the same access for Keith versus Adnan, right?
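Haseeb's point about add-ons, "the same access for Keith versus Adnan" on every cluster, maps naturally to RBAC objects that are applied unchanged whether the cluster underneath is EKS, AKS, or OpenShift. A hedged sketch, with invented group and role names:

```yaml
# Read-only access for an application team, applied identically to every cluster in the fleet.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: app-team-readonly
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "pods/log", "services", "deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: app-team-readonly-binding
subjects:
  - kind: Group
    name: app-developers        # hypothetical identity-provider group mapped via SSO
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: app-team-readonly
  apiGroup: rbac.authorization.k8s.io
```

The operational-layer argument is that someone, or something, has to push objects like these to every cluster and reconcile drift, rather than hand-applying them one cluster at a time.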
So, then those components are sort of the same across the board, doesn't matter how many clusters, doesn't matter how many clouds. On top of that, you have applications. And when it comes to the developer, in fact, I do the following demo a lot of times, because people ask the question. People say things like, "I want to run the same Kubernetes distribution everywhere because this is like Linux." Actually, it's not. So, I do a demo where I spin up access to an OpenShift cluster, an EKS cluster, and then an AKS cluster. And I say, "Log in, show me which one is which." They're all the same. >> So, Adnan, make that real for me. I'm sure after this amount of time, developer groups have come to you with things that are snowflakes. And as an enterprise architect, you have to make it work within your framework. How has working with Rafay made that possible? >> Yeah, so I think one of the very common concerns, to Haseeb's point, is the whole deployment piece. From a deployment perspective, it's still using Helm, it's still using some of the same tooling. How do you do it? Rafay gives us some tools. You know, they have a command line and a kubectl API that we essentially use. We wanted parity across all our different environments, different clusters, it doesn't matter where you're running. So, that gives us basically a consistent API for deployment. We've also had challenges with just some of the tooling in general, and we actually worked with Rafay to extend that API for us so that we have a better deployment experience for our developers. >> Haseeb, how long does this opportunity exist for you? At some point, do the cloud providers figure this out, or does the open-source community figure out how to do what you've done and this opportunity is gone? >> So, I think back to a platform that I think very highly of, which has been around a long time and continues to live: vCenter. I think vCenter is awesome. It's beautiful, VMware did an incredible job. What is its job? Its job is to manage VMs, right? But it's also access, it's also storage, it's also networking and security, right? All these things got done because to solve a real problem, you have to think about all the things that come together to help you solve that problem from an operations perspective. My view is that this market needs essentially a vCenter, but for Kubernetes, right? And that is a very broad problem. And it's going to span clouds, it's not about one cloud. I mean, every cloud should build this. Why would they not? It makes sense. Anthos exists, right? Everybody should have one. But then, the clarity in thinking that the Rafay team seems to have exhibited to date seems to merit an independent company, in my opinion. I mean, from a technical perspective, this product's awesome, right? We seem to have no real competition when it comes to this broad breadth of capabilities. Will it last? We'll see, right? I mean, I keep doing "CUBE" shows, right? So, every year you can ask me that question again, and we'll see. >> You make a good point though. I mean, you're up against VMware, you're up against Google. They're both trying to do sort of the same thing you're doing. Why are you succeeding? >> Maybe it's focus. Maybe it's because of the right experience. With startups, only in hindsight can one tell why a startup was successful. In all honesty, I've been in one or two startups in the past, and there's a lot of luck to this, there's a lot of timing to this.
I think the timing for a product like this is perfect. Three, four years ago, nobody would've cared. Honestly, nobody would've cared. This is the right time to have a product like this in the market, because so many enterprises are now thinking of modernization. And because everybody's doing this, it's like the bootstrapping problem we saw in HCI. Everybody's doing it, but there are only so many people in the industry who actually understand this problem, so they can't even hire the people. And the CTOs say, "I've got to go do this, but I don't have the people, I can't fill the seats." And then they look for solutions, and via that solution, we're going to get embedded. And when you have infrastructure software like this embedded in your solution, we're going to be around... Assuming, obviously, we don't screw up, right? We're going to be around with these companies for some time. We're going to have strong partners for the long term. >> Well, vCenter for Kubernetes, I love to end on that note. Intriguing conversation, we could go on forever on this topic, 'cause there's a lot of work to do. I don't think this will ever be a fully solved problem for Kubernetes and cloud native solutions, so I think there's a lot of opportunity in that space. Haseeb Budhani, thank you for rejoining "theCUBE." Adnan Khan, welcome to becoming a CUBE alum. >> (laughs) Awesome. Thank you so much. >> Check out your own profile on theCUBE's website, it's really cool. From Valencia, Spain, I'm Keith Townsend, along with my host Paul Gillin. And you're watching "theCUBE," the leader in high tech coverage. (bright upbeat music)

Published Date : May 19 2022

Matt Provo & Patrick Bergstrom, StormForge | Kubecon + Cloudnativecon Europe 2022


 

>> Announcer: "theCUBE" presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain and we're at KubeCon, CloudNativeCon Europe 2022. I'm Keith Townsend, and my co-host, Enrico Signoretti. Enrico's really proud of me. I've called him Enrico instead of Enrique every session. >> Every day. >> Senior IT analyst at GigaOm. We're talking to fantastic builders at KubeCon, CloudNativeCon Europe 2022 about the projects and their efforts. Enrico, up to this point, it's been all about provisioning and security. What conversation have we been missing? >> Well, I mean, I think that we passed the point of having the conversation of deployment, of provisioning. Everybody's very skilled, and actually everything is done at day two. They are discovering that, well, there is a security problem, there is an observability problem, and in fact, we are meeting with a lot of people and there are a lot of conversations with people really needing to understand what is happening in their clusters, why it is happening, and all the questions that come with it. And the more I talk with people on the show floor here, or even in the various sessions, it's about: we are growing, our clusters are becoming bigger and bigger, and applications are becoming bigger as well. So we need to understand better what is happening. And it's not only about cost, it's about everything in the end. >> So I think that's a great setup for our guests, Matt Provo, founder and CEO of StormForge and Patrick Brixton? >> Bergstrom. >> Bergstrom. >> Yeah. >> I spelled it right, I didn't say it right, Bergstrom, CTO. We're at KubeCon, CloudNativeCon, where projects are discussed and built, and StormForge, I've heard the pitch before, so forgive me. And I'm kind of torn. I have service mesh. What do I need more? Like, what problem is StormForge solving? >> You want to take it? >> Sure, absolutely. So it's interesting because my background is in the enterprise, right? I was an executive at UnitedHealth Group, and before that I worked at Best Buy, and one of the issues that we always had was, especially as you migrate to the cloud, it seems like the CPU dial or the memory dial is your reliability dial. So it's like, oh, I just turned that all the way to the right and everything's hunky-dory, right? But then we run into the issue, like you and I were just talking about, where it gets very, very expensive very quickly. And so in my first conversations with Matt and the StormForge group, they were telling me about the product and what we're dealing with, and I said, that is the problem statement that I have always struggled with, and I wish this had existed 10 years ago when I was dealing with EC2 costs, right? And now with Kubernetes, it's the same thing. It's so easy to provision. So realistically what it is, is we take your raw telemetry data and we essentially monitor the performance of your application, and then we can tell you, using our machine learning algorithms, the exact configuration that you should be using for your application to achieve the results that you're looking for without over-provisioning. So we reduce your consumption of CPU and memory in production, which ultimately, nine times out of 10, actually I would say 10 out of 10, reduces your cost significantly without sacrificing reliability. >> So can your solution also help to optimize the application in the long run? Because, yes, of course-- >> Yep.
>> The low-hanging fruit is, you know, optimizing the deployment. >> Yeah. >> But actually the long term is optimizing the application. >> Yes. >> Which is the real problem. >> Yep. >> So, we're fine with the former of what you just said, but we exist to do the latter. And so, we're squarely and completely focused at the application layer. As long as you can track or understand the metrics you care about for your application, we can optimize against it. We love that we don't know your application, we don't know what the SLA and SLO requirements are for your app, you do, and so, in our world it's about empowering the developer into the process, not automating them out of it, and I think sometimes AI and machine learning sort of gets a bad rap from that standpoint. And so, at this point the company's been around since 2016, kind of from the very early days of Kubernetes. We've always been squarely focused on Kubernetes, using our core machine learning engine to optimize metrics at the application layer that people care about and need to go after. And the truth of the matter is, today and over time, setting a cluster up on Kubernetes has largely been solved. And yet the promise of Kubernetes around portability and flexibility, downstream when you operationalize, the complexity smacks you in the face, and that's where StormForge comes in. And so we're a vertical, kind of vertically oriented solution, that's absolutely focused on solving that problem. >> Well, I don't want to play, actually, I want to play the devil's advocate here and-- >> You wouldn't be a good analyst if you didn't. >> So the problem is, when you talk with clients, users, there are many of them still working with Java, something that is really tough. I mean, all of us loved Java. >> Yeah, absolutely. >> Maybe 20 years ago. Yeah, but not anymore, but still they have developers, they are porting applications, microservices, yes, but not very optimized, et cetera, et cetera. So it's becoming tough. So how can you interact with these kinds of old, hybrid, or anyway not well-engineered applications? >> Yeah. >> We do that today. Part of our platform is we offer performance testing in a lower environment, in stage, and, like Matt was saying, we can use any metric that you care about and we can work with any configuration for that application. So a perfect example is Java: you have to worry about your heap size, your garbage collection tuning. And one of the things that really struck me very early on about the StormForge product is that because it is true machine learning, you remove the human bias from that. So, like a lot of what I did in the past, especially around SRE and performance tuning, we were only as good as our humans were, because of what they knew. And so we kind of got stuck in these paths of making the same configuration adjustments, making the same changes to the application, hoping for different results. But then when you apply machine learning capability to that, the machine will recommend things you never would've dreamed of. And you get amazing results out of that. >> So both Enrico and I have been doing this for a long time. Like, I have battled to my last breath the argument, when it's bare metal or a VM, of look, I cannot give you any more memory. >> Yeah. >> And the argument going all the way up to the CIO, and the CIO basically saying, you know what, Keith, you're cheap, my developer resources are expensive, buy a bigger box. >> Yeah. >> Yep.
>> Buying a bigger box in the cloud, to your point, is no longer an option because it's just expensive. >> Yeah. >> Talk to me about the carrot or the stick as developers are realizing that they have to be more responsible. Where's the culture change coming from? Is it the shift in responsibility? >> I think the center of the bullseye for us is within those sets of decisions, not in a static way, but in an ongoing way, especially as the development of applications, and the management of them, becomes more and more rapid. Our charge and our belief wholeheartedly is that you shouldn't have to choose. You should not have to choose between cost and performance. You should not have to choose where your applications live, in a public, private, or hybrid cloud environment. And so, we want to empower people to be able to sit in the middle of all of that chaos and for those trade-offs and those difficult interactions to no longer be a thing. We're at a place now where we've done hundreds of deployments, and never once have we met a developer who said, "I'm really excited to get out of bed and come to work every day and manually tune my application." That's one side. Secondly, we've never met a manager or someone with budget that said, please don't increase the value of the investment that I've made to lift and shift us over to the cloud or to Kubernetes or some combination of both. And so what we're seeing is the converging of these groups; their happy place is the lack of needing to make those trade-offs, and that's been exciting for us. >> So, I'm listening, and it looks like your solution is right in the middle of application performance management, observability. >> Yeah. >> And monitoring. >> Yeah. >> So it's a little bit of all of this. >> Yeah, so we want to be the Intel Inside of all of that. We often get lumped into one of those categories, it used to be APM a lot, we sometimes get, are you observability, and we're really not any of those things in and of themselves. Instead, we've invested in deep integrations and partnerships with a lot of that tooling, 'cause in a lot of ways the tool chain is hardening in a cloud native and Kubernetes world. And so, integrating in intelligently, staying focused and great at what we solve for, but then seamlessly partnering and not requiring switching for our users, who have likely already invested in APM or observability. >> So to go a little bit deeper, what does integration mean? I mean, do you provide data to these other applications in the environment, or are they supporting you in the work that you do? >> Yeah, we're a data consumer for the most part. In fact, one of our big taglines is take your observability and turn it into actionability, right? It's one thing to collect all of the data, but then how do you know what to do with it, right? So to Matt's point, we integrate with folks like Datadog, we integrate with Prometheus today. So we want to collect that telemetry data and then do something useful with it for you. >> But also we want Datadog customers, for example, we have a very close partnership with Datadog, so that in your existing Datadog dashboard, now you have-- >> Yeah. >> The StormForge capability showing up in the same location. >> Yep. >> And so you don't have to switch out. >> So I was just going to ask, is it a push or a pull? What is the developer experience when you say you provide developers these ML learnings about performance, how do they receive them?
Like, what's the developer experience? >> They can receive it a few ways. For a while we were CLI only, like any good developer tool. >> Right. >> And we have our own UI. And so it is a push in a lot of cases, where I can come to one spot, I've got my applications, and every time I'm going to release, or plan for a release, or I have released and I want to pull in observability data from a production standpoint, I can visualize all of that within the StormForge UI and platform and make decisions. We allow you to set your kind of comfort level of automation that you're okay with. You can be completely set-and-forget, or you can be somewhere along that spectrum, and you can say, as long as it's within these thresholds, go ahead and release the application, or go ahead and apply the configuration. But we also allow you to experience a lot of the same functionality right now in Grafana, in Datadog, and a bunch of others that are coming. >> So I've talked to Tim Crawford, who talks to a lot of CIOs, and he's saying one of the biggest challenges, if not the biggest challenge, CIOs are facing is resource constraints. >> Yeah. >> They cannot find the developers to begin with to get this feedback. How are you hoping to address this biggest pain point for CIOs-- >> Yeah. >> And developers? >> You should take that one. >> Yeah, absolutely. So like my background, like I said, at UnitedHealth Group, right, it's not always just about cost savings. In fact, the way that I look at some of these tech challenges, especially when we talk about scalability, there are kind of three pillars that I consider, right? There's the tech scalability, how am I solving those challenges? There's the financial piece, 'cause you can only throw money at a problem for so long, and it's the same thing with the human piece. I can only find so many bodies, and right now that pool is very small. And so, we are absolutely squarely in that footprint of: we enable your team to focus on the things that matter, not manual tuning, like Matt said. And then there are other resource constraints that I think a lot of folks don't talk about too. Like, you were talking about private cloud, for instance, and so having a physical data center. I've worked with physical data centers that companies I've worked for have owned where it is literally full, wall to wall. You can't rack any more servers in it, and so their biggest option is, well, I could spend $1.2 billion to build a new one if I wanted to. Or, if you had a capability to truly optimize your compute to what you needed and free up 30% of the capacity of that data center, so you can deploy additional namespaces into your cluster, like that's a huge opportunity. >> So I have another question. I mean, maybe it doesn't sound very intelligent at this point, but, so is it an ongoing process, or is it something that you do at the very beginning, I mean, when you start deploying this? >> Yeah. >> And maybe as a service. >> Yep. >> Once a year I say, okay, let's do it again and see if something changed. >> Sure. >> So one spot, one single... >> Yeah, would you recommend somebody performance test just once a year?
Like, so that's my thing: in previous roles, my role was to performance test every single release, and that was at a minimum once a week, and if your thing did not get faster, you had to have an executive exception to get it into production. And that's the space that we want to live in as well, as part of your CI/CD process. This should be continuous verification: every time you deploy, we want to make sure that we're recommending the perfect configuration for your application in the namespace that you're deploying into. >> And I would be as bold as to say that we believe we can be a part of adding, actually adding, a step in the CI/CD process that's connected to optimization, and that no application should be released, monitored, and analyzed on an ongoing basis without optimization being a part of that. And again, not just from a cost perspective, but for cost and performance. >> Almost a couple of hundred vendors on this floor. You mentioned some of the big ones, Datadog, et cetera, but what happens when one of the up-and-comers comes out of nowhere, completely new data structure, some imaginative way to collect telemetry data? >> Yeah. >> How do you react to that? >> Yeah, to us it's zeros and ones. >> Yeah. >> And we really are data agnostic from that standpoint. We're fortunate enough, from the design of our algorithm standpoint, that it doesn't get caught up on data structure issues, as long as you can capture it and make it available through one of a series of inputs: one would be load or performance tests, could be telemetry, could be observability, if we have access to it. Honestly, the messier the better from time to time from a machine learning standpoint, it's pretty powerful to see. We've never had a deployment where we saved less than 30% while also improving performance by at least 10%. But the typical results for us are 40 to 60% savings and 30 to 40% improvement in performance. >> And what happens if the application, I mean, yes, Kubernetes is the best thing in the world, but sometimes we have external data sources, or we have to connect with external services anyway. >> Yeah. >> So, can you provide an indication also on this particular application, like, where the problem could be? >> Yeah. >> Yeah, and that's absolutely one of the things that we look at too, 'cause especially when you talk about resource consumption, it's never a flat line, right? Depending on your application, depending on the workloads that you're running, it varies sometimes from minute to minute, day to day, or it could be week to week even. And so, especially with some of the products that we have coming out, what we want to do is integrate heavily with the HPA and be able to handle some of those bumps, and not necessarily bumps, but bursts, and be able to do it in a way that's intelligent, so that we can make sure that, like I said, it's the perfect configuration for the application regardless of the time of day that you're operating in, or what your traffic patterns look like, or what your disk looks like, right? 'Cause with our lower-environment testing, any metric you throw at us, we can optimize for. >> So Matt and Patrick, thank you for stopping by. >> Yeah. >> Yes. >> We could go all day, because day two is, I think, the biggest challenge right now, not just in Kubernetes but application re-platforming and transformation, very, very difficult. For most CTOs and EAs that I talk to, this is the challenge space.
From Valencia, Spain, I'm Keith Townsend, along with my host Enrico Signoretti, and you're watching "theCUBE," the leader in high-tech coverage. (whimsical music)
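For readers who want to picture what "the exact configuration" being tuned looks like in practice, it is usually the resource block on each container, plus, for Java workloads, the heap settings Patrick mentions. The numbers below are invented to show the shape of such a recommendation; they are not actual StormForge output.

```yaml
# Illustrative right-sizing of one container; the before/after values are made up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
spec:
  replicas: 3
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: checkout
          image: registry.example.com/checkout:2.1.0  # hypothetical image
          env:
            - name: JAVA_TOOL_OPTIONS      # heap sized to fit within the memory limit
              value: "-Xms512m -Xmx768m"
          resources:
            requests:
              cpu: 300m      # e.g. down from 1000m after analysis
              memory: 1Gi    # e.g. down from 2Gi
            limits:
              cpu: 600m
              memory: 1536Mi
```

Applied across hundreds of workloads, changes of this shape are where the savings-without-lost-performance claims in the conversation above come from.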

Published Date : May 19 2022

Haseeb Budhani, Rafay & Adnan Khan, MoneyGram | Kubecon + Cloudnativecon Europe 2022


 

>>The cube presents, Coon and cloud native con Europe 22, brought to you by the cloud native computing foundation. >>Welcome to the cube coverage of CubeCon 2022 EU. I'm here with my cohost Paul Gill. Please work with you, Keith. Nice to work with you, Paul. And we have our first two guests. The cube is hot. I'm telling you we are having interviews before the start of even the show floor I have with me. We gotta start with the customers first enterprise architect, a non-con Aon con. Welcome to the show. >>Thank you so >>Much. Cube time cube time. First now you're at cube alumni. Yep. <laugh> and, and, uh, has Havani CEO. Arai welcome back. Nice to, >>Uh, >>Talk to you again today. So we're talking all things Kubernetes and we're super excited to talk to MoneyGram about their journey to Kubernetes. First question I have for Anon. Talk to us about what your pre Kubernetes landscape looked like. >>Yeah, certainly. Uh, Keith, so, um, we had a, uh, you know, a traditional mix of legacy applications and modern applications. Uh, you know, a few years ago we made the decision to move to a microservices architecture. Um, and this was all happening while we were still on prem. Right? So your traditional VMs, um, and you know, we started 20, 30 microservices, but with the microservices packing, you know, you quickly expand to hundreds of microservices. Um, and we started getting to that stage where managing them without sort of an orchestration platform, uh, and just as traditional VMs was getting to be really challenging, right. Uh, especially from a day two operational, uh, you know, you can manage 10, 15 microservices, but when you start having 50 and so forth, um, all those concerns around, uh, you know, high availability, operational performance. Um, so we started looking at some open source projects, you know, spring cloud. Uh, we are predominantly a Java, um, shop. So we looked at the spring cloud projects. Uh, they give you a number, uh, you know, of initiatives, um, for doing some of those, um, management and what we realized again, to manage those components, um, without sort of a platform was really challenging. So that, that kind of led us to sort of Kubernetes where, um, along with our journey cloud, uh, it was the platform that could help us with a lot of those management operational concerns. >>So as you talk about some of those challenges, pre Kubernetes, what were some of the operational issues that you folks experienced? >>Yeah. You know, uh, certain things like auto scaling is, is number one, right? I mean, that's a fundamental concept of cloud native, right. Is, um, how do you auto scale VMs? Right. Uh, you can put in some old methods and stuff, but, uh, it was really hard to do that automatically. Right. So, uh, Kubernetes with like HPA gives you those out of the box, right? Provided you set the right policies. Uh, you can have auto scaling, uh, where it can scale up and scale back. So we were doing that manually. Right. So before, uh, you know, MoneyGram, obviously, you know, holiday season, people are sending more money mother's day. Um, our ops team would go in basically manually scale, uh, VMs. Right. So we'd go from four instances to maybe eight instances. Right. Uh, but, but that entailed outages. Right. Um, and just to plan around doing that manually and then sort of scale them back was a lot of overhead, a lot of administration overhead. Right. So, uh, we wanted something that could help us do that automatically right. In a, in an efficient, uh, unintrusive way. 
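The HPA behavior Adnan contrasts with manual VM scaling comes down to a simple control loop: desired replicas are derived from the ratio of observed utilization to the target. Here is a worked sketch of that published formula in Python, with invented numbers standing in for the holiday-season spikes he mentions:

```python
# The core HorizontalPodAutoscaler calculation:
#   desired = ceil(current_replicas * current_metric / target_metric)
# Numbers below are illustrative, not MoneyGram's actual workloads.
import math


def desired_replicas(current_replicas, current_utilization, target_utilization,
                     min_replicas=2, max_replicas=20):
    raw = current_replicas * (current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, math.ceil(raw)))


# Normal traffic: 4 replicas at 55% CPU against a 60% target stays at 4.
print(desired_replicas(4, 55, 60))   # 4

# Holiday spike: 4 replicas at 130% CPU scales out to 9, with no outage.
print(desired_replicas(4, 130, 60))  # 9

# Quiet overnight period: 9 replicas at 20% CPU scales back in to 3.
print(desired_replicas(9, 20, 60))   # 3
```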
So, so, you know, that was one of the things, uh, monitoring, um, and, and management, uh, operations, you know, just kind of visibility into how those applications were during, what were the status of your, um, workloads was also a challenge, right. Uh, to do that. >>So, cause see, I gotta ask the question. If someone would've came to me with that problem, I'd just say, you know, what, go to the plug, the cloud, what, how does, uh, your group help solve some of these challenges? What do you guys do? >>Yeah. What, what do we do? So here's my perspective on the market as it's playing out. So I see a bifurcation happening in the Kubernetes space, but there's the Kubernetes run time. So Amazon is EKS Azure as EKS, you know, there's enough of these available. They're not managed services. They're actually really good, frankly. Right? In fact, retail customers, if you're an Amazon, why would you spin up your own? Just use EK. It's awesome. But then there's an operational layer that is needed to run Kubernetes. Uh, my perspective is that, you know, 50,000 enterprises are adopting Kubernetes over the next five to 10 years. And they're all gonna go through the same exact journey and they're all gonna end up, you know, potentially making the same mistake, which is, they're gonna assume that Kubernetes is easy. <laugh> they're gonna say, well, this is not hard. I got this up and running on my laptop. >>This is so easy. No worries. Right. I can do key gas, but then, okay. Can you consistently spin up these things? Can you scale them consistently? Do you have the right blueprints in place? Do you have the right access management in place? Do you have the right policies in place? Can you deploy applications consistently? Do you have monitoring and visibility into those things? Do your developers have access to when they need it? Do you have the right networking layer in place? Do you have the right chargebacks in place? Remember you have multiple teams and by the way, nobody has a single cluster. So you gotta do this across multiple clusters. And some of them have multiple clouds, not because they wanna be multiple clouds because, but sometimes you buy a company and they happen to be in Azure. How many dashboards do you have now across all the open source technologies that you have identified to solve these problems? >>This is where pain lies. So I think that Kubernetes is fundamentally a solve problem. Like our friends at AWS and Azure they've solved this problem. It's like a KSKS et cetera, GK for that matter. They're they're great. And you should use them and don't even think about spinning up Q B and a best clusters. Don't do it. Use the platforms that exist and commensurately on premises. OpenShift is pretty awesome, right? If you like it, use it. But then when it comes to the operations layer, right, that's where today we end up investing in a DevOps team and then an SRE organization that need to become experts in Kubernetes. And that is not tenable, right? Can you let's say unlimited capital unlimited budgets. Can you hire 20 people to do Kubernetes today? >>If you could find them, if >>You can find 'em right. So even if you could, the point is that see, five years ago, when your competitors were not doing Kubernetes, it was a competitive advantage to go build a team to do Kubernetes. So you could move faster today. You know, there's a high chance that your competitors are already buying from a Rafa or somebody like Rafa. 
So now it's better to take these really, really sharp engineers and have them work on things that make the company money, writing operations for Kubernetes. This is a commodity. Now >>How confident are you that the cloud providers won't get in and do what you do and put you out of business? >>Yeah, I mean, absolutely. I think, I mean, in fact, I, I had a conversation with somebody from HBS this morning and I was telling them, I don't think you have a choice. You have to do this right. Competition is not a bad thing. Right? This, the, >>If we are the only company in a space, this is not a space, right. The bet we are making is that every enterprise has, you know, they have an on-prem strategy. They have at least a handful of, everybody's got at least two clouds that they're thinking about. Everybody starts with one cloud and then they have some other cloud that they're also thinking about, um, for them to only rely on one cloud's tools to solve for on-prem plus that second cloud, they potentially, they may have, that's a tough thing to do. Um, and at the same time we as a vendor, I mean the only real reason why startups survive is because you have technology that is truly differentiated, right. Otherwise, right. I mean, you gotta build something that is materially. Interesting. Right. We seem to have, sorry, go ahead. >>No, I was gonna ask you, you actually had me thinking about something, a non yes. MoneyGram big, well known company, a startup, adding, working in a space with Google, VMware, all the biggest names. What brought you to Rafi to solve this operational challenge? >>Yeah. Good question. So when we started out sort of in our Kubernetes, um, you know, we had heard about EKS, uh, and, and we are an AWS shop. So, uh, that was the most natural path. And, and we looked at, um, EKS and, and used that to, you know, create our clusters. Um, but then we realized very quickly that yes, toe's point AWS manages the control plane for you. It gives you the high availability. So you're not managing those components, which is some really heavy lifting. Right. Uh, but then what about all the other things like, you know, centralized dashboard, what about, we need to provision, uh, Kubernetes clusters on multi-cloud right. We have other clouds that we use, uh, or also on prem. Right. Um, how do you do some of that stuff? Right. Um, we, we also, at that time were looking at, uh, other, uh, tools also. >>And I had, I remember come up with an MVP list that we needed to have in place for day one or day two, uh, operations, right. To before we even launch any single applications into production. Um, and my ops team looked at that list. Um, and literally there was only one or two items that they could check, check off with S you know, they they've got the control plane, they've got the cluster provision, but what about all those other components? Uh, and some of that kind of led us down the path of, uh, you know, looking at, Hey, what's out there in this space. And, and we realized pretty quickly that there weren't too many, there were some large providers and capabilities like Antos, but we felt that it was, uh, a little too much for what we were trying to do. You know, at that point in time, we wanted to scale slowly. We wanted to minimize our footprint. Um, and, and Rafa seemed to sort of, uh, was, was a nice mix, uh, you know, uh, from all those different angles, how >>Was, how was the situation affecting your developer experience? >>So, um, so that's a really good question also. 
So operations was one aspect of, to it, right? The other part is the application development, right? We've got, uh, you know, Moneygrams when a lot of organizations have a plethora of technologies, right? From, from Java to.net to no GS, what have you, right. Um, now as you start saying, okay, now we're going cloud native, and we're gonna start deploying to Kubernetes. Um, there's a fair amount of overhead because a tech stack, all of a sudden goes from, you know, just being Java or just being.net to things like Docker, right? All these container orchestration and deployment concerns, Kubernetes, uh, deployment artifacts, right. I gotta write all this YAML, uh, as my developer say, YAML, hell right. <laugh>, uh, I gotta learn Docker files. I need to figure out, um, a package manager like helm, uh, on top of learning all the Kubernetes artifacts. >>Right. So, um, initially we went with sort of, okay, you know, we can just train our developers. Right. Um, and that was wrong. Right. I mean, you can't assume that everyone is gonna sort of learn all these deployment concerns, uh, and we'll adopt them. Right. Um, uh, there's a lot of stuff that's outside of their sort of core dev domain, uh, that you're putting all this burden on them. Right. So, um, we could not rely on them and to be sort of cube cuddle experts, right. That that's a fair amount, overhead learning curve there. Um, so Rafa again, from their dashboard perspective, right? So the managed cube cuddle gives you that easy access for devs, right. Where they can go and monitor the status of their workloads. Um, they can, they don't have to figure out, you know, configuring all these tools locally just to get it to work. >>Uh, we did some things from a DevOps perspective to basically streamline and automate that process. But then also office order came in and helped us out, uh, on kind of that providing that dashboard. They don't have to worry. They can basically get on through single sign on and have visibility into the status of their deployment. Uh, they can do troubleshooting diagnostics all through a single pane of glass. Right. Which was a key key item. Uh, initially before Rafa, we were doing that command line. Right. And again, just getting some of the tools configured was, was huge. Right. Took us days just to get that. And then the learning curve for development teams, right? Oh, now you gotta, you got the tools now you gotta figure out how to use it. Right. Um, so >>See, talk to me about the, the cloud native infrastructure. When I look at that entire landscaping number, I'm just overwhelmed by it. As a customer, I look at it, I'm like, I, I don't know where to start I'm sure. Or not, you, you folks looked at it and said, wow, there's so many solutions. How do you engage with the ecosystem? You have to be at some level opinionated, but flexible enough to, uh, meet every customer's needs. How, how do you approach that? >>Yeah. So it's a, it's a really tough problem to solve because, so, so the thing about abstraction layers, you know, we all know how that plays out, right? So abstraction layers are fundamentally never the right answer because they will never catch up. Right. Because you're trying to write and layer on top. So then we had to solve the problem, which was, well, we can't be an abstraction layer, but then at the same time, we need to provide some sort of, sort of like centralization standardization. Right. 
So, so we sort of have this, the following dissonance in our platform, which is actually really important to solve the problem. So we think of a, of a stack as sort of four things. There's the, there's the Kubernetes layer infrastructure layer, um, and EKS is different from ES and it's okay. Mm-hmm <affirmative>, if we try to now bring them all together and make them behave as one, our customers are gonna suffer because there are features in ESS that I really want. >>But then if you write an AB obsession layer, I'm not gonna get 'em so not. Okay. So treat them as individual things. And we logic that we now curate. So every time S for example, goes from 1 22 to 1 23, rewrite a new product, just so my customer can press a button and upgrade these clusters. Similarly, we do this fors, we do this for GK. We it's a really, really hard job, but that's the job. We gotta do it on top of that, you have these things called. Add-ons like my network policy, my access management policy, my et cetera. Right. These things are all actually the same. So whether I'm Anek or a Ks, I want the same access for Keith versus a none. Right. So then those components are sort of the same across doesn't matter how many clusters does money clouds on top of that? You have applications. And when it comes to the developer, in fact, I do the following demo a lot of times because people ask the question, right? Mean, I, I, I, people say things like, I wanna run the same Kubernetes distribution everywhere, because this is like Linux, actually, it's not. So I, I do a demo where I spin up a access to an OpenShift cluster and an EKS cluster and an AKs cluster. And I say, log in, show me which one is, which they're all the same. >>So Anan get, put, make that real for me, I'm sure after this amount of time, developers groups have come to you with things that are snowflakes and you, and as a enterprise architect, you have to make it work within your framework. How has working with RAI made that possible? >>Yeah. So, um, you know, I think one of the very common concerns is right. The whole deployment, right. Uh, toe's point, right. Is you are from an, from a deployment perspective. Uh, it's still using helm. It's still using some of the same tooling, um, right. But, um, how do you Rafa gives us, uh, some tools, you know, they have a, a command line, art cuddle API that essentially we use. Um, we wanted parody, um, across all our different environments, different clusters, you know, it doesn't matter where you're running. Um, so that gives us basically a consistent API for deployment. Um, we've also had, um, challenges, uh, with just some of the tooling in general, that we worked with RA actually to actually extend their, our cuddle API for us, so that we have a better deployment experience for our developers. So, >>Uh Huie how long does this opportunity exist for you? At some point, do the cloud providers figure this out or does the open source community figure out how to do what you've done and, and this opportunity is gone. >>So, so I think back to a platform that I, I think very highly of, which is a highly off, which has been around a long time and continues to live vCenter, I think vCenter is awesome. And it's, it's beautiful. VMware did an incredible job. Uh, what is the job? Its job is to manage VMs, right? But then it's for access. It's also storage. It's also networking and a sex, right? 
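Haseeb's point about treating distributions as different at the infrastructure layer while keeping add-ons and access identical can be pictured as a small data model: each cluster keeps a distro-specific upgrade path, but one shared blueprint is reconciled everywhere. Cluster names, add-ons, and versions below are hypothetical, and this is a conceptual sketch rather than Rafay's actual object model.

```python
# Conceptual sketch: one shared "blueprint" of add-ons and access rules applied
# across heterogeneous clusters, while upgrades stay distro-specific.
from dataclasses import dataclass, field


SHARED_BLUEPRINT = {
    "addons": ["network-policy", "prometheus", "cost-agent"],
    "access": {"keith": "admin", "adnan": "admin", "dev-team": "read-only"},
}

# Each distribution gets its own upgrade handler because EKS, AKS and
# OpenShift expose different upgrade mechanics.
UPGRADE_HANDLERS = {
    "eks": lambda c, v: print(f"[eks] rolling node groups of {c.name} to {v}"),
    "aks": lambda c, v: print(f"[aks] upgrading control plane of {c.name} to {v}"),
    "openshift": lambda c, v: print(f"[openshift] applying cluster version {v} to {c.name}"),
}


@dataclass
class Cluster:
    name: str
    distro: str
    version: str
    addons: list = field(default_factory=list)

    def reconcile_blueprint(self):
        missing = [a for a in SHARED_BLUEPRINT["addons"] if a not in self.addons]
        self.addons.extend(missing)
        print(f"{self.name}: installed {missing or 'nothing'}; "
              f"access={SHARED_BLUEPRINT['access']}")

    def upgrade(self, target_version):
        UPGRADE_HANDLERS[self.distro](self, target_version)
        self.version = target_version


if __name__ == "__main__":
    fleet = [
        Cluster("prod-us", "eks", "1.22"),
        Cluster("prod-eu", "aks", "1.22"),
        Cluster("onprem-dc1", "openshift", "4.10"),
    ]
    for cluster in fleet:
        cluster.reconcile_blueprint()   # same add-ons and access everywhere
    fleet[0].upgrade("1.23")            # distro-specific upgrade path
```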
All these things got done because to solve a real problem, you have to think about all the things that come together to solve, help you solve that problem from an operations perspective. Right? My view is that this market needs essentially a vCenter, but for Kubernetes, right. Um, and that is a very broad problem, right. And it's gonna spend, it's not about a cloud, right? I mean, every cloud should build this. I mean, why would they not? It makes sense, Anto success, right. Everybody should have one. But then, you know, the clarity in thinking that the Rafa team seems to have exhibited till date seems to merit an independent company. In my opinion, I think like, I mean, from a technical perspective, this products awesome. Right? I mean, you know, we seem to have, you know, no real competition when it comes to this broad breadth of capabilities, will it last, we'll see, right. I mean, I keep doing Q shows, right? So every year you can ask me that question again. Well, you're >>You make a good point though. I mean, you're up against VMware, you're up against Google. They're both trying to do sort of the same thing you're doing. What's why are you succeeding? >>Maybe it's focus. Maybe it's because of the right experience. I think startups only in hindsight, can one tell why a startup was successful? In all honesty. I, I, I've been in a one or two service in the past. Um, and there's a lot of luck to this. There's a lot of timing to this. I think this timing for a com product like this is perfect. Like three, four years ago, nobody would've cared. Like honestly, nobody would've cared. This is the right time to have a product like this in the market because so many enterprises are now thinking of modernization. And because everybody's doing this, this is like the boots storm problem in HCI. Everybody's doing it. But there's only so many people in the industry who actually understand this problem. So they can't even hire the people. And the CTO said, I gotta go. I don't have the people. I can't fill the, the seats. And then they look for solutions and we are that solution that we're gonna get embedded. And when you have infrastructure software like this embedded in your solution, we're gonna be around with the assuming, obviously we don't score up, right. We're gonna be around with these companies for some time. We're gonna have strong partners for the long term. >>Well, vCenter for Kubernetes, I love to end on that note, intriguing conversation. We could go on forever on this topic, cuz there's a lot of work to do. I think, uh, I don't think this will over be a solve problem for the Kubernetes of cloud native solution. So I think there's a lot of opportunity in that space. Hi, thank you for rejoining the cube. I non con welcome becoming a cube alum. <laugh> I awesome. Thank you. Get your much your profile on the, on the Ken's. Website's really cool from Valencia Spain. I'm Keith Townsend, along with my whole Paul Gillon and you're watching the cube, the leader in high tech coverage.

Published Date : May 18 2022

SUMMARY :

Keith Townsend and Paul Gillon talk with MoneyGram enterprise architect Adnan Khan and Rafay CEO Haseeb Budhani at KubeCon + CloudNativeCon Europe 2022. Adnan explains how moving from VMs to hundreds of microservices made manual scaling and day-two operations unmanageable, and why EKS alone left gaps around dashboards, multi-cloud provisioning, and developer access. Haseeb argues that the Kubernetes runtime is a solved problem while the operations layer (blueprints, access, policy, upgrades across clusters and clouds) is not, describing Rafay as, in effect, a vCenter for Kubernetes.


Webb Brown, Kubecost | CUBE Conversation


 

>>Welcome to this cube conversation. I'm Dave Nicholson, and this is part of the AWS startup showcase season two. I'm very happy to have with me Webb brown CEO of Qube cost web. Welcome to the program. How are you? I'm doing >>Great. It's great to be here, Dave. Thank you so much for having me really excited for the discussion. >>Good to see you. I guess we saw each other last down in Los Angeles for, for coop con, >>Right? Exactly. Right. Still feeling the energy from that event. Hoping we can be back together in person. Not, not too long from now. >>Yeah. Well I'll second that, well, let, let's get straight to it. Tell us, tell us about Q cost. What do you guys do? And I think just central to that question is what gives you guys the right to exist? What problem are you solving? >>Yeah, I love the question. So first and foremost coupe costs, we provide cost monitoring and cost management solutions for teams running Kubernetes or cloud native workloads. Everything we do is, is built on open source. Our founding team was working on infrastructure monitoring solutions at Google before this. And, and what we saw was as we had several teammates join the Kubernetes effort very early days at Google, we saw teams really struggling even just to, to monitor and understand Kubernetes costs, right? There's lots of complexity with the Kubernetes scheduler and being able to answer the question of what is the cost of an application or what is the cost of, you know, a team department, et cetera. And the workloads that they're deploying was really hard for most teams. If you look at CNCF study from late last year, still today, about two thirds of teams, can't answer where they are spending money. And what we saw when digging in there is that when you can't answer that question, it's really hard to be efficient. And by be efficient, we, we mean get the right balance between cost and performance and reliability. So we help teams in, in these areas and more where, you know, now have thousands of teams using our product. You know, we feel where we're just getting started on our mission as well. >>So when people hear it, when people think of coop costs, they w they naturally associate that with Kubernetes. And they think, well, Kubernetes is open-source wait, isn't that free? So what, so what costs are you tracking? Exactly. >>Yeah. Great question. We would track costs in any environment where you can run Kubernetes. So if that's on-prem, you can bring a custom pricing sheet to monitor, say the cost of your underlying CPU course, you know, GPU's memory, et cetera. If you're running in a cloud environment, we have integrations with Azure, GCP and AWS, where we would be able to reflect all the complexity of, you know, whatever deployment you have, whether you're using a spot and multiple regions where you have complex enterprise discounts are eyes savings plans, you name it, we'd be reflecting it. So it's really about, you know, not just generic prices, it's about getting the right price for your organization. >>So the infrastructure that goes into this calculation can be on premises or off premises in the form of cloud. I heard that, right? >>Yeah, that's exactly right. So all of those environments, we'd give you a visibility into all the resources that your Kubernetes clusters are consuming. Again, that's, you know, nodes, load balancers, every resource that it's directly touching also have the ability for you to pull in external costs, right? 
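As a rough illustration of what "reflecting your own prices" means in practice, the sketch below prices a workload's CPU and memory requests against either a custom on-prem price sheet or a cloud rate card. Every price and workload figure here is an invented placeholder, and this is not Kubecost's actual allocation engine.

```python
# Illustrative cost allocation: price a workload's resource requests against
# a per-unit rate card (custom on-prem sheet or negotiated cloud pricing).
HOURS_PER_MONTH = 730

# Hypothetical rate cards: dollars per CPU-hour and per GiB-hour.
PRICE_SHEETS = {
    "onprem":   {"cpu_hour": 0.021, "gib_hour": 0.0026},
    "aws-east": {"cpu_hour": 0.034, "gib_hour": 0.0045},
}


def monthly_cost(cpu_request, memory_gib, replicas, sheet="aws-east"):
    prices = PRICE_SHEETS[sheet]
    per_replica_hour = cpu_request * prices["cpu_hour"] + memory_gib * prices["gib_hour"]
    return per_replica_hour * replicas * HOURS_PER_MONTH


if __name__ == "__main__":
    # A service requesting 2 CPU and 4 GiB per replica, running 6 replicas.
    for sheet in PRICE_SHEETS:
        cost = monthly_cost(cpu_request=2, memory_gib=4, replicas=6, sheet=sheet)
        print(f"{sheet:>9}: ${cost:,.2f}/month")
```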
So if you have Kubernetes tenants that are using S3 or cloud sequel, or, you know, another external cloud service, we would make that connection for you. And then lastly, if you have shared costs, sometimes even like the cost of a dev ops team, we'd give you the ability to kind of allocate that back to your core infrastructure, which may be used for showback or even charged back across your, your, >>So who are the folks in an organization that are tapping into this, are these, you know, our, our, our, our developers being encouraged to be cognizant of these costs throughout the process, or is this just sort of a CFO on down visibility tool? >>Yeah, it's a great, it's a great question. And what we see is a major transformation here where, you know, kind of shift left from a cost perspective where more and more engineering teams are interested in just being aware or having transparency. So they can build a culture of accountability with costs, right, with the amazing ability to rapidly push to production and iterate, you know, with microservices and Kubernetes, it's hard to have this kind of, you know, just wait for say the finance team to review this at the end of the month or the end of the quarter. We see this increasingly be being viewed in real time by infrastructure teams, by engineering teams. Now finance is still a very important stakeholder and, you know, absolutely has a very important like seat at the table in these conversations. But increasingly these are, again, real time or near real time engineering decisions that are really moving the needle on cost and cost efficiency, overtime and performance as well. >>Now, can you use this to model what costs might be, or is this, or is this, you know, you, you mentioned monitoring in real time, is this only for pulling information as it exists, or could you do, could you use some of the aspects of, of, of your toolset to make a decision, whether something makes more sense to run on your existing infrastructure on premises versus moving into, you know, working in a cloud? Is that something that is designed for or not? >>Great question. So we do have the ability to predict cost cost going forward, based on everything we've learned about your environment, whether you're in multi-cloud hybrid cloud, et cetera. So some really interesting functionality there and a lot more coming later this year, because we do see more and more teams wanting to model the state of the future, right? As you deploy really complex technologies, like say the cluster auto scale or, or HPA in different environments, it can really challenging to do an apples to apples comparison, and we help teams do exactly that. And again, gonna have a lot more interesting announcements here later this year. >>So later that later this year, meaning not in the next few minutes while we're together, >>Nothing new to announce on that front today, but I would say, you know, expect later this quarter for us to have more. >>Okay, that sounds good. Now, now you touched on this a little bit, but I want to hone in on why this is particularly relevant now and moving into the future. You know, we've always tracking costs has always been important, you know, even before the Dawn of cloud, but why is it increasingly important? And, and, you know, there are, there are alternatives for cost tracking legacy alternatives that are out there. So talk about why it's particularly relevant now and tell us what your super power is. You know, what's the, all right. All right. 
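The shared-cost piece Webb mentions, such as spreading a platform team's tooling or a shared database bill across tenants, is often handled by allocating it in proportion to each tenant's directly attributed spend. A small sketch of that chargeback idea, with invented namespace figures:

```python
# Illustrative chargeback: distribute shared costs (platform team, shared
# services) to namespaces in proportion to their directly attributed spend.
def allocate_shared(direct_costs, shared_total):
    total_direct = sum(direct_costs.values())
    allocation = {}
    for namespace, direct in direct_costs.items():
        share = shared_total * (direct / total_direct)
        allocation[namespace] = {"direct": direct, "shared": round(share, 2),
                                 "total": round(direct + share, 2)}
    return allocation


if __name__ == "__main__":
    direct = {"payments": 4200.0, "fraud": 2100.0, "web": 700.0}  # made-up monthly spend
    shared = 1400.0                                               # e.g. shared ops tooling
    for ns, row in allocate_shared(direct, shared).items():
        print(f"{ns:>9}: direct ${row['direct']:.2f} + shared ${row['shared']:.2f}"
              f" = ${row['total']:.2f}")
```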
>>Secrets, >>Secret sauce is something you can't share super power. You can talk about >>Absolutely >>NDA. So yes, >>Your superpower. Yeah. Great questions. So for support, just to, to, to touch on, what's fundamentally changing to make a company like ours, you know, impactful or relevant. There's really three things here first and foremost is the new abstractions or complexities that come with Kubernetes, right. Super powerful, but from a cost standpoint, make it considerably harder to accurately track costs. And the big transformation here is, you know, with Kubernetes, you can, at any given moment have 50 applications running on a single node or a single VM, you can fast forward five minutes and there could be 50 entirely new applications, right? So just assigning that VM or, you know, tagging that VM back to an application or team or department really is not relevant in those places. So just the new complexity related to costs makes this problem harder for teams. Second is what we touch on. >>Just again, the power of Cooney. Kubernetes is the ability to allow distributed engineering teams to work on many microservices concurrently. So you're no longer in a lot of ways managing this problem where they centralized kind of single point of decision-making. Oftentimes these decisions are distributed across not only your infrastructure team, but your engineering team. So just the way these decisions and, you know, innovation is happening is changing how you manage these. And lastly, it's just scale, right? The, the cloud and, you know, Kubernetes continue to be incredibly successful. You know, where as goop costs now managing billions of dollars as these numbers get bigger and bigger just becomes more of a business focus and business critical issue. So those are the, you know, the three kind of underlying themes that are changing. When I talk about what we do, that makes us special. It's really this like foundational layer of visibility that we build. >>And what we can do is in real time with a very high degree of accuracy at the largest Kubernetes clusters in the world, give you visibility at any dimension. And so from there, you can do things like have real-time monitoring. You can have real-time insights, you can allow automation to make decisions on these, you know, inputs or data feeds. You can set alerts, you can set recurring reports. All of these things are made possible because of, you know, the, the, I would say really hard work that we've done to, again, give this real-time visibility with a high degree of accuracy at, at crazy scale. >>So if we were to play little make-believe for a moment, pretend like I'm a skeptical sitting on the fence. Not sure if I want to go down this path kind of person. And I say, you know what, web, I think I have a really good handle on all of my costs so far. What would you hit me with as, as, as an example of something that people really didn't expect until they, until they were running coup costs and they had actually had that visibility, what are some of the things that people are surprised by? >>Yeah. Great question. There'd be a number, number one. I'd have, you know, one data point I want to get from you, which is, you know, for your organization or for all of your clusters, what is your cost efficiency? Can you answer that with a high degree of accuracy and by cost efficiency? >>And the answer is now. So tell me, tell me, tell me how to sign up for coupons. >>Yeah. 
And so the answer, the answer there is you can go get our community version, you know, you can be up and running in minutes, you don't have to share any data, right? Like it is, you know, simply a helmet install, but cost efficiency is this notion of, of every dollar that you are spending on provision resources. What percentage of those dollars are you actually utilizing? And we have, you know, we, we now have, you know, thousands of teams using our product and we've worked with, you know, hundreds of them really closely, you know, this is, you know, that's not the entire market, but in our large sample sizes, we regularly see teams start in the low 20% cost efficiency, meaning that approximately 80% is quote waste time and time. Again, we see teams just be shocked by this number. And again, most of it is not because they were measuring it and accurately or anything like that. Most teams again today still just don't have that visibility until they start working with this. >>So is that, is that sort of the, I in my house household, certain members seem to only believe that there is one position for a light switch, and that would be the on position. Is there, is this a bit of a parallel where, where folks are, are spinning up resources and then just out of sight, out of mind, maybe not spinning them down when not needed. Yeah. >>Yeah. It's, it's, that's definitely one class of the challenges I would say, you know, so today, if you look at our product, we have 14 different insights across like different dimensions of your infrastructure one, or, or I would say several of those relate to exactly what you just described, which is you spin up a VM, you spend a bit load balancer, you spin up an external IP address. You're using it. You're not paying for it. Another class is this notion of, again, I don't have an understanding of what my resources cost. I also don't have a great sense for how much my microservice or application will need. So I'm just going to turn on all the lights, which is, or I'm going to drastically over provision again, I don't know the cost, so I'm just going to kind of set it and forget it. And if my application is performing, you know, then you know, we're doing well here. Again, with this visibility, you can get much more specific, much more accurate, much more actionable with making that trade off, you know, again, down to the individual pod workload, you know, deployment, et cetera. >>So we've, we've touched on this a bit peripherally, but give me an example. You know, you, you run into someone who happens to be a happy user of coop cost. What's the dream story that you love to hear from them about what life was before was before coop costs and what life was like after? >>Yeah, there's a lot, a lot of different dimensions there. You know, one, one is, you know, working with an infrastructure team that, that used to get asked these questions a lot about, you know, why does this cost so much, or why are we spending this and Kubernetes or, or wire expenses growing the rate that they are, you know, like when this, when this works, you know, engineering teams or infrastructure teams, aren't getting asked those questions, right? The tool could cost itself is getting asked that and answering that. So I think one is infrastructure teams, not fielding those types of questions as much. Secondly, is just, you know, more and more teams rolling this out throughout their organization. And ultimately just getting, building a culture of awareness, like ownership, accountability. 
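Cost efficiency as Webb defines it is simply the cost of what was used divided by the cost of what was provisioned, and the same usage data points straight at a right-sized request. A worked sketch with invented utilization numbers and unit prices, not data from any real cluster:

```python
# Cost efficiency = cost of resources actually used / cost of resources provisioned.
# A team at roughly 20% efficiency is effectively paying for ~80% idle capacity.
UNIT_PRICES = {"cpu": 0.034, "gib": 0.0045}   # hypothetical $ per unit-hour


def cost_efficiency(used, requested):
    used_cost = sum(used[r] * UNIT_PRICES[r] for r in used)
    requested_cost = sum(requested[r] * UNIT_PRICES[r] for r in requested)
    return used_cost / requested_cost


def right_sized_request(usage_samples, headroom=1.3):
    """Suggest a request from a high percentile of observed usage plus headroom."""
    ordered = sorted(usage_samples)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return round(p95 * headroom, 2)


if __name__ == "__main__":
    used = {"cpu": 9, "gib": 30}          # typical utilization, invented
    requested = {"cpu": 40, "gib": 160}   # what the team provisioned, invented
    print(f"cost efficiency ~ {cost_efficiency(used, requested):.0%}")  # lands near 20%

    cpu_samples = [1.1, 1.4, 2.0, 1.8, 2.3, 1.2, 1.9, 2.1, 1.6, 2.4]    # cores, invented
    print(f"suggested CPU request ~ {right_sized_request(cpu_samples)} cores")
```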
And then, you know, we just increasingly are seeing teams, you know, find this right balance between cost and performance again. So, you know, in certain cases, improving performance, when are resource bottlenecks in places and other places, you know, reducing costs, you know, by 10 plus million dollars, ultimately at the end of the day, we like to see just teams being more comfortable running their workloads in Kubernetes, right? That is the ultimate sign of success is just an organization, feels comfortable with how they're deploying, how they're managing, how they're spending in Kubernetes. Again, whether that be, you know, on-prem or transitioning from on-prem to a cloud in multiple clouds, et cetera. >>So we're talking to you today as part of the second season of the AWS startup showcase. What's, what's the relationship there with, with AWS? >>So it is the, the largest platform for coop costs being run today. So I believe, you know, at this point, at least a thousand different organizations running our product on AWS hosted clusters, whether they're, you know, ETS or, or self-managed, but you know, a growing number of those on, on EKS. And, you know, we've just, you know, absolutely loved working with the team across, I think at this point, you know, six or seven different groups from marketplace to their containers team, you know, obviously, you know, ETS and others, and just very much see them continuing to push the boundaries on what's possible from a scale and, you know, ease of use and, you know, just breadth of, of offering to this market. >>Well, we really look forward to having you back and hearing about some of these announcements, things that are, that are coming down the line. So we'll definitely have to touch base in the future, but just one, one final, more general question for you, where do you see Kubernetes in general going in 2022? Is it sort of a linear growth? Is there some, is there an inflection point that we see, you know, a good percentage of software that's running enterprises right now is already in that open source category, but what are your thoughts on Kubernetes in 2022? >>Yeah, I think, you know, the one word is everywhere is where I see Kubernetes in 2022, like very deep in the like large and really complex enterprises. Right. So I think you'll see just, you know, major bets there. And I think you'll continue to see more engineers adopted. And I think you'll also continue to see, you know, more and more flavors of it, right? So, you know, some teams find that running Kubernetes anymore serverless fashion is, is right for them. Others find that, you know, having full control, you know, at every part of the stack, including running their own autoscaler for example is really powerful. So I think just, you know, you'll see more and more options. And again, I think teams increasingly adopting the right, you know, abstraction level on top of Kubernetes that works for their workloads and their organizations >>Sounds good. We'll we'll, we'll come back in 2023 and we'll check and see how that, how that all panned out. Well, it's been great talking to you today as part of the startup showcase. Really appreciate it. Great to see you again. It's right about the time where I can still tell you happy new year, because we're still, we're still in January here. Hope you have a great 2022 with that from me, Dave Nicholson, part of the cube part of AWS startup showcase season two, I'd like to thank everyone for joining and stay with us for the best in hybrid tech coverage.

Published Date : Jan 17 2022

SUMMARY :

Dave Nicholson talks with Kubecost CEO Webb Brown as part of the AWS Startup Showcase. Webb explains how Kubecost, built on open source, gives teams visibility into Kubernetes spend across cloud and on-prem environments, allocates shared and external costs back to teams and applications, and surfaces cost efficiency, which often starts in the low 20 percent range, so engineering teams can right-size workloads and build a culture of cost accountability.


Pat Conte, Opsani | AWS Startup Showcase


 

(upbeat music) >> Hello and welcome to this CUBE conversation here presenting the "AWS Startup Showcase: "New Breakthroughs in DevOps, Data Analytics "and Cloud Management Tools" featuring Opsani for the cloud management and migration track here today, I'm your host John Furrier. Today, we're joined by Patrick Conte, Chief Commercial Officer, Opsani. Thanks for coming on. Appreciate you coming on. Future of AI operations. >> Thanks, John. Great to be here. Appreciate being with you. >> So congratulations on all your success being showcased here as part of the Startups Showcase, future of AI operations. You've got the cloud scale happening. A lot of new transitions in this quote digital transformation as cloud scales goes next generation. DevOps revolution as Emily Freeman pointed out in her keynote. What's the problem statement that you guys are focused on? Obviously, AI involves a lot of automation. I can imagine there's a data problem in there somewhere. What's the core problem that you guys are focused on? >> Yeah, it's interesting because there are a lot of companies that focus on trying to help other companies optimize what they're doing in the cloud, whether it's cost or whether it's performance or something else. We felt very strongly that AI was the way to do that. I've got a slide prepared, and maybe we can take a quick look at that, and that'll talk about the three elements or dimensions of the problem. So we think about cloud services and the challenge of delivering cloud services. You've really got three things that customers are trying to solve for. They're trying to solve for performance, they're trying to solve for the best performance, and, ultimately, scalability. I mean, applications are growing really quickly especially in this current timeframe with cloud services and whatnot. They're trying to keep costs under control because certainly, it can get way out of control in the cloud since you don't own the infrastructure, and more importantly than anything else which is why it's at the bottom sort of at the foundation of all this, is they want their applications to be a really a good experience for their customers. So our customer's customer is actually who we're trying to solve this problem for. So what we've done is we've built a platform that uses AI and machine learning to optimize, meaning tune, all of the key parameters of a cloud application. So those are things like the CPU usage, the memory usage, the number of replicas in a Kubernetes or container environment, those kinds of things. It seems like it would be simple just to grab some values and plug 'em in, but it's not. It's actually the combination of them has to be right. Otherwise, you get delays or faults or other problems with the application. >> Andrew, if you can bring that slide back up for a second. I want to just ask one quick question on the problem statement. You got expenditures, performance, customer experience kind of on the sides there. Do you see this tip a certain way depending upon use cases? I mean, is there one thing that jumps out at you, Patrick, from your customer's customer's standpoint? Obviously, customer experience is the outcome. That's the app, whatever. That's whatever we got going on there. >> Sure. >> But is there patterns 'cause you can have good performance, but then budget overruns. Or all of them could be failing. Talk about this dynamic with this triangle. >> Well, without AI, without machine learning, you can solve for one of these, only one, right? 
So if you want to solve for performance like you said, your costs may overrun, and you're probably not going to have control of the customer experience. If you want to solve for one of the others, you're going to have to sacrifice the other two. With machine learning though, we can actually balance that, and it isn't a perfect balance, and the question you asked is really a great one. Sometimes, you want to over-correct on something. Sometimes, scalability is more important than cost, but what we're going to do because of our machine learning capability, we're going to always make sure that you're never spending more than you should spend, so we're always going to make sure that you have the best cost for whatever the performance and reliability factors that you you want to have are. >> Yeah, I can imagine. Some people leave services on. Happened to us one time. An intern left one of the services on, and like where did that bill come from? So kind of looked back, we had to kind of fix that. There's a ton of action, but I got to ask you, what are customers looking for with you guys? I mean, as they look at Opsani, what you guys are offering, what's different than what other people might be proposing with optimization solutions? >> Sure. Well, why don't we bring up the second slide, and this'll illustrate some of the differences, and we can talk through some of this stuff as well. So really, the area that we play in is called AIOps, and that's sort of a new area, if you will, over the last few years, and really what it means is applying intelligence to your cloud operations, and those cloud operations could be development operations, or they could be production operations. And what this slide is really representing is in the upper slide, that's sort of the way customers experience their DevOps model today. Somebody says we need an application or we need a feature, the developers pull down something from get. They hack an early version of it. They run through some tests. They size it whatever way they know that it won't fail, and then they throw it over to the SREs to try to tune it before they shove it out into production, but nobody really sizes it properly. It's not optimized, and so it's not tuned either. When it goes into production, it's just the first combination of settings that work. So what happens is undoubtedly, there's some type of a problem, a fault or a delay, or you push new code, or there's a change in traffic. Something happens, and then, you've got to figure out what the heck. So what happens then is you use your tools. First thing you do is you over-provision everything. That's what everybody does, they over-provision and try to soak up the problem. But that doesn't solve it because now, your costs are going crazy. You've got to go back and find out and try as best you can to get root cause. You go back to the tests, and you're trying to find something in the test phase that might be an indicator. Eventually your developers have to hack a hot fix, and the conveyor belt sort of keeps on going. We've tested this model on every single customer that we've spoken to, and they've all said this is what they experience on a day-to-day basis. Now, if we can go back to the side, let's talk about the second part which is what we do and what makes us different. So on the bottom of this slide, you'll see it's really a shift-left model. What we do is we plug in in the production phase, and as I mentioned earlier, what we're doing is we're tuning all those cloud parameters. 
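One way to picture "balancing rather than solving for one dimension" is as a scored search over candidate configurations, where reliability acts as a hard constraint and cost and latency are traded off by weights. The brute-force sketch below is purely illustrative of that idea (Opsani's optimizer is ML-driven and far more sample-efficient), and every number in it is invented.

```python
# Illustrative only: score candidate (cpu, memory, replicas) configurations,
# rejecting any that violate a reliability constraint, then pick the best
# cost/latency trade-off. A stand-in for a real ML optimizer.
import itertools
import random

random.seed(7)

CPU_OPTIONS = [0.5, 1.0, 2.0]          # cores per replica
MEM_OPTIONS = [0.5, 1.0, 2.0, 4.0]     # GiB per replica
REPLICA_OPTIONS = [2, 3, 4, 6, 8]


def simulate(cpu, mem, replicas):
    """Fake measurement: more resources give lower latency and errors, higher cost."""
    capacity = cpu * replicas + 0.25 * mem * replicas
    latency_ms = 40 + 400 / capacity + random.uniform(-2, 2)
    error_rate = max(0.0, 0.02 - 0.002 * capacity)
    hourly_cost = replicas * (cpu * 0.034 + mem * 0.0045)
    return latency_ms, error_rate, hourly_cost


def score(latency_ms, hourly_cost, w_latency=1.0, w_cost=50.0):
    return w_latency * latency_ms + w_cost * hourly_cost   # lower is better


best = None
for cpu, mem, replicas in itertools.product(CPU_OPTIONS, MEM_OPTIONS, REPLICA_OPTIONS):
    latency, errors, cost = simulate(cpu, mem, replicas)
    if errors > 0.001:                 # reliability guardrail: reject shaky configs
        continue
    candidate = (score(latency, cost), (cpu, mem, replicas), latency, cost)
    if best is None or candidate[0] < best[0]:
        best = candidate

_, config_found, latency, cost = best
print(f"best config {config_found}: ~{latency:.0f} ms at ${cost:.3f}/hour")
```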
We're tuning the CPU, the memory, the Replicas, all those kinds of things. We're tuning them all in concert, and we're doing it at machine speed, so that's how the customer gets the best performance, the best reliability at the best cost. That's the way we're able to achieve that is because we're iterating this thing in machine speed, but there's one other place where we plug in and we help the whole concept of AIOps and DevOps, and that is we can plug in in the test phase as well. And so if you think about it, the DevOps guy can actually not have to over-provision before he throws it over to the SREs. He can actually optimize and find the right size of the application before he sends it through to the SREs, and what this does is collapses the timeframe because it means the SREs don't have to hunt for a working set of parameters. They get one from the DevOps guys when they send it over, and this is how the future of AIOps is being really affected by optimization and what we call autonomous optimization which means that it's happening without humans having to press a button on it. >> John: Andrew, bring that slide back up. I want to just ask another question. Tuning in concert thing is very interesting to me. So how does that work? Are you telegraphing information to the developer from the autonomous workload tuning engine piece? I mean, how does the developer know the right knobs or where does it get that provisioning information? I see the performance lag. I see where you're solving that problem. >> Sure. >> How does that work? >> Yeah, so actually, if we go to the next slide, I'll show you exactly how it works. Okay, so this slide represents the architecture of a typical application environment that we would find ourselves in, and inside the dotted line is the customer's application namespace. That's where the app is. And so, it's got a bunch of pods. It's got a horizontal pod. It's got something for replication, probably an HPA. And so, what we do is we install inside that namespace two small instances. One is a tuning pod which some people call a canary, and that tuning pod joins the rest of the pods, but it's not part of the application. It's actually separate, but it gets the same traffic. We also install somebody we call Servo which is basically an action engine. What Servo does is Servo takes the metrics from whatever the metric system is is collecting all those different settings and whatnot from the working application. It could be something like Prometheus. It could be an Envoy Sidecar, or more likely, it's something like AppDynamics, or we can even collect metrics off of Nginx which is at the front of the service. We can plug into anywhere where those metrics are. We can pull the metrics forward. Once we see the metrics, we send them to our backend. The Opsani SaaS service is our machine learning backend. That's where all the magic happens, and what happens then is that service sees the settings, sends a recommendation to Servo, Servo sends it to the tuning pod, and we tune until we find optimal. And so, that iteration typically takes about 20 steps. It depends on how big the application is and whatnot, how fast those steps take. 
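The measure, recommend, apply-to-canary loop Patrick walks through can be sketched as a small driver. Every function body below is a hypothetical stub, since the real metric sources (Prometheus, AppDynamics), the Servo agent, and the SaaS backend are external systems; the sketch shows only the shape of the iteration, not Opsani's implementation.

```python
# Shape of the canary tuning loop described above: pull metrics, ask the
# optimizer backend for new settings, apply them to the tuning pod only,
# and after ~20 steps offer the winner for promotion. All stubs are hypothetical.
import random

random.seed(3)


def pull_metrics(target):
    # Stand-in for Prometheus / AppDynamics / Envoy metrics for `target`.
    return {"p95_latency_ms": random.uniform(80, 140), "error_rate": 0.0,
            "hourly_cost": random.uniform(0.8, 1.4)}


def recommend(history):
    # Stand-in for the ML backend: nudge CPU/memory based on what it has seen.
    last = history[-1]["settings"]
    return {"cpu": round(max(0.25, last["cpu"] + random.choice([-0.25, 0.25])), 2),
            "memory_gib": round(max(0.5, last["memory_gib"] + random.choice([-0.5, 0.5])), 2)}


def apply_to_canary(settings):
    # Stand-in for the action engine patching only the tuning pod, not production.
    print(f"  canary <- {settings}")


def tune(initial_settings, steps=20):
    history = [{"settings": initial_settings, "metrics": pull_metrics("canary")}]
    for _ in range(steps):
        settings = recommend(history)
        apply_to_canary(settings)
        history.append({"settings": settings, "metrics": pull_metrics("canary")})
    best = min(history, key=lambda h: h["metrics"]["hourly_cost"])
    print(f"candidate for promotion: {best['settings']} "
          f"(~${best['metrics']['hourly_cost']:.2f}/hr, "
          f"{best['metrics']['p95_latency_ms']:.0f} ms p95)")
    return best


if __name__ == "__main__":
    tune({"cpu": 1.0, "memory_gib": 2.0})
```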
It could be anywhere from seconds to minutes to 10 to 20 minutes per step, but typically within about 20 steps, we can find optimal, and then we'll come back and we'll say, "Here's optimal, and do you want to "promote this to production," and the customer says, "Yes, I want to promote it to production "because I'm saving a lot of money or because I've gotten "better performance or better reliability." Then, all he has to do is press a button, and all that stuff gets sent right to the production pods, and all of those settings get put into production, and now he's now he's actually saving the money. So that's basically how it works. >> It's kind of like when I want to go to the beach, I look at the weather.com, I check the forecast, and I decide whether I want to go or not. You're getting the data, so you're getting a good look at the information, and then putting that into a policy standpoint. I get that, makes total sense. Can I ask you, if you don't mind, expanding on the performance and reliability and the cost advantage? You mentioned cost. How is that impacting? Give us an example of some performance impact, reliability, and cost impacts. >> Well, let's talk about what those things mean because like a lot of people might have different ideas about what they think those mean. So from a cost standpoint, we're talking about cloud spend ultimately, but it's represented by the settings themselves, so I'm not talking about what deal you cut with AWS or Azure or Google. I'm talking about whatever deal you cut, we're going to save you 30, 50, 70% off of that. So it doesn't really matter what cost you negotiated. What we're talking about is right-sizing the settings for CPU and memory, Replica. Could be Java. It could be garbage collection, time ratios, or heap sizes or things like that. Those are all the kinds of things that we can tune. The thing is most of those settings have an unlimited number of values, and this is why machine learning is important because, if you think about it, even if they only had eight settings or eight values per setting, now you're talking about literally billions of combinations. So to find optimal, you've got to have machine speed to be able to do it, and you have to iterate very, very quickly to make it happen. So that's basically the thing, and that's really one of the things that makes us different from anybody else, and if you put that last slide back up, the architecture slide, for just a second, there's a couple of key words at the bottom of it that I want to want to focus on, continuous. So continuous really means that we're on all the time. We're not plug us in one time, make a change, and then walk away. We're actually always measuring and adjusting, and the reason why this is important is in the modern DevOps world, your traffic level is going to change. You're going to push new code. Things are going to happen that are going to change the basic nature of the software, and you have to be able to tune for those changes. So continuous is very important. Second thing is autonomous. This is designed to take pressure off of the SREs. It's not designed to replace them, but to take the pressure off of them having to check pager all the time and run in and make adjustments, or try to divine or find an adjustment that might be very, very difficult for them to do so. 
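The combinatorics behind "you need machine speed" are easy to check: with n independent settings and v candidate values each, the space is v to the power n, which outgrows any manual process almost immediately. A quick worked calculation in Python, using generic numbers rather than Opsani data:

```python
# Size of the configuration space: v candidate values for each of n settings.
def search_space(num_settings, values_per_setting):
    return values_per_setting ** num_settings


print(search_space(8, 8))     # 16,777,216 combinations for 8 settings x 8 values
print(search_space(10, 10))   # 10,000,000,000 -- into the billions
print(search_space(12, 16))   # ~2.8e14 once continuous ranges are discretized finely
```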
>> You totally did. I mean, the tuning in concert, you mentioned that early as a key point. You're basically tuning the engine. It's not so much negotiating a SaaS purchase discount; it's essentially cost overruns by the engine, either over-burning or overheating or whatever you want to call it, basically inefficiency. You're tuning the core engine.

>> Exactly so. The cost piece, as I mentioned, is due to right-sizing the settings and the number of Replicas. The performance is typically measured via latency, and the reliability is typically measured via error rates. There are some other measures as well; we have a whole list of them in the application itself, but those are the kinds of things that we look for as results. When we do our tuning, we look for reducing error rates, or we look for holding error rates at zero, for example, even as we improve the performance or improve the cost. So we're looking for the best result, the best combination result, and then a customer can decide if they want to over-correct on something. We have the whole concept of guard rails, so if performance is the most important thing, or maybe for some customers cost is the most important thing, they can actually say, "Give us the best cost," or, "Give us the best performance and the best reliability, but at this cost," and we can then use that as a service-level objective and tune around it.

>> Yeah, it reminds me back in the old days when you had filtering white lists or black lists of addresses that could go through, say, a firewall or a device. You have billions of combinations now, and with machine learning it's essentially scaling the same concept to unbelievable levels. These guardrails are now in place, and that's super cool and, I think, a really relevant call-out point, Patrick. At this kind of scale, you need machine learning, you need the AI, to essentially identify quickly the patterns or combinations that are actually happening, so a human doesn't have to waste their time on something that can be handled by basically a bot at that point.

>> So John, there's just one other thing I want to mention around this, and that is one of the things that makes us different from other companies that do optimization. Basically, every other company in the optimization space creates a static recommendation with its recommendation engine, and what you get out of that is, let's say, a manifest of changes, and you hand that to the SREs, and they put it into effect. Well, the fact of the matter is that the traffic could have changed by then. It could have spiked up, or it could have dropped below normal. You could have introduced a new feature or some other code change, and at that point in time, the changes you've instituted may already be completely out of date. That's why the continuous nature of what we do is important and different.
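That staleness argument can be illustrated with a toy latency model. The numbers and the formula below are invented for this sketch (real services do not follow such a simple utilization curve), but they show how a replica count tuned for one traffic level can violate the SLO once traffic grows, which is why a one-time recommendation ages badly.

```python
# Toy illustration of why static recommendations go stale: a crude latency model
# where latency blows up as per-replica utilization approaches 1. Invented numbers.

def p95_latency_ms(requests_per_sec: float, replicas: int,
                   per_replica_capacity_rps: float = 50.0,
                   base_service_time_ms: float = 40.0) -> float:
    utilization = requests_per_sec / (replicas * per_replica_capacity_rps)
    if utilization >= 1.0:
        return float("inf")  # saturated: requests queue without bound
    return base_service_time_ms / (1.0 - utilization)

SLO_MS = 250
for label, rps, replicas in [
    ("tuned last month", 100, 3),   # ~120 ms, inside the SLO
    ("traffic grew 40%", 140, 3),   # ~600 ms, the same settings now blow the SLO
    ("re-tuned today", 140, 4),     # ~133 ms, one more replica restores it
]:
    latency = p95_latency_ms(rps, replicas)
    verdict = "meets" if latency <= SLO_MS else "violates"
    print(f"{label}: {latency:.0f} ms ({verdict} the {SLO_MS} ms SLO)")
```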
>> It's funny, even the language that we're using here: network, garbage collection. I mean, you're talking about tuning an engine, an operating system. You're talking about stuff that's moving up the stack to the application layer, hence this elimination of those siloed, waterfall approaches, as you pointed out in your second slide, into one integrated operating environment. So when you have that, you think about the data coming in, and you have to think about the automation: self-correcting, error-correcting, tuning, garbage collection. These are words that we've been kicking around, but at the end of the day, it's an operating system.

>> Well, in the old days of automobiles, which I remember because I'm an old guy, if you wanted to tune your engine, you would probably rebuild your carburetor and turn some dials to get the air-oxygen-gas mix right. You'd re-gap your spark plugs. You'd probably make sure your points were right. There'd be four or five key things that you would do, and you couldn't do them at the same time unless you had a magic wand. So we're the magic wand that, in the modern world, is sort of that thing you plug in that tunes everything at once within that engine, which is all now electronically controlled. So that's the big difference as you think about what we used to do manually and what can now be done with automation. It can be done much, much faster, without humans having to get their fingernails greasy, let's say.

>> And I think the dynamic versus static is an interesting point. I want to bring up the SRE, which has become a very prominent role in the DevOps-plus world that's happening. You're seeing this new revolution. The role of the SRE is not just to be there to hold down and do the manual configuration. They have to scale. They're developers, too. So I think this notion of offloading the SRE from doing manual tasks is another big, important point. Can you just react to that and share more about why the SRE role is so important and why automating that away with what you guys have is important?

>> The SRE role is becoming more and more important, just as you said, and the reason is because somebody has to get that application ready for production. The DevOps guys don't do it. That's not their job. Their job is to get the code finished and send it through, and the SREs then have to make sure that that code will work, so they have to find a set of settings that will actually work in production. Once they find that set of settings, the first one they find that works, they'll push it through. It's not optimized at that point in time, because they don't have time to try to find optimal, and if you think about the difference between a machine learning backend and an army of SREs working 24-by-7, we're talking about being able to do the work of many, many SREs that never get tired, that never need to go play video games to unstress or whatever. We're working all the time. We're always measuring and adjusting. A lot of the companies we talk to do a once-a-month adjustment on their software. So they put an application out, and then they send in their SREs once a month to try to tune the application, and maybe they're using some of these other tools, or maybe they're using just their smarts, but they'll do that once a month. Well, gosh, they've probably pushed code four times during the month, and they've probably had a bunch of different spikes and drops in traffic and other things that have happened. So we just want to help them spend their time on making sure that the application is ready for production. They want to make sure that all the other parts of the application are where they should be, and let us worry about tuning CPU, memory, Replicas, job instances, and things like that, so that they can work on making sure that the application gets out and that it can scale, which is really important for them; for their companies to make money, the apps have to scale.

>> Well, that's a great insight, Patrick. You mentioned you have a lot of great customers, and certainly your customer base are early adopters, pioneers, and big, growing companies, because they have DevOps. They know that there's a DevOps engineer and an SRE. Some of the other enterprises that are transforming think the DevOps engineer is the SRE person, because they're having to get transformed. So you guys are at the high end and are now getting the new enterprises as they come on board to cloud scale. You have a huge uptake in Kubernetes, and you're starting to see the standardization of microservices. People are getting it, so I've got to ask you: can you give us some examples of your customers, how they're organized, some case studies? Who uses you guys, and why do they love you?

>> Sure. Well, let's bring up the next slide. We've got some customer examples here, and your viewers, our viewers, can probably figure out who these guys are. I can't tell them, but if they go on our website, they can sort of put two and two together. The first one there is a major financial application SaaS provider, and in this particular case, they were having problems that they couldn't diagnose within the stack. Ultimately, they had to apply automation to it, and what we were able to do for them was give them a huge jump in reliability, which was actually the biggest problem they were having. We gave them 5,000 hours back a month in terms of the application; they were having pager duty alerts going off all the time. We actually gave them better performance, a 10% performance boost, and we dropped their cloud spend for that application by 72%. So in fact, it was an 80-plus percent price-performance or cost-performance improvement that we gave them, and essentially we helped them tune the entire stack. This was a hybrid environment, so it included VMs as well as more modern architecture. Today, I would say the overwhelming majority of our customers have moved off of VMs and are in a containerized environment, and even more to the point, Kubernetes, which we find a very, very high percentage of our customers have moved to. So most of the work we're doing today with new customers is around that, and if we look at the second and third examples here, those are examples of that. The second example is a company that develops websites. It's one of the big ones out in the marketplace: let's say if you were starting a new business and you wanted a website, they would develop that website for you. Their internal infrastructure is all brand new stuff. It's all Kubernetes, and they were actually already getting decent performance. We held their performance at their SLO, we achieved a 100% error-free scenario for them at runtime, and we dropped their cost by 80%. So for them, they needed us to hold serve, if you will, on performance and reliability and get their costs under control, because that's a cloud-native company; everything there is cloud cost. The interesting thing is it took us only nine steps, nine of our iterations, to actually get to optimal.
So it was very, very quick, and there was no integration required. In the first case, we actually had to do a custom integration for an underlying platform that was used for CI/CD, but with the-

>> John: Because of the hybrid, right?

>> Patrick: Sorry?

>> John: Because it was hybrid, right?

>> Patrick: Yes, because it was hybrid, exactly. But with the second one, we just plugged right in, and we were able to tune the Kubernetes environment just as I showed in that architecture slide. And then the third one is one of the leading application performance monitoring companies on the market. They have a bunch of their own internal applications, and those use a lot of cloud spend. They're actually running Kubernetes on top of VMs, but we don't have to worry about the VM layer. We just worry about the Kubernetes layer for them, and what we did for them was give them a 48% performance improvement in terms of latency and throughput. We dropped their error rates by 90%, which is pretty substantial to say the least, and we gave them a 50% cost delta from where they had been. So this is the perfect example of actually being able to deliver on all three things, which you can't always do; all applications are not created equal. This was one where we were able to deliver on all three of the key objectives. We were able to set them up in about 25 minutes from the time we got started, with no extra integration, and needless to say, it was a big, happy moment for the developers to be able to go back to their bosses and say, "Hey, we have better performance, better reliability. Oh, by the way, we saved you half."

>> So depending on the stack situation, you've got VMs and Kubernetes on the one side, and cloud-native, all-Kubernetes, which is the dream scenario, obviously. Not many people are like that. All the new stuff's going cloud-native, so that's ideal, and then the mixed ones, Kubernetes, but no VMs, right?

>> Yeah, exactly. So Kubernetes with no VMs, no problem. Kubernetes on top of VMs, no problem, but we don't manage the VMs. We don't manage the underlay at all, in fact. And the other thing is, we don't have to go back to the slide, but I think everybody will remember the slide that had the architecture, and on one side was our cloud instance. The only data going between the application and our cloud instance is the settings, so there's never any data. There's never any customer data: nothing for PCI, nothing for HIPAA, nothing for GDPR or any of those things. No personal data, no health data. Nothing is passing back and forth. Just the settings of the containers.
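To picture what a "settings only" exchange might look like, here is a hypothetical payload. The field names and values are invented for this sketch and are not Opsani's actual wire format; the point is simply that tuning needs resource settings and aggregate metrics, not request bodies or user records.

```python
# Hypothetical illustration of a settings-and-metrics-only payload: the shape of data
# an optimizer needs in order to recommend new values. Field names are invented.
import json

payload = {
    "app": "checkout-service",            # identifier only; no request bodies, no user data
    "settings": {
        "cpu": "1000m",
        "memory": "1Gi",
        "replicas": 4,
        "jvm_heap": "768m",
    },
    "metrics": {
        "p95_latency_ms": 212.0,
        "error_rate": 0.0,
        "requests_per_sec": 87.5,
    },
}
print(json.dumps(payload, indent=2))      # no payment, health, or personal data is present
```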
>> Patrick, while I've got you here, because you're such a great, insightful guest, thank you for coming on and showcasing your company. Kubernetes, real quick: how prevalent is this mainstream trend? Because you're seeing such great examples of performance improvements, SLAs being met, SLOs being met. How real is Kubernetes for the mainstream enterprise as they start to use containers to tip their legacy and get into the cloud-native, and certainly hybrid and soon-to-be multi-cloud, environment?

>> Yeah, I would not say it's dominant yet. Of container environments, I would say it's dominant now, but for all environments, it's not. I think the larger legacy companies are still going through that digital transformation, and so what we do is we catch them at that transformation point, and we can help them develop, because, as we remember from the AIOps slide, we can plug in at that test level and help them sort of pre-optimize as they're coming through. So we can actually help them be more efficient as they're transforming. The other side of it is the cloud-native companies. So you've got the legacy companies, brick and mortar, who are desperately trying to move to digitization. Then you've got the ones that are born in the cloud. Most of them aren't on VMs at all; most of them are on containers right from the get-go. But you do have some in the middle who have started to make a transition, and what they've done is they've taken their native VM environment and put Kubernetes on top of it so that they don't have to scuttle everything underneath it.

>> Great.

>> So I would say it's mixed at this point.

>> Great business model, helping customers today and being a bridge to the future. Real quick: what licensing models, how to buy, and what promotions do you have for Amazon Web Services customers? How do people get involved? How do you guys charge?

>> The product is licensed as a service, and the typical service is annual. We license it by application, so let's just say you have an application, and it has 10 microservices. That would be a standard application. We'd have an annual cost for optimizing that application over the course of the year. We have a large application pack, if you will, for, let's say, applications of 20 services, something like that, and then we also have a platform, what we call the Opsani platform, and that is for environments where the customer might have hundreds of applications and/or thousands of services. We can plug into their deployment platform, something like Harness or Spinnaker or Jenkins, or we can plug into their cloud Kubernetes orchestrator, and then we can actually discover the apps and optimize them. So we've got environments for both single apps and for many, many apps, with the same platform. And yes, thanks for reminding me: we do have a promotion for our AWS viewers. If you reference this presentation, and you look at the URL there, which is opsani.com/awsstartupshowcase, can't forget that, you will, number one, get a free trial of our software. If you optimize one of your own applications, we're going to give you an Oculus set of goggles, the augmented reality goggles. And we have one other promotion for your viewers and for our joint customers here, and that is if you buy an annual license, you're going to get 15 months. So that's what we're putting on the table. It's actually a pretty good deal. The Oculus isn't contingent on the license; that's a separate promotion. It's contingent on you actually optimizing one of your own services. So it's not a synthetic app; it's got to be one of your own apps. But that's what we've got on the table here, and I think it's a pretty good deal, and I hope you guys take us up on it.

>> All right, great. Get an Oculus Rift for optimizing one of your apps, and 15 months for the price of 12. Patrick, thank you for coming on and sharing the future of AIOps with us. Great product, a bridge to the future, solving a lot of problems, a lot of use cases there. Congratulations on your success. Thanks for coming on.

>> Thank you so much. This has been excellent, and I really appreciate it.

>> Hey, thanks for sharing.
I'm John Furrier, your host with theCUBE. Thanks for watching. (upbeat music)

Published Date : Sep 22 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Emily Freeman | PERSON | 0.99+
Patrick | PERSON | 0.99+
John | PERSON | 0.99+
Andrew | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Pat Conte | PERSON | 0.99+
10% | QUANTITY | 0.99+
50% | QUANTITY | 0.99+
Patrick Conte | PERSON | 0.99+
15 months | QUANTITY | 0.99+
second | QUANTITY | 0.99+
90% | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
thousands | QUANTITY | 0.99+
four | QUANTITY | 0.99+
nine steps | QUANTITY | 0.99+
30 | QUANTITY | 0.99+
Oculus | ORGANIZATION | 0.99+
100% | QUANTITY | 0.99+
72% | QUANTITY | 0.99+
48% | QUANTITY | 0.99+
10 microservices | QUANTITY | 0.99+
second part | QUANTITY | 0.99+
First | QUANTITY | 0.99+
second slide | QUANTITY | 0.99+
first case | QUANTITY | 0.99+
Today | DATE | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
20 services | QUANTITY | 0.99+
Prometheus | TITLE | 0.99+
second example | QUANTITY | 0.99+
second one | QUANTITY | 0.99+
five key | QUANTITY | 0.99+
One | QUANTITY | 0.99+
first | QUANTITY | 0.99+
third one | QUANTITY | 0.99+
80-plus % | QUANTITY | 0.99+
eight settings | QUANTITY | 0.99+
Opsani | PERSON | 0.99+
third examples | QUANTITY | 0.99+
two | QUANTITY | 0.99+
today | DATE | 0.99+
services | QUANTITY | 0.99+
50 | QUANTITY | 0.99+
eight values | QUANTITY | 0.99+
both | QUANTITY | 0.99+
nine | QUANTITY | 0.98+
three elements | QUANTITY | 0.98+
Servo | ORGANIZATION | 0.98+
80% | QUANTITY | 0.98+
opsani.com/awsstartupshowcase | OTHER | 0.98+
first one | QUANTITY | 0.98+
two small instances | QUANTITY | 0.98+
10 | QUANTITY | 0.97+
three things | QUANTITY | 0.97+
once a month | QUANTITY | 0.97+
one time | QUANTITY | 0.97+
70% | QUANTITY | 0.97+
GDPR | TITLE | 0.97+
zero | QUANTITY | 0.97+
Servo | TITLE | 0.97+
about 20 steps | QUANTITY | 0.97+
12 | QUANTITY | 0.96+
Kubernetes | TITLE | 0.96+
four times | QUANTITY | 0.96+

Real World Experiences with HPE GreenLake, Paulo Rego & Carlos Leite


 

>>Hello and welcome, everyone. My name is Carlos Leite. I'm the HPE managing director for Portugal. Today I have the privilege of being with Paulo Rego, who is the business-to-business director for Altis Portugal. Paulo, thank you very much. I want to start by thanking you for your presence and availability to be with us today, and for sharing a few words about our partnership, in particular the use of our IT consumption model as a service, HPE GreenLake. Maybe a good way to start is to explain a little bit about Altis. Altis is a multinational organization with a presence in four countries: France, Portugal, Israel, and the Dominican Republic. But today we will focus on Portugal. So, Paulo, could you please tell us about Altis Portugal and your company's vision?

>>Okay. Hello, Carlos, thank you very much for the invitation. Regarding your question: Altis Portugal was born as a telephone operator and, after a few decades of transformation, namely in the B2B market, is now the leading Portuguese player in ICT. We have positioned ourselves as the preferred partner for the digital transformation of organizations, both private and public. We have a large ICT portfolio of products and customized solutions across IoT, BPO, security, payments, and many other areas. And we are committed to achieving a more competitive, collaborative, digital, and low-carbon economy through our in-house products and our partner solutions, like HPE's. Regarding cloud, we have a decade of cloud presence in the market, based on a network of five data centers here in Portugal that have been operating since 1999. So we have made this cloud journey with a strong focus on managed, value-added services, serving both end users and top organizations. We strongly believe that, as a country, we cannot achieve the needed level of modernization and automation of operational models, or even the rationalization of companies' cost structures, without the strong and widespread adoption of an as-a-service, pay-as-you-go model like GreenLake.

>>Paulo, as you know, companies are currently facing many challenges and looking for solutions that address their needs but are also future-proof. So how do you see hybrid cloud's role in helping achieve business goals and requirements?

>>Well, cloud approaches are becoming a priority in organizations, and customers are increasingly aware of which solutions fit their needs. What COVID has shown us is how important agility, flexibility, and time to market are: we all needed to bring new solutions to the market as fast as we could, increase digital means, and accommodate the new workplace, because we are all working now not only in the office but also at home. And so what we see here in Portugal is that cloud-enabled organizations were better prepared for this challenge, and prior management hesitations within companies will probably now be overcome by decision-makers. Some verticals, like healthcare, banking, and public administration, have tight requirements on compliance, data security, and control, and I think these are some of the verticals where only a hybrid cloud approach can cope with all these cloud velocities.
>>And so I think that, under the present situation, budget constraints, efficiency, and predictability are even more present in the decision-making process, and traditional IT models will no longer cope with this challenge. Therefore, hybrid cloud strategies are presently delivering customers the best of both worlds. They want simplified IT provisioning and operational elasticity, but are nevertheless compelled to keep data close to home where data protection is critical and compliance and data governance shape every cloud service, supported by the right partner tools, like GreenLake Central. I think that will be the key to achieving a single as-a-service experience from edge to cloud. It allows the security and control of an on-prem model, along with the corporate agility and the OPEX flexibility so much demanded by our clients.

>>So, Paulo, regarding the experience, fortunately we already have common contracts and common customers. From your experience, what do you think are the benefits customers are getting from HPE GreenLake? Can you share some example use cases?

>>Okay, of course. Well, as I said before, HPE brings to the table a key differentiator, which is GreenLake Central. We have one portal to manage everything, cloud and on-prem, having at the same time the control, the ease of use, and a better capacity to plan future needs. So it serves IT, finance, and legal teams, giving the proper visibility to all of these stakeholders. In terms of examples, we have customers that are using GreenLake in both the private and public sector. We have implementations both at Altis data centers and at customers' premises, on a very diverse set of business apps, from mission-critical and sometimes legacy apps to SAP or cloud-native apps. For instance, we have storage-as-a-service solutions for the energy and transport sectors, and data-center-as-a-service in public administration, for customers that need to keep their IT on-prem because of compliance but look for cloud flexibility and the as-a-service model.

>>Good. I see, in fact, a very positive future in our partnership, but what is your opinion about that? How do you see the partnership with HPE, and the position it gives you to have even more success?

>>Well, on this question, talking a little bit about the near future, I think that in the short to medium term the focus will be on solving the pandemic crisis, for both customers and ourselves as service providers. We faced this challenge, and I think we were able to cope with it in 2021, very much supported by technology and digital solutions. And I think that with the help of HPE we have been able to pass this reality check under such critical circumstances. But now we need to step forward. We need to leverage the increased e-commerce levels; remote work, for instance, is here to stay. So we need to go towards the digital transformation of organizations. We need to extend the infrastructure to where business happens. We have to infuse cloud closer to the network; we are a telco, allowing for a new class of cloud-native and innovative applications for customers.
Some verticals where Altis is aiming for a new position also call for closer collaboration with cloud partners like HPE. For instance, e-health will bring us new opportunities in the near future, and the proper ICT foundations are crucial for the new healthcare services, patient-focused approaches, and data-driven decisions, as the cloud market matures. Now, I think that some companies still count on their partners to facilitate the cloud journey, helping them build the business cases for cloud, using consulting skills to facilitate change, and bringing peer use cases that can easily be applied. HPE is the most valuable partner for Altis here, not only bringing the new cloud technology, but also the existing experience and use cases, and helping us go through the learning curve seen in other markets. So, for the future, we hope for the best, and we will count on a leading partner like HPE to drive us through

>>this uncertain but promising future. Good. I want to reinforce my thanks: thank you very much for sharing this content and this information with us. It was, and it is, a real pleasure to work with you, and a pleasure for all your work with Altis as a company, but personally as well, Paulo, because I think it's important, beyond the business relationship, that we now have a trust relationship with the organization and with you personally. So thank you very much, Paulo, and have a nice day.

>>Thank you. Thank you very much.

Published Date : Mar 17 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Carlos | PERSON | 0.99+
Paul | PERSON | 0.99+
Portugal | LOCATION | 0.99+
HPA | ORGANIZATION | 0.99+
Paulo | PERSON | 0.99+
HP | ORGANIZATION | 0.99+
2021 | DATE | 0.99+
Altis | ORGANIZATION | 0.99+
Carlos Leite | PERSON | 0.99+
both | QUANTITY | 0.99+
telco | ORGANIZATION | 0.99+
1999 | DATE | 0.99+
today | DATE | 0.99+
Altis Portugal | ORGANIZATION | 0.99+
Paulo Rego | PERSON | 0.98+
both worlds | QUANTITY | 0.98+
Israel | LOCATION | 0.97+
four countries | QUANTITY | 0.97+
Carlos lights | PERSON | 0.97+
single | QUANTITY | 0.95+
HPE GreenLake | ORGANIZATION | 0.95+
one portal | QUANTITY | 0.94+
Dominica | LOCATION | 0.94+
HBA | ORGANIZATION | 0.94+
France | LOCATION | 0.93+
five data centers | QUANTITY | 0.93+
NHPA | ORGANIZATION | 0.91+
OPEX | ORGANIZATION | 0.91+
Portuguese | OTHER | 0.9+
both customers | QUANTITY | 0.9+
GreenLake | ORGANIZATION | 0.86+
each | QUANTITY | 0.84+
Hable | ORGANIZATION | 0.79+
prem | ORGANIZATION | 0.74+
GreenLake | TITLE | 0.72+
LTC | ORGANIZATION | 0.66+
a week | QUANTITY | 0.63+
decades | QUANTITY | 0.53+
SAP | TITLE | 0.49+