Patrick Bergstrom & Yasmin Rajabi | KubeCon + CloudNativeCon NA 2022
>>Good morning and welcome back to theCUBE, where we are excited to be broadcasting live all week from Detroit, Michigan at KubeCon + CloudNativeCon. Depending on who you're asking, Lisa, it's day two and things are buzzing. How are you feeling? >>Good, excited. Ready for day two, ready to have more great conversations, to see how this community is expanding, how it's evolving, and how it's really supporting itself. >>Yeah, yeah. This is a very supportive community. Something we've talked a lot about. And speaking of community, we've got some very bold and brave folks over here. We've got the CTO and the head of product from StormForge, and they are on a mission to automate Kubernetes. Now "automatic" and "Kubernetes" are not words that go in the same sentence very often, so please welcome Patrick and Yasmin. Thank you both for being here. Hello. How you doing? >>Thanks for having us. >>Thanks for having us. >>Talk about what you guys are doing. Cause as you said, Kubernetes autoscaling is anything but automatic. >>Yeah. >>What are some of the challenges? How do you help >>Eliminate this? Yeah, so the mission at StormForge is primarily automatic resource configuration and optimization, essentially. So we started as a machine learning company first. And it's kind of an interesting story cuz we're one of those startups that has pivoted a few times. And so we were running our machine learning workloads. Most >>Have, I think, >>Right? Yeah. Yeah. We started out running our machine learning workloads and moving them into Kubernetes. And then we weren't quite sure how to correctly adjust and size our containers. And so our ML team, we've got three PhDs in applied mathematics, they said, Well, hang on, we could write an algorithm for that. And so they did. And then, Oh, I love this. Yeah. And then we said, Well, holy cow, that's actually really useful. I wonder if other people would like that. And that's kind of where we got our start. >>You solved your own problem and then you built a business >>Around it. Yeah, exactly. >>That is fantastic. Is, is that driving product development at StormForge still? That kind of attitude? >>I mean, that kind of attitude definitely drives product development, but we're, you know, balancing that with what the users are, the challenges that they have, especially at large scale. We deal with a lot of large enterprises, and for us as a startup, we can relate to the problems that come with Kubernetes when you're trying to scale it. But when you're talking about the scale of some of these larger enterprises, it's just a different mentality. So we're trying to balance that, of how we take that input into how we build our product. Talk >>About that, like the, the end user input and how you're taking that in, because of course it's only going to be, you know, more of a symbiotic relationship when that customer feedback is taken and >>Acted on. Yeah, totally. And for us, because we use machine learning, it's a lot of building confidence with our users. So making sure that they understand how we look at the data, how we come up with the recommendations, and actually deploy those changes in their environment. There's a lot of trust that needs to be built there. So being able to go back to our users and say, Okay, we're presenting you this type of data, give us your feedback, and building it alongside them has helped a lot in these >>Relationships. Absolutely. You said the word trust, and that's something that we talk about at every 
>>Show.
I was gonna jump on that too. It's >>Not, Yeah, it's not a buzzword. It's not, it shouldn't be. Yeah. It really should be, I wanna say, lived and breathed, but that's probably grammatically incorrect. >>We're not a grammar show. It's okay, darling. Yeah, thank >>You. It should be truly embodied. >>Yeah. And I, I think it's not even unique to just what we do, but across tech in general, right? Like when I talk about SRE and building SRE teams, one of the things I mention is you have to build that trust first. And with machine learning, I think it can be really difficult too, for a couple different reasons. Like one, it tends to be a black box if it's actually true machine learning. Totally. Which ours is. But the other piece that we run into. Yeah. And the other piece we run into though is, I was an executive at UnitedHealth Group before I joined StormForge. And I would get companies that would come to me and try to sell me machine learning, and I would kind of look at it and say, Well no, that's just a basic decision tree. Or like, that's a super basic Holt-Winters forecast, right? Like that's not actually machine learning. And that's one of the things that we actually find ourselves kind of battling a little bit when we talk about what we do in building that trust. >>Talk a little bit about the latest release, as you guys had a very active September. Here we are, and towards the, I think, end of October. Yeah. What are some of the, the new things that have come out? New integrations, new partnerships. Give us the scoop on that. >>Yeah, well, I guess I'll start and then I'll probably hand it over to you. But like the, the big thing for us is, we talked about automating Kubernetes in the very beginning, right? Like Kubernetes has got a VPA, it's >>A wild sentence anyway. Yeah, yeah. >>It it >>Has. We're not gonna get over it the whole show. Yeah. >>It has a VPA built in, it has an HPA built in, and, and when you look at the data, and even when you read the documentation from Google, it explicitly says never the two should meet. Right. Because you'll end up thrashing and they'll fight each other. Well, the big release we just announced is, with our machine learning, we can now do both. And so we vertically scale your pods to the correct size. Yeah. >>I love that. >>Yeah, we can, we can scale your pods to the correct size and still allow you to enable the HPA, and we'll make recommendations for your scaling points and your thresholds on the HPA as well, so that they can work together to really, truly maximize your efficiency without sacrificing the performance and the reliability of the applications that you're running. That >>Sounds like a massive differentiator for >>StormForge, which I would say it is. Yeah. I think, as far as I know, we're the first in the industry that can do this. Yeah. >>And >>Very singularity vibes too. You know, the machines are learning, teaching themselves and doing it all automatically. Yep. Gets me very >>Excited. >>Yeah, absolutely. And from a customer demand perspective, what's the feedback been? Yeah, it's been a few >>Weeks. Yeah, it's been really great actually. And a lot of why we went down this path was user driven, because they're doing horizontal scale and they want to be able to vertically size as they're scaling. So if you put yourself in the shoes of someone that's configuring Kubernetes, you're usually guessing on what you're setting your CPU requests and limits to. But horizontal scale makes sense.
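[Editor's note: For readers less familiar with the settings being discussed, here is a minimal, illustrative sketch of the kind of configuration in play: a Deployment with the per-pod CPU and memory requests and limits that teams usually guess at, alongside a CPU-based HorizontalPodAutoscaler that adds or removes replicas. The names and numbers are hypothetical examples, not StormForge output.]

```yaml
# Hypothetical Deployment: the requests/limits below are the per-pod
# "vertical" settings that are typically set by guesswork.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-api
spec:
  selector:
    matchLabels:
      app: sample-api
  template:
    metadata:
      labels:
        app: sample-api
    spec:
      containers:
        - name: sample-api
          image: registry.example.com/sample-api:1.0
          resources:
            requests:
              cpu: 500m        # guessed baseline CPU per pod
              memory: 512Mi    # guessed baseline memory per pod
            limits:
              cpu: "1"
              memory: 1Gi
---
# Hypothetical HPA: the "horizontal" side, adding and removing replicas
# based on average CPU utilization across the pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

In the combined approach described above, a vertical recommendation would adjust the requests and limits in the first object while the HPA continues to own the replica count; note the Deployment deliberately sets no replicas field.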
You're either adding more things or removing more things. And so once they actually are scaled out as a large environment and they have to rethink, how am I gonna resize this now? It's just not possible. It's so many thousands of settings across all the different environments, and you're only thinking about CPU and memory, you're not thinking about a lot of things. But once you scale that out, it's a big challenge. So they came to us and said, Okay, cuz we were doing vertical scaling before and now we enable vertical and horizontal, and so they came to us and said, I love what you're doing about right sizing, but we wanna be able to do this while also horizontally scaling. And so the way that our software works is we give you the recommendations for what the settings should be and then allow Kubernetes to continue to add and remove replicas as needed. So it's not like we're going in and making changes to Kubernetes, but we make changes to the configuration settings so that it's the most optimal from a resource perspective. >>Efficiency has been a real big theme of the show. Yeah. And it's clear that that's a focus for you. Everyone here wants to do more, faster, of course. And innovation, that's the thing, and to do that sometimes we need partners. You just announced an integration with Datadog. Tell us about that. Yeah, >>Absolutely. Yeah. So the way our platform works is we need data, of course, right? So they're, they're a great partner for us, and we use them both as an input and an output. So we pull in metrics from Datadog to provide recommendations, and we'll actually display all those within the Datadog portal. Cause we have a lot of users that are like, Look, Datadog's my single pane of glass, and I hate using that word, but they get all their insights there. They can see their recommendations and then actually go deploy those. Whether they wanna automatically have the recommendations deployed, or go in and actually push a button. >>So give me an example of a customer that is using the, the new release and some of the business outcomes they're achieving. I imagine one of the things that you're enabling is just closing that Kubernetes skills gap. But from a business-level perspective, how are they gaining, like, competitive advantages to be able to get products to market faster, for example? >>Yeah, so one of the customers that was actually part of our press release and launch and spoke about us at a webinar, they are a SaaS product and deal with really bursty workloads. And so their cloud costs have been growing 40% year over year. And their platform engineering team is basically enabled to provide the automation for developers in their environment, but also to reduce those costs. So they want to, it's that trade-off of resiliency and cost performance. And so they came to us and said, Look, we know we're over-provisioned, but we don't know how to tackle that problem without throwing tons of humans at the problem. And so we worked with them and, just on a single app, found 60% savings, and we're working now to kind of deploy that across their entire production workload. But that allows them to then go back and get more out of the, the budget that they already have, and they can kind of reallocate that in other areas, >>Right? So there can be top line and bottom >>Line impact. Yeah. And I, I think there's some really direct impact to the carbon emissions of an organization as well. That's a good point. When you can reduce your compute consumption by 60%. >>I love this.
We haven't talked about this at all during the show. Yeah. And I'm really glad that you brought this up. All of the things that power this use energy. Yeah. >>What is it, like seven to 8% of all electricity in the world is consumed by data centers? Like, it's crazy. Yeah. Yeah. And so, like, that's wild. Yeah. Yeah. So being able to make a reduction in impact there too, especially with organizations that are trying to sign green pledges and everything else. >>It's hard. Yeah. ESG initiatives are huge. >>Absolutely. >>It's >>A whole lot. A lot of companies have ESG initiatives where they can't even go out and do an RFP with a business, right, if they don't have an actual, active, impactful ESG program. Yes. Yeah. >>And the RFPs that we have to fill out, we have to tell them how they'll help. >>Yeah. Yes. It's so, yeah, I mean, I was really struck when I looked on your website and I saw 54% average cost reduction for, yeah, for your cloud operations. I hadn't even thought about it from a power perspective. Yeah. I mean, imagine if we cut that to 3% of the world's power grid. That is just, that is very compelling. Speaking of compelling and exciting future things, talk to us about what's next. What's got you pumped for 2023, and what lies >>Ahead? Oh man. Well, that seems like a product conversation for sure. >>Well, we're super excited about extending what we do to other platforms, other metrics. So we optimize a lot right now around CPU and memory, but we can also give people insights into, you know, limiting OOM kills, limiting CPU throttling, so extending the metrics. And when you look at HPA and horizontal scale today, most of it is done with CPU, but there are some organizations out there that are scaling on custom metrics. So being able to take in more data to provide more recommendations and kind of extend what we can do from an optimization standpoint. >>That's, yeah, that's cool. And what has you most excited on the show floor? Anything? Anything that you've seen? Any keynotes? >>There's, well, I haven't had a lot of time to go to the keynotes unfortunately, but it's, >>Well, I'm shocked, you've been busy or something, right? Much your time here. >>I can't imagine why. But no, it's really interesting to see all the vendors that are popping up around Kubernetes. Focus specifically with security is always something that's really interesting to me. And automating CI/CD, and how they continue to dive into that automation. DevSecOps continues to be a big thing for a lot of organizations. Yeah. Yeah. >>I do, I think it's interesting when we marry, were you guys here last year? >>I was not here. >>No. So at, at the smaller version of this in Los Angeles. Yeah. I, I was really struck because there was still a conversation of whether or not we were all in on Kubernetes as, as kind of a community and a society. This year, and I'm curious if you feel this way too, everyone feels committed. Yeah. Yeah. I, I feel like there's no question that Kubernetes is the tool that we are gonna be using. >>Yeah. I, I think so. And I think a lot of that is actually being unlocked by some of these vendors that are being partners and helping people get the most outta Kubernetes, you know, especially at the larger enterprise organizations. Like, they want to do it, but the skills gap is a very real problem. Right. And so figuring out, like Yasmin talked about, figuring out how do we, you know, optimize or set up the correct settings without throwing thousands of humans at it.
Never mind the fact you'll never find a thousand people that wanna do that all day, every day. >>I was gonna say, it's a bold endeavor for those >>People, right? Yeah. And, and being able to close some of those gaps, whether it's optimization, security, DevOps, CI/CD. As we get more of those partners, like I just talked about, on the floor, then you see more and more enterprises being more open to leaning into Kubernetes a little bit. >>Yeah. Yeah. We've seen, we've had some great conversations the last day and, and today as well, with organizations that are history companies, like Ford Motor Company, for >>Example. Yeah. Right. >>Just right behind us, one of their EVs, and, and it's, they're becoming technology companies that happen to do cars or homes. >>Here, I had a nice chat with 'em this morning. Yes. With that storyline, honestly. >>Yes. We now have such a different lens into these organizations, how they're using technologies, advanced technologies, Kubernetes, et cetera, to really become data companies. Yeah. Because they have to be. Well, the consumers on the other end expect a Home Depot or a Ford or whomever, or your bank, yeah, to know who you are. I want the information right here whenever I need it, so I can do the transaction I need, and I want you to also deliver me information that is relevant to me. Yeah. Because there, there's no patience anymore. Yeah. >>And we partner with a lot of big FinTech companies, and it's, it's very much that. It's like, how do we continue to optimize? But then as they look at transitioning off of older organizations and capabilities, whether that's, they have a physical data center that's racked to the gills and they can't do anything about that, so they wanna move to cloud, or they're just dipping their toe into even private cloud with Kubernetes in their own instances. A lot of it is, how do we do this right? Like, how do we lean in and, yeah. >>Yeah. Well, I think you said it really well, that the debate seems to be over in terms of, do we go in on Kubernetes. That, that was a theme that I think we felt yesterday, even on, on day one of the keynotes. The community seems to be just craving more. I think that was another thing that we felt yesterday, was all of the contributors and the collaborators. People want to be able to help drive this community forward, because it's, it's a flywheel of symbiosis for all of the vendors here, the maintainers, and, and really businesses in any industry can benefit. >>Yeah. It's super validating. I mean, if you just look at the floor, there's like 20 different booths that talk about cost reporting for Kubernetes. So not only have people moved, but now they're dealing with those challenges at scale. And I think for us it's very validating, because there's so many vendors that are looking into the reporting of this and showing you the problem that you have. And then where we can help is, okay, now you know you have a problem, here's how we can fix it for you. >>Yeah. Yeah. That, that sort of dealing with challenges at scale that you said, I think that's also what we're hearing. Yeah. And seeing and feeling on the show floor. >>Yeah, absolutely. >>What can folks see and, and touch and feel in your booth? >>We have some demos there, you can play around with the product. We're giving away a Lego set, so we've >>Gotta get one of those. >>Alright, now we're gonna have to get some Lego. We do a swag segment at the end of the day every day. Now we've >>Got some cool socks. >>Yep. Socks are hot.
Let's, let's actually talk about scale internally as our closing question. What's going on at StormForge? If someone's watching right now, they're excited. Are you hiring? We are hiring. Yeah. How can they stalk you? What's the >>Scoop? Absolutely. So you can check us out on stormforge.io. We're certainly hiring across the engineering organization. We're hiring across the UX and product organization. We're dealing, like I said, we've got some really big customers that we're, we're working through, with some really fun challenges. And we're looking to continue to build on what we do and do new innovative things, especially cuz, like I said, we are a machine learning organization first. And so for me it's like, how do I collect all the data that I can, and then let's find out what's interesting in there that we can help people with. Whether that's CPU, memory, custom metrics, like I said, preventing OOM kills, driving availability, reliability. What can we do to, to kind of make a little bit more transparent the stuff that's going on underneath the covers in Kubernetes for the decision makers in these organizations. >>Yes. Transparency is a goal of >>Many. >>Yeah, absolutely. Well, and you mentioned fun. If this conversation is any representation, it would be very fun to be working on both of your teams. We, we have a lot of fun. Yasmin, Patrick, thank you so much for joining. Thanks for having us. Lisa, as usual, thanks for being here with me. My pleasure. And thank you to all of you for tuning into theCUBE's live show from Detroit. My name's Savannah Peterson, and we'll be back in a few.
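[Editor's note: Patrick closes by mentioning custom metrics and preventing OOM kills. As a hedged illustration only, "scaling on custom metrics" can look like the sketch below: an autoscaling/v2 HorizontalPodAutoscaler driven by a per-pod application metric instead of CPU. The workload name, metric name, and target value are hypothetical, and the example assumes a custom-metrics adapter (for example, a Prometheus adapter) already exposes the metric to the cluster.]

```yaml
# Hypothetical HPA scaling a worker on a custom per-pod metric
# (requests in flight) rather than CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 2
  maxReplicas: 30
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_in_flight   # hypothetical metric name
        target:
          type: AverageValue
          averageValue: "100"             # target average per pod
```

The memory requests and limits shown earlier are the settings that determine whether pods are OOM-killed under load, which is the other signal mentioned in the answer above.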
Matt Provo & Patrick Bergstrom, StormForge | KubeCon + CloudNativeCon Europe 2022
>>theCUBE presents KubeCon and CloudNativeCon Europe '22, brought to you by the Cloud Native Computing Foundation. >>Welcome to Valencia, Spain. We're at KubeCon + CloudNativeCon Europe, 2022. I'm Keith Townsend, and my co-host, Enrico Signoretti. Enrico's really proud of me, I've called him Enrico Signoretti every session. Senior IT analyst, GigaOm. We're talking to fantastic builders at KubeCon + CloudNativeCon about the projects and the efforts. Enrico, up to this point, it's been all about provisioning and security. What, what conversation have we been missing? >>Well, I mean, I, I think, I think that, uh, we passed the point of having the conversation of deployment, of provisioning. You know, everybody's very skilled, actually everything is done at day two. They are discovering that, well, there is a security problem, there is an observability problem. And in fact, we are meeting with a lot of people, and there are a lot of conversations with people really needing to understand what is happening. I mean, in their cluster, what, why it is happening, and all the, the questions that come with it. I mean, and, uh, the more I talk with, uh, people on the, on the show floor here, or even in the, you know, in the various sessions, is about, you know, we are growing, the, our clusters are becoming bigger and bigger. Uh, applications are becoming, you know, bigger as well. So we need to, you know, understand better what is happening. It's not only, you know, about cost, it's about everything at the >>End.
>>So can your solution also help to optimize the application in the long run? Because yes, of course, yep. You know, the lowing fluid is, you know, optimize the deployment. Yeah. But actually the long term is optimizing the application. Yes. Which is the real problem. >>Yep. So we actually, um, we're fine with the, the former of what you just said, but we exist to do the latter. And so we're squarely and completely focused at the application layer. Um, we are, uh, as long as you can track or understand the metrics you care about for your application, uh, we can optimize against it. Um, we love that we don't know your application. We don't know what the SLA and SLO requirements are for your app. You do. And so in, in our world, it's about empowering the developer into the process, not automating them out of it. And I think sometimes AI and machine learning sort of gets a bad wrap from that standpoint. And so, uh, we've at this point, the company's been around, you know, since 2016, uh, kind of from the very early days of Kubernetes, we've always been, you know, squarely focused on Kubernetes using our core machine learning, uh, engine to optimize metrics at the application layer, uh, that people care about and, and need to need to go after. And the truth of the matter is today. And over time, you know, setting a cluster up on Kubernetes has largely been solved. Um, and yet the promise of, of Kubernetes around portability and flexibility, uh, downstream when you operationalize the complexity, smacks you in the face. And, uh, and that's where, where storm forge comes in. And so we're a vertical, you know, kind of vertically oriented solution. Um, that's, that's absolutely focused on solving that problem. >>Well, I don't want to play, actually. I want to play the, uh, devils advocate here and, you know, >>You wouldn't be a good analyst if you didn't. >>So the, the problem is when you talk with clients, users, they, there are many of them still working with Java with, you know, something that is really tough. Mm-hmm <affirmative>, I mean, we loved all of us loved Java. Yeah, absolutely. Maybe 20 years ago. Yeah. But not anymore, but still they have developers. They are porting applications, microservices. Yes. But not very optimized, etcetera. C cetera. So it's becoming tough. So how you can interact with these kind of yeah. Old hybrid or anyway, not well in generic applications. >>Yeah. We, we do that today. We actually, part of our platform is we offer performance testing in a lower environment and stage. And we like Matt was saying, we can use any metric that you care about and we can work with any configuration for that application. So the perfect example is Java, you know, you have to worry about your heap size, your garbage collection tuning. Um, and one of the things that really struck, struck me very early on about the storm forage product is because it is true machine learning. You remove the human bias from that. So like a lot of what I did in the past, especially around SRE and, and performance tuning, we were only as good as our humans were because of what they knew. And so we were, we kind of got stuck in these paths of making the same configuration adjustments, making the same changes to the application, hoping for different results. But then when you apply machine learning capability to that, the machine will recommend things you never would've dreamed of. And you get amazing results out of >>That. So both me and an Rico have been doing this for a long time. 
Like I have battled to my last breath, the, the argument when it's a bare metal or a VM. Yeah. Look, I cannot give you any more memory. Yeah. And the, the argument going all the way up to the CIO and the CIO basically saying, you know what, Keith you're cheap, my developer resources expensive, my bigger box. Yep. Uh, buying a bigger box in the cloud to your point is no longer a option because it's just expensive. Talk to me about the carrot or the stick as developers are realizing that they have to be more responsible. Where's the culture change coming from? So is it, that is that if it, is it the shift in responsibility? >>I think the center of the bullseye for us is within those sets of decisions, not in a static way, but in an ongoing way, especially, um, especially as the development of applications becomes more and more rapid. And the management of them, our, our charge and our belief wholeheartedly is that you shouldn't have to choose, you should not have to choose between costs or performance. You should not have to choose where your, you know, your applications live, uh, in a public private or, or hybrid cloud environment. And so we want to empower people to be able to sit in the middle of all of that chaos and for those trade-offs and those difficult interactions to no, no longer be a thing. You know, we're at, we're at a place now where we've done, you know, hundreds of deployments and never once have we met a developer who said, I'm really excited to get outta bed and come to work every day and manually tune my application. <laugh> One side, secondly, we've never met, uh, you know, uh, a manager or someone with budget that said, uh, please don't, you know, increase the value of my investment that I've made to lift and shift us over mm-hmm <affirmative>, you know, to the cloud or to Kubernetes or, or some combination of both. And so what we're seeing is the converging of these groups, um, at, you know, their happy place is the lack of needing to be able to, uh, make those trade offs. And that's been exciting for us. So, >>You know, I'm listening and looks like that your solution is right in the middle in application per performance management, observability. Yeah. And, uh, and monitoring. So it's a little bit of all of this. >>So we, we, we, we want to be, you know, the Intel inside of all of that, mm-hmm, <affirmative>, we don't, you know, we often get lumped into one of those categories. It used to be APM a lot. We sometimes get a, are you observability or, and we're really not any of those things in and of themselves, but we, instead of invested in deep integrations and partnerships with a lot of those, uh, with a lot of that tooling, cuz in a lot of ways, the, the tool chain is hardening, uh, in a cloud native and, and Kubernetes world. And so, you know, integrating in intelligently staying focused and great at what we solve for, but then seamlessly partnering and not requiring switching for, for our users who have already invested likely in a APM or observability. >>So to go a little bit deeper. Sure. What does it mean integration? I mean, do you provide data to this, you know, other applications in, in the environment or are they supporting you in the work that you >>Yeah, we're, we're a data consumer for the most part. Um, in fact, one of our big taglines is take your observability and turn it into actionability, right? Like how do you take the it's one thing to collect all of the data, but then how do you know what to do with it? Right. 
So to Matt's point, um, we integrate with folks like Datadog. Um, we integrate with Prometheus today. So we want to collect that telemetry data and then do something useful with it for you. >>But, but also we want Datadog customers. For example, we have a very close partnership with, with Datadog, so that in your existing data dog dashboard, now you have yeah. This, the storm for capability showing up in the same location. Yep. And so you don't have to switch out. >>So I was just gonna ask, is it a push pull? What is the developer experience? When you say you provide developer, this resolve ML, uh, learnings about performance mm-hmm <affirmative> how do they receive it? Like what, yeah, what's the, what's the, what's the developer experience >>They can receive it. So we have our own, we used to for a while we were CLI only like any good developer tool. Right. Uh, and you know, we have our own UI. And so it is a push in that, in, in a lot of cases where I can come to one spot, um, I've got my applications and every time I'm going to release or plan for a release or I have released, and I want to take, pull in, uh, observability data from a production standpoint, I can visualize all of that within the storm for UI and platform, make decisions. We allow you to, to set your, you know, kind of comfort level of automation that you're, you're okay with. You can be completely set and forget, or you can be somewhere along that spectrum. And you can say, as long as it's within, you know, these thresholds, go ahead and release the application or go ahead and apply the configuration. Um, but we also allow you to experience, uh, the same, a lot of the same functionality right now, you know, in Grafana in Datadog, uh, and a bunch of others that are coming. >>So I've talked to Tim Crawford who talks to a lot of CIOs and he's saying one of the biggest challenges, or if not, one of the biggest challenges CIOs are facing are resource constraints. Yeah. They cannot find the developers to begin with to get this feedback. How are you hoping to address this biggest pain point for CIOs? Yeah. >>Development? >>Just take that one. Yeah, absolutely. That's um, so like my background, like I said, at United health group, right. It's not always just about cost savings. In fact, um, the way that I look about at some of these tech challenges, especially when we talk about scalability, there's kind of three pillars that I consider, right? There's the tech scalability, how am I solving those challenges? There's the financial piece, cuz you can only throw money at a problem for so long. And it's the same thing with the human piece. I can only find so many bodies and right now that pool is very small. And so we are absolutely squarely in that footprint of, we enable your team to focus on the things that they matter, not manual tuning like Matt said. And then there are other resource constraints that I think that a lot of folks don't talk about too. >>Like we were, you were talking about private cloud for instance. And so having a physical data center, um, I've worked with physical data centers that companies I've worked for have owned where it is literally full wall to wall. You can't rack any more servers in it. And so their biggest option is, well, I could spend 1.2 billion to build a new one if I wanted to. Or if you had a capability to truly optimize your compute to what you needed and free up 30% of your capacity of that data center. So you can deploy additional name spaces into your cluster. 
Like that's a huge opportunity. >>So either out of question, I mean, may, maybe it, it doesn't sound very intelligent at this point, but so is it an ongoing process or is it something that you do at the very beginning mean you start deploying this. Yeah. And maybe as a service. Yep. Once in a year I say, okay, let's do it again and see if something changes. Sure. So one spot 1, 1, 1 single, you know? >>Yeah. Um, would you recommend somebody performance tests just once a year? >>Like, so that's my thing is, uh, previous at previous roles I had, uh, my role was you performance test, every single release. And that was at a minimum once a week. And if your thing did not get faster, you had to have an executive exception to get it into production. And that's the space that we wanna live in as well as part of your C I C D process. Like this should be continuous verification every time you deploy, we wanna make sure that we're recommending the perfect configuration for your application in the name space that you're deploying >>Into. And I would be as bold as to say that we believe that we can be a part of adding, actually adding a step in the C I C D process that's connected to optimization and that no application should be released monitored and sort of, uh, analyzed on an ongoing basis without optimization being a part of that. And again, not just from a cost perspective, yeah. Cost end performance, >>Almost a couple of hundred vendors on this floor. You know, you mentioned some of the big ones, data, dog, et cetera. But what happens when one of the up and comings out of nowhere, completely new data structure, some imaginable way to click to elementry data. Yeah. How do, how do you react to that? >>Yeah. To us it's zeros and ones. Yeah. Uh, and you know, we're, we're, we're really, we really are data agnostic from the standpoint of, um, we're not, we we're fortunate enough to, from the design of our algorithm standpoint, it doesn't get caught up on data structure issues. Um, you know, as long as you can capture it and make it available, uh, through, you know, one of a series of inputs, what one, one would be load or performance tests, uh, could be telemetry, could be observability if we have access to it. Um, honestly the messier, the, the better from time to time, uh, from a machine learning standpoint, um, it, it, it's pretty powerful to see we've, we've never had a deployment where we, uh, where we saved less than 30% while also improving performance by at least 10%. But the typical results for us are 40 to 60% savings and, you know, 30 to 40% improvement in performance. >>And what happens if the application is, I, I mean, yes, Kubernetes is the best thing of the world, but sometimes we have to, you know, external data sources or, or, you know, we have to connect with external services anyway. Mm-hmm <affirmative> yeah. So can you, you know, uh, can you provide an indication also on, on, on this particular application, like, you know, where the problem could >>Be? Yeah, yeah. And that, that's absolutely one of the things that we look at too, cuz it's um, especially when you talk about resource consumption, it's never a flat line, right? Like depending on your application, depending on the workloads that you're running, um, it varies from sometimes minute to minute, day to day, or it could be week to week even. 
Um, and so especially with some of the products that we have coming out, with what we want to do, you know, partnering with, uh, you know, integrating heavily with the HPA and being able to handle some of those bumps, and not necessarily bumps, but bursts, and being able to do it in a way that's intelligent, so that we can make sure that, like I said, it's the perfect configuration for the application regardless of the time of day that you're operating in, or what your traffic patterns look like. Um, or, you know, what your disk looks like, right? Like, cuz with our, our lower-environment testing, any metric you throw at us, we can, we can optimize for. >>So Matt and Patrick, thank you for stopping by. Yeah. Yes. We could go all day, because day two is, I think, the biggest challenge right now. Yeah. Not just in Kubernetes, but application replatforming and, and transformation. Very, very difficult. Most CTOs and CIOs that I talk to, this is the challenge space. From Valencia, Spain, I'm Keith Townsend, along with my co-host Enrico Signoretti. And you're watching theCUBE, the leader in high-tech coverage.
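[Editor's note: The burst-handling point above maps to a specific knob in the Kubernetes autoscaling/v2 API. As a hedged illustration only, the behavior field lets you shape how aggressively replicas are added on a spike and how cautiously they are removed afterward; the workload name and numbers below are hypothetical, not recommendations from the guests.]

```yaml
# Hypothetical HPA scaling behavior: react to bursts quickly,
# scale back down slowly to avoid thrashing.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: bursty-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: bursty-api
  minReplicas: 3
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 65
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0      # react to bursts immediately
      policies:
        - type: Percent
          value: 100                     # allow doubling per period
          periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300    # cool down before shrinking
      policies:
        - type: Pods
          value: 2
          periodSeconds: 120
```

Even with this in place, the per-pod requests and limits still have to be right for the scaling to be efficient, which is the gap the guests describe filling.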
Matt Hicks, Red Hat - Red Hat Summit 2017
>> Announcer: Live from Boston, Massachusetts it's the Cube, covering Red Hat Summit 2017. Brought to you by Red Hat. >> Welcome back to Boston, everybody. This is Red Hat Summit and this is the Cube, the leader in live tech coverage. I'm Dave Vellante, with my co-host, Stu Miniman and Matt Hicks is here. Is the Vice President of the software engineering for OpenShift and management, at Red Hat. Matt, welcome to the Cube. >> Thank you very much, good to be here. >> So this is where all the action is, is management and management of Clouds and inter Clouds and intra clouds, and it's the sort of next big battleground and you guys seem to be, doin really well there. Have a lot of momentum. >> It's been a good year. I think it's going to be a great year going forward, cause it, it adds a lot of customer value you know, they're seeing the drive to get applications across all these environments, and I think we've hit a good balance of what we can provide in OpenShift, or middle work portfolio management and you hear a lot of customers talking about it all through summits. >> Well we saw some pretty sick demos this morning. I got to ask ya, it was basically the reference model, was okay, got some web logic, and web sphere apps. You know, wink, wink. And you want to modernize them, and so you guys just showed like a five click modernization process. Is it really that simple? Are people really, really doing that? >> Yeah. We have customers that have moved thousands of applications like that, and they're all different sorts of applications. But going from, a proprietary EE stack to getting on something closer to EAP. To deploying it on OpenShift, that is our bread and butter. And it's great because EAP can take advantage of OpenShift, lets customers re-platform the apps that they have. And like we said on Key Net, it sort of frees up your time then to start building the fun stuff. Building the next apps, and you know we've had a ton of success with that. >> Matt so we had the opportunity to talk to some of the innovation award winners. What we haven't actually gotten to cover too much yet, is all the news. So there were a number of announcements in your space, wonder if you could help us, kind of unpack for our audience. >> Sure thing. So we, You will hear a lot about the, just the enterprise production adoption, of the new technologies. Because one of the things for us, it's easy to come up and talk about new technologies. We like actually bringing customers up that have taken that new technology to production. So that's one of the big themes you'll see here at Summit. We launched OpenShift IO. Which for us actually had great success of OpenShift as Hybrid Net platform, Prod. But as you heard from United Health Group, Optum this morning. They have 10,000 plus developers to roll that out to. And we knew we needed to close the gap on how to get empowered developers. So OpenShift IO was the new Cloud based services for that. We will also announce and talk about our container health index. So when you start really making the bed on containers, how do you know what's inside of em, how do you get a simple grading system to understand like A through F. How well maintained is this. As well as being able to look under the covers and understand what goes into that A or what goes into that F. >> And maybe explain that a little bit more, because I think about like, you know, okay, I remember like in the virtualization world, I understood that. 
So many of containers live a lot shorter life, so, is there, is this just a dashboard that rolls that up, because I want to know probably the general health of what's going on, because there's no way humans going to be able to keep track of it. And I mean, we're not all Google with two billion containers, being brought up and killed every week. But it tends to be, at least from what I've seen, tell me if you see otherwise, that most containers are still much shorter lived than OS's. Or you know, VM4B4. >> You know I think that's, it's one of the advantages. Is that they can be pretty volatile, like that effect. You know, we have capabilities, like in OpenShift, like Image Streams driven to say, "How do you respond and incorporate this?" At the end of the day, if you can grab a container that in our world has an A rating, no security vulnerabilities today, and in a week, you could have multiple critical CVE's, that have been open that now affect that container. And so the benefit of containers is, you can re-roll em, and you can consume that update, but if you don't know about it, and you stay on that old version, you carry the same risk as if you had an out of date OS, that was very static. >> Yeah, I think that answers back to, you know, Ben Gustav, that golden image. And they would pardon that, and they'd leave it that way for two to five years. Right. And we all laughed because my friends in the security space is like, that's the biggest problem we have, is you're not ready for that. So this is, understanding what you've got out there, being able to address that, remediate, you know, push out changes, or know like hey, if you haven't, this is what you're at risk of. >> Absolutely. And that creates for us, it creates this foundation of, both trust between our customers and Red Hat, with their consuming. But then also between Red Hat and our ISV's. Because most of out ISV's, they're not in the Linux business or they're building specialized middle work capabilities on our products. So it's equally important for them to understand that if they're on an out of date version of RHEL, and they've embedded that into their container, that can cause as many problems, and they need to apply the updates in their stack as our customers. >> But that kind of gets to the business model a little bit. And you're engineering, but so I have an engineering question. But, I think most people in our audience understands that you know, Red Hat is a company built on, open source. And you know people say, "Why buy the cows, the milk is free." Well you've perfected that model, you know, 2.4 billion dollars in revenue. Three billion dollars in bookings. So you're obviously doing something right, although, not many have been able to, actually nobody's been able to create a business model like this. My question is from an engineering stand point. When, you're built on open source, and you're not, driven toward a proprietary mindset of okay, let's lock them in to the next REV. How does that change, sort of the engineering mindset, the culture and the protocol going forward. >> I love it. I have been in Red Hat 11 plus years, and everyday you're not tied into, dropping a new feature and pushing customers to that new version for revenue. And so it changes our mindset of, how do we provide value across the entire range of supported offerings that we have. In the case of RHEL, you could stay on some versions of RHEL for quite a while, and we provide value there in keeping that thing working. 
But at the same point, we're constantly moving this along, adding new innovation. We're able to provide value there. And as an engineer, it is refreshing. Sorry. >>I'll chat for a minute. So you, you know, a lot of companies that are 20 plus years old are criticized: oh, they don't innovate. You hear that all the time. They do incremental R and D. And it's true. They may spend a lot on R and D, but R and D is like a feature here, or another feature there. Designed to just keep putting the crumbs out. And what you're saying is, incremental is not really a fundamental part of your plan. >>Absolutely. We can, you know, we want to provide the same value for our customer if they're on RHEL 6, or they're looking towards the next major version of RHEL. And they can move anywhere on that life cycle, and that's what they get as part of their subscription. Same thing with OpenShift. And that choice for customers, of being able to take a product, consume anywhere on the life cycle of it, it's good for customers and it's nice for us, because there are just different ways that you innovate. Of driving, like, the next new great feature. Then you have other customers that you are going to provide value to through stability. >>So when you, we go to a lot of these events, as you can imagine. And when you talk to the traditional, you know, software players, you get this massive dose of, well, we do that too. We do containers, and, you know, we do cloud, and we do hybrid, and. So help us understand the difference between how they do it and how you do cloud. >>I think for us, if we picked containers, you know, I was talking to a group of customers this morning, of every upstream technology we pick that we're going to pull together into our products, we don't just pick em up and repackage em and give em to a customer, because we're a support business. So if it breaks at 3 a.m. and I have to re-roll a kernel to be able to fix it, I have to understand every piece in the stack. So we start with, we're going to drive a contributor position in the technologies. We pick our bets and we go all in on those areas. So Kubernetes, we'll carry, you know, with Google as, you know, a great technical partner, we run the majority of the SIGs with them. We have a top contributor position, and we invest really heavily in understanding that technology inside and out. And I think that's what shows in the customer value, of we could certainly take stuff, repackage it and ship it. It doesn't carry the same value as being able to work with a customer, drive new features into the product and keep them running in prod. >>Matt, so you mentioned Kubernetes. And I was actually a little surprised this morning in the keynote, I didn't hear Kubernetes. And I think the reason was, because I heard a lot about OpenShift, and that's just your mechanism for rolling that out there. I'm assuming your customers kind of understand that. Maybe you could help, you know, explain that a little bit more. >>Absolutely. And so, OpenShift is our enterprise distribution of Kubernetes, and that's sort of the business we're in. We have Linux, and RHEL is our enterprise distribution of that. We now have Kubernetes, this really popular community; OpenShift is our distribution of that for our customers. >>I was just saying, I guess you couldn't call it RHEK. Which, Red Hat Enterprise Kubernetes, probably wouldn't be a good idea. >>The world changes too fast. You pick names a long time ago.
>> But it's a nice model, because we know it. It's what we've done for a long time, it builds on everything we've done with RHEL, and it connects our middleware portfolio as well. I've been on the ops side and I've been on the development side, and I love seeing us address stuff right in the gap there for customers. And I think that's why we're seeing so much customer traction. It's a sweet spot where they've had pain, and it adds a lot of value for 'em.
>> Could you speak a little bit about your customers? Where are they with containers, Kubernetes, that whole adoption?
>> A lot of them are in production, which is nice. It's nice for a support business, because if you have excitement or early traction — we're a subscription business, so we want to make sure, you know, the more customers use it, the more they're going to grow and actually utilize it. And when you hear customers like UHG talking about the 4,000 projects built on OpenShift there — they have built up significant deployments on that. And Barclays, and I know we have a whole list of 'em that are here today. And so I like that fact: it's not just a cool technology. Customers have taken it all the way into production, and they're being really successful with it, which as an engineer you love. You want to see people using your products and solving problems with them.
>> Absolutely. Matt, you talked about the ethos of commitment and committers to open source projects. One of the challenges for a company like yours is that you've got to support a lot of different projects. So, though, you say you make your bets. We've talked a lot about whether there will ever be another Red Hat that emerges in the big data space — you see Cloudera and Hortonworks, and people are always sort of looking at those guys as a possibility. But they always cite the challenge of having to support so many projects. How do you manage that? And — you've been with Red Hat for a while — did you hit a tipping point at some point? Because, I mean, certainly you have software margins, 80, 90%; you've got great operating margins. So you've crossed that chasm, so to speak, to pick a bromide, but others have had such a challenge. Is it because they have to support those projects and it just takes a long time? And you guys baked over 20 years. I wonder if you can give us some insight there.
>> You know, I think it's as much art as it is science. I would love to say this is, you know, a cold, hard formula that we apply, but we have a good gut feeling for it: if you're going to back a technology or an upstream project, you want to make sure that it's going to expand beyond your own investment, and we've certainly made a lot of wrong bets where the technology doesn't evolve. But you've got to be able to change, and when we see some of the early indicators, like in Kubernetes — those are the ones where we like how it's governed, we like how it's structured, we like the other players that are in there. And that's just been one of the unique aspects of Red Hat: we pick pretty well.
>> So Matt, I'm wondering if you're willing to comment — we were at DockerCon a couple of weeks ago, and they've done a shift in how they're managing the open source stuff with the Moby project. What's your take on that? What's Red Hat's positioning there? It's been an interesting dynamic between Docker and Red Hat to watch the last couple of years.
>> Yeah, you know, I think Moby for us — it's one of about 1,600 different upstream projects that we pull in across our portfolios. And so we're certainly watching it, and we're seeing them evolve. We've been involved with the technology for a while now, but we don't necessarily know where that's going to go right now. We certainly look at it the way we do, you know, the whole breadth of open source projects we pull in.
>> What else is on your horizon? What's exciting you these days?
>> You know, I think just seeing the reality of hybrid cloud — it's becoming real for our customers. You probably saw some of the Amazon announcements today, where you're able to take services that might be in the public cloud and now pull them on premises. You heard customers talk about taking OpenShift and running that all the way out to the public cloud. And we love that aspect, because, you know, being able to use infrastructure to power applications, I think it's going to change IT. And then all the pieces that emanate around that — it's exciting for ISVs, it's exciting, you know, around our management products, from Ansible to CloudForms. There's just a lot that we can do there.
>> On the management products — you know, what Dave said: one of the bromides out there when I became an analyst seven years ago is, well, security and management are the biggest problems we have. I feel like I can go to that well anytime I need to. How are we doing as an industry in management? Obviously you've got your position, but, you know, the surface area of the landscape is just expanding exponentially — you talked about how many customers are multi-cloud today. So, you know, we know there's not a single thing that can do everything, but how are we doing as an industry, and Red Hat specifically?
>> I think from Red Hat's position, we've had a lot of success with Ansible just becoming a core automation technology, because I think the one common thread is: you have so many choices, you have so many pieces, you have to start automating them. How we did IT 15 years ago just will not — it won't scale anymore. Then, building up from that stack, how you move to policy-based management — that's earlier in the space, but there is a ton of capability there, and we've seen customers using it. From our perspective, it's combining CloudForms for orchestration, Satellite for content, and Ansible for automation. The way I describe it: I have the operations teams that run our OpenShift Online environments. That's a relatively small group of people that manages millions of applications, and those applications change faster than a human could push a button. And so, as customers get into that world — you know, we're certainly not in the Google world yet, but when you get to that scale, it changes how you have to manage it. It has to become automated, it has to become policy-driven, and then it's fun. I like it. Compared to doing ops in the 90s, how you do it today — it is refreshing as an operator to just have these tools at your fingertips.
>> High-frequency application development.
>> It really is!
>> Matt, thanks very much for coming on the Cube. It's great to see you, and congratulations and good luck going forward.
>> Fantastic, thanks.
>> You're welcome. Alright, keep it right there, everybody. Stu and I will be right back with our next guest. This is the Cube — we're live from Red Hat Summit in Boston. We'll be right back. (upbeat music)
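Matt frames management at that scale as automated and policy-driven — humans can't push buttons fast enough. His teams do this with Ansible, CloudForms and Satellite; purely as an illustration of the underlying idea, and not of Red Hat's actual tooling, here is a small, hypothetical Python sketch of one such policy applied across a cluster: flag every Deployment that is missing resource limits. It assumes the official Kubernetes Python client and a kubeconfig with read access, and all names are made up.

```python
# Hypothetical policy check: list Deployments cluster-wide and report any
# container that has no resource limits set. This is the kind of rule you
# would normally encode in a playbook or an operator rather than run by hand.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

violations = []
for dep in apps.list_deployment_for_all_namespaces().items:
    for container in dep.spec.template.spec.containers:
        limits = container.resources.limits if container.resources else None
        if not limits:
            violations.append(
                f"{dep.metadata.namespace}/{dep.metadata.name}:{container.name}"
            )

if violations:
    print("Deployments missing resource limits:")
    for v in violations:
        print(f"  {v}")
else:
    print("All deployments define resource limits.")
```

In a real environment, the remediation — patching the Deployment, opening a ticket, or kicking off a playbook — would hang off the same loop; the point is that the policy runs continuously instead of depending on an operator noticing.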