
Search Results for StormForge:

Matt Provo & Patrick Bergstrom, StormForge | KubeCon + CloudNativeCon Europe 2022


 

>> Instructor: "theCUBE" presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain and we're at KubeCon, CloudNativeCon Europe 2022. I'm Keith Townsend, and my co-host, Enrico Signoretti. Enrico's really proud of me. I've called him Enrico instead of Enrique every session. >> Every day. >> Senior IT analyst at GigaOm. We're talking to fantastic builders at KubeCon, CloudNativeCon Europe 2022 about the projects and their efforts. Enrico, up to this point, it's been all about provisioning, insecurity, what conversation have we been missing? >> Well, I mean, I think that we passed the point of having the conversation of deployment, of provisioning. Everybody's very skilled, actually everything is done at day two. They are discovering that, well, there is a security problem. There is an observability problem a and in fact, we are meeting with a lot of people and there are a lot of conversation with people really needing to understand what is happening. I mean, in their cluster work, why it is happening and all the questions that come with it. And the more I talk with people in the show floor here or even in the various sessions is about, we are growing so that our clusters are becoming bigger and bigger, applications are becoming bigger as well. So we need to now understand better what is happening. As it's not only about cost, it's about everything at the end. >> So I think that's a great set up for our guests, Matt Provo, founder and CEO of StormForge and Patrick Brixton? >> Bergstrom. >> Bergstrom. >> Yeah. >> I spelled it right, I didn't say it right, Bergstrom, CTO. We're at KubeCon, CloudNativeCon where projects are discussed, built and StormForge, I've heard the pitch before, so forgive me. And I'm kind of torn. I have service mesh. What do I need more, like what problem is StormForge solving? >> You want to take it? >> Sure, absolutely. So it's interesting because, my background is in the enterprise, right? I was an executive at UnitedHealth Group before that I worked at Best Buy and one of the issues that we always had was, especially as you migrate to the cloud, it seems like the CPU dial or the memory dial is your reliability dial. So it's like, oh, I just turned that all the way to the right and everything's hunky-dory, right? But then we run into the issue like you and I were just talking about, where it gets very very expensive very quickly. And so my first conversations with Matt and the StormForge group, and they were telling me about the product and what we're dealing with. I said, that is the problem statement that I have always struggled with and I wish this existed 10 years ago when I was dealing with EC2 costs, right? And now with Kubernetes, it's the same thing. It's so easy to provision. So realistically what it is, is we take your raw telemetry data and we essentially monitor the performance of your application, and then we can tell you using our machine learning algorithms, the exact configuration that you should be using for your application to achieve the results that you're looking for without over-provisioning. So we reduce your consumption of CPU, of memory and production which ultimately nine times out of 10, actually I would say 10 out of 10, reduces your cost significantly without sacrificing reliability. >> So can your solution also help to optimize the application in the long run? Because, yes, of course-- >> Yep. 
>> So can your solution also help to optimize the application in the long run? Because, yes, of course-- >> Yep. >> The low-hanging fruit, as you know, is optimizing the deployment. >> Yeah. >> But actually the long-term is optimizing the application. >> Yes. >> Which is the real problem. >> Yep. >> So, we're fine with the former of what you just said, but we exist to do the latter. And so, we're squarely and completely focused at the application layer. As long as you can track or understand the metrics you care about for your application, we can optimize against it. We love that we don't know your application, we don't know what the SLA and SLO requirements are for your app, you do, and so, in our world it's about empowering the developer into the process, not automating them out of it, and I think sometimes AI and machine learning sort of gets a bad rap from that standpoint. And so, at this point the company's been around since 2016, kind of from the very early days of Kubernetes. We've always been squarely focused on Kubernetes, using our core machine learning engine to optimize metrics at the application layer that people care about and need to go after. And the truth of the matter is, today and over time, setting a cluster up on Kubernetes has largely been solved. And yet the promise of Kubernetes around portability and flexibility, downstream when you operationalize, the complexity smacks you in the face, and that's where StormForge comes in. And so we're a vertical, kind of vertically oriented solution that's absolutely focused on solving that problem. >> Well, I don't want to play, actually. I want to play the devil's advocate here and-- >> You wouldn't be a good analyst if you didn't. >> So the problem is when you talk with clients, users, there are many of them still working with Java, something that is really tough. I mean, all of us loved Java. >> Yeah, absolutely. >> Maybe 20 years ago. Yeah, but not anymore, but still they have developers, they are porting applications, microservices. Yes, but not very optimized, et cetera, et cetera. So it's becoming tough. So how can you interact with this kind of old, hybrid, or anyway not well-engineered applications? >> Yeah. >> We do that today. We actually, part of our platform is we offer performance testing in a lower environment, in stage, and, like Matt was saying, we can use any metric that you care about and we can work with any configuration for that application. So a perfect example is Java. You have to worry about your heap size, your garbage collection tuning, and one of the things that really struck me very early on about the StormForge product is, because it is true machine learning, you remove the human bias from that. So like a lot of what I did in the past, especially around SRE and performance tuning, we were only as good as our humans were because of what they knew. And so, we kind of got stuck in these paths of making the same configuration adjustments, making the same changes to the application, hoping for different results. But then when you apply machine learning capability to that, the machine will recommend things you never would've dreamed of. And you get amazing results out of that.
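The Java example above, heap size and garbage collection settings alongside container CPU and memory, is essentially a multi-dimensional search space that an optimizer explores against a stated objective. The sketch below is purely illustrative of that idea in plain Python; it uses a naive random search as a stand-in for the actual machine learning, and the parameter names, ranges, load-test model, and scoring trade-off are all assumptions, not StormForge's experiment format.

    import random

    # Illustrative tunables for a containerized Java service. Ranges are made up.
    SEARCH_SPACE = {
        "cpu_millicores": (250, 2000),
        "memory_mib":     (512, 4096),
        "jvm_heap_mib":   (256, 3072),
        "gc_threads":     (1, 8),
    }

    def sample_config():
        """Pick one candidate configuration at random from the space."""
        return {k: random.randint(lo, hi) for k, (lo, hi) in SEARCH_SPACE.items()}

    def run_load_test(cfg):
        """Stand-in for a real load test in a lower environment.
        Returns (p95_latency_ms, monthly_cost_dollars) for the candidate config.
        Toy model only: more resources means lower latency and higher cost."""
        latency = 400_000 / (cfg["cpu_millicores"] + cfg["jvm_heap_mib"])
        cost = 0.02 * cfg["cpu_millicores"] + 0.005 * cfg["memory_mib"]
        return latency, cost

    def score(latency_ms, cost, latency_slo_ms=250):
        """Penalize SLO violations heavily, otherwise minimize cost."""
        return cost + (1000 if latency_ms > latency_slo_ms else 0)

    best = min((sample_config() for _ in range(50)),
               key=lambda c: score(*run_load_test(c)))
    print("best candidate:", best)

The point of the example is the shape of the problem: many interacting knobs, one objective that balances an SLO against cost, and an automated search replacing human guess-and-check.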
>> So both me and Enrico have been doing this for a long time. Like, I have battled to my last breath the argument, when it's bare metal or a VM, look, I cannot give you any more memory. >> Yeah. >> And the argument going all the way up to the CIO, and the CIO basically saying, you know what, Keith, you're cheap, my developer resources are expensive, buy a bigger box. >> Yeah. >> Yep. >> Buying a bigger box in the cloud, to your point, is no longer an option because it's just expensive. >> Yeah. >> Talk to me about the carrot or the stick as developers are realizing that they have to be more responsible. Where's the culture change coming from? Is it the shift in responsibility? >> I think the center of the bullseye for us is within those sets of decisions, not in a static way, but in an ongoing way, especially as the development of applications becomes more and more rapid, and the management of them. Our charge and our belief wholeheartedly is that you shouldn't have to choose. You should not have to choose between cost or performance. You should not have to choose where your applications live, in a public, private or hybrid cloud environment. And so, we want to empower people to be able to sit in the middle of all of that chaos and for those trade-offs and those difficult interactions to no longer be a thing. We're at a place now where we've done hundreds of deployments, and never once have we met a developer who said, "I'm really excited to get out of bed and come to work every day and manually tune my application." That's one side. Secondly, we've never met a manager or someone with budget that said, please don't increase the value of my investment that I've made to lift and shift us over to the cloud or to Kubernetes or some combination of both. And so what we're seeing is the converging of these groups, and their happy place is the lack of needing to make those trade-offs, and that's been exciting for us. >> So, I'm listening, and it looks like your solution is right in the middle of application performance management, observability. >> Yeah. >> And, monitoring. >> Yeah. >> So it's a little bit of all of this. >> Yeah, so we want to be the Intel Inside of all of that. We often get lumped into one of those categories, it used to be APM a lot, we sometimes get, are you observability? And we're really not any of those things in and of themselves. Instead, we've invested in deep integrations and partnerships with a lot of that tooling, 'cause in a lot of ways the tool chain is hardening in a cloud native and Kubernetes world. And so, integrating in intelligently, staying focused and great at what we solve for, but then seamlessly partnering and not requiring switching for our users who have likely already invested in an APM or observability tool. >> So to go a little bit deeper, what does integration mean? I mean, do you provide data to these other applications in the environment, or are they supporting you in the work that you do? >> Yeah, we're a data consumer for the most part. In fact, one of our big taglines is take your observability and turn it into actionability, right? Like, how do you take that? It's one thing to collect all of the data, but then how do you know what to do with it, right? So to Matt's point, we integrate with folks like Datadog, we integrate with Prometheus today. So we want to collect that telemetry data and then do something useful with it for you. >> But also, we want Datadog customers, for example, we have a very close partnership with Datadog, so that in your existing Datadog dashboard, now you have-- >> Yeah. >> The StormForge capability showing up in the same location. >> Yep. >> And so you don't have to switch out.
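Being "a data consumer" of tools like Prometheus, as Bergstrom puts it, typically means pulling metrics such as per-container CPU usage and requests over the Prometheus HTTP API. A rough sketch of that is below, assuming a reachable Prometheus endpoint and the standard cAdvisor and kube-state-metrics metric names; the URL and namespace are placeholders.

    import requests

    PROM_URL = "http://prometheus.example.internal:9090"   # placeholder endpoint

    def instant_query(promql: str) -> dict:
        """Run an instant PromQL query and return {series labels -> value}."""
        resp = requests.get(f"{PROM_URL}/api/v1/query",
                            params={"query": promql}, timeout=10)
        resp.raise_for_status()
        result = resp.json()["data"]["result"]
        return {frozenset(r["metric"].items()): float(r["value"][1]) for r in result}

    # Average CPU (cores) actually used per container over the last day, one namespace.
    cpu_used = instant_query(
        'avg_over_time(rate(container_cpu_usage_seconds_total{namespace="shop"}[5m])[1d:5m])'
    )

    # CPU requested per container, for comparison against actual usage.
    cpu_requested = instant_query(
        'kube_pod_container_resource_requests{namespace="shop", resource="cpu"}'
    )

Telemetry gathered this way is the raw material the conversation keeps coming back to: the usage data exists already, and the work is turning it into a configuration decision.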
>> So I was just going to ask, is it a push or a pull? What is the developer experience? When you say you provide developers these ML learnings about performance, how do they receive it? Like, what's the developer experience? >> They can receive it, for a while we were CLI only, like any good developer tool. >> Right. >> And we have our own UI. And so it is a push in a lot of cases, where I can come to one spot, I've got my applications, and every time I'm going to release, or plan for a release, or I have released and I want to pull in observability data from a production standpoint, I can visualize all of that within the StormForge UI and platform and make decisions. We allow you to set your kind of comfort level of automation that you're okay with. You can be completely set-and-forget, or you can be somewhere along that spectrum, and you can say, as long as it's within these thresholds, go ahead and release the application or go ahead and apply the configuration. But we also allow you to experience a lot of the same functionality right now in Grafana, in Datadog and a bunch of others that are coming. >> So I've talked to Tim Crawford, who talks to a lot of CIOs, and he's saying one of the biggest challenges, if not the biggest challenge, CIOs are facing is resource constraints. >> Yeah. >> They cannot find the developers to begin with to get this feedback. How are you hoping to address this biggest pain point for CIOs-- >> Yeah. >> And developers? >> You should take that one. >> Yeah, absolutely. So like my background, like I said, at UnitedHealth Group, right, it's not always just about cost savings. In fact, the way that I look at some of these tech challenges, especially when we talk about scalability, there's kind of three pillars that I consider, right? There's the tech scalability, how am I solving those challenges? There's the financial piece, 'cause you can only throw money at a problem for so long, and it's the same thing with the human piece. I can only find so many bodies, and right now that pool is very small, and so we are absolutely squarely in that footprint of we enable your team to focus on the things that matter, not manual tuning like Matt said. And then there are other resource constraints that I think a lot of folks don't talk about too. Like, you were talking about private cloud for instance, and so having a physical data center, I've worked with physical data centers that companies I've worked for have owned where it is literally full, wall to wall. You can't rack any more servers in it, and so their biggest option is, well, I could spend $1.2 billion to build a new one if I wanted to. Or if you had a capability to truly optimize your compute to what you needed and free up 30% of your capacity of that data center, so you can deploy additional namespaces into your cluster, like that's a huge opportunity.
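The capacity argument Bergstrom closes with above, freeing up an existing data center instead of building a new one, comes down to summing the gap between what workloads request and what they actually use. A back-of-the-envelope sketch with made-up numbers:

    # Hypothetical fleet snapshot: per-app CPU requested vs. 95th-percentile CPU used (cores).
    fleet = {
        "cms-frontend":  {"requested": 16.0, "p95_used": 6.5},
        "search-api":    {"requested": 24.0, "p95_used": 11.0},
        "batch-reports": {"requested": 12.0, "p95_used": 3.2},
    }

    HEADROOM = 1.2  # keep 20% above observed p95 rather than trimming to the bone

    def reclaimable_cores(apps):
        """Cores freed if each app were right-sized to p95 usage plus headroom."""
        freed = 0.0
        for name, a in apps.items():
            target = a["p95_used"] * HEADROOM
            freed += max(a["requested"] - target, 0.0)
        return freed

    total_requested = sum(a["requested"] for a in fleet.values())
    freed = reclaimable_cores(fleet)
    print(f"reclaimable: {freed:.1f} of {total_requested:.1f} requested cores "
          f"({100 * freed / total_requested:.0f}%)")

The numbers here are invented, but the arithmetic is the whole argument: over-requested capacity is schedulable space that could host additional namespaces instead of new hardware.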
>> So I have another question. I mean, maybe it doesn't sound very intelligent at this point, but is it an ongoing process, or is it something that you do at the very beginning, I mean, you start deploying this. >> Yeah. >> And maybe as a service. >> Yep. >> Once in a year I say, okay, let's do it again and see if something changed. >> Sure. >> So one spot, one single... >> Yeah, would you recommend somebody performance test just once a year? Like, so that's my thing is, at previous roles, my role was to do a performance test every single release, and that was at a minimum once a week, and if your thing did not get faster, you had to have an executive exception to get it into production. And that's the space that we want to live in as well, as part of your CI/CD process. Like, this should be continuous verification: every time you deploy, we want to make sure that we're recommending the perfect configuration for your application in the namespace that you're deploying into. >> And I would be as bold as to say that we believe we can be a part of adding, actually adding a step in the CI/CD process that's connected to optimization, and that no application should be released, monitored, and sort of analyzed on an ongoing basis without optimization being a part of that. And again, not just from a cost perspective, but for cost and performance.
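The continuous-verification step described above, where a release that got slower needed an executive exception, can be expressed as a simple gate in the pipeline. Here is a sketch under the assumption that the pipeline already produces a baseline and a candidate latency measurement; the threshold, file names, and JSON field are placeholders, not a StormForge interface.

    import json
    import sys

    ALLOWED_REGRESSION = 0.05  # fail the build if p95 latency regresses more than 5%

    def load_p95(path: str) -> float:
        """Read a p95 latency (milliseconds) from a load-test result file."""
        with open(path) as f:
            return float(json.load(f)["p95_latency_ms"])

    def main() -> int:
        baseline = load_p95("results/baseline.json")    # previous release
        candidate = load_p95("results/candidate.json")  # this release, same load profile
        change = (candidate - baseline) / baseline
        print(f"p95 latency: {baseline:.1f}ms -> {candidate:.1f}ms ({change:+.1%})")
        if change > ALLOWED_REGRESSION:
            print("performance regression exceeds threshold; blocking release")
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

A script like this runs as one stage of the CI/CD pipeline, so a non-zero exit code blocks the deploy the same way a failing unit test would.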
>> Almost a couple of hundred vendors on this floor. You mentioned some of the big ones, Datadog, et cetera, but what happens when one of the up-and-comers out of nowhere, completely new data structure, some imaginative way to collect telemetry data. >> Yeah. >> How do, how do you react to that? >> Yeah, to us it's zeros and ones. >> Yeah. >> And we really are data agnostic from the standpoint of, we're fortunate enough from the design of our algorithm standpoint, it doesn't get caught up on data structure issues. As long as you can capture it and make it available through one of a series of inputs, one would be load or performance tests, could be telemetry, could be observability, if we have access to it. Honestly, the messier the better from time to time, from a machine learning standpoint, it's pretty powerful to see. We've never had a deployment where we saved less than 30% while also improving performance by at least 10%. But the typical results for us are 40 to 60% savings and 30 to 40% improvement in performance. >> And what happens if the application is, I mean, yes, Kubernetes is the best thing in the world, but sometimes we have to, external data sources or we have to connect with external services anyway. >> Yeah. >> So, can you provide an indication also on this particular application, like where the problem could be? >> Yeah. >> Yeah, and that's absolutely one of the things that we look at too, 'cause especially when you talk about resource consumption, it's never a flat line, right? Like, depending on your application, depending on the workloads that you're running, it varies from sometimes minute to minute, day to day, or it could be week to week even. And so, especially with some of the products that we have coming out, with what we want to do, integrating heavily with the HPA and being able to handle some of those bumps, and not necessarily bumps, but bursts, and being able to do it in a way that's intelligent so that we can make sure that, like I said, it's the perfect configuration for the application regardless of the time of day that you're operating in, or what your traffic patterns look like, or what your disk looks like, right. Like, 'cause with our lower environment testing, any metric you throw at us, we can optimize for. >> So Matt and Patrick, thank you for stopping by. >> Yeah. >> Yes. >> We can go all day, because day two is I think the biggest challenge right now, not just in Kubernetes but application re-platforming and transformation, very, very difficult. Most CTOs and EAs that I talked to, this is the challenge space. From Valencia, Spain, I'm Keith Townsend, along with my host Enrico Signoretti, and you're watching "theCUBE," the leader in high-tech coverage. (whimsical music)

Published Date : May 19 2022

SUMMARY :

Keith Townsend and Enrico Signoretti sit down with StormForge founder and CEO Matt Provo and CTO Patrick Bergstrom at KubeCon + CloudNativeCon Europe 2022 in Valencia. They discuss using machine learning to right-size Kubernetes workloads at the application layer, removing human bias from tuning work such as Java heap and garbage collection settings, integrating with observability tools like Datadog, Prometheus, and Grafana, and making optimization a step in the CI/CD process. Bergstrom and Provo report typical results of 40 to 60% cost savings alongside 30 to 40% performance improvement, with no deployment saving less than 30%.



Charley Dublin, Acquia | StormForge Series


 

(upbeat music) >> We're back with Charley Dublin. He's the Vice President of Product Management at Acquia. Great to see you, Charley. Welcome to theCUBE. >> Nice to meet you, Dave. >> Acquia, tell us about the company. >> Sure, so Acquia is the largest and best provider of Drupal hosting capabilities. We rank number two in the digital experience platform space, just behind Adobe. Very strong business, growing well and innovating every day. >> Drupal, open source, super deep, high quality content management system. And more, you call it an experience platform. >> An experience platform, open, flexible. We want our customers to have choice, the ability to solve their problems how they want, leveraging the power of the open source community. >> What were the big challenges? Just describe your, kind of the business drivers. We're going to talk about StormForge, but the things that you were facing, some of the challenges that kind of led you to StormForge. >> Sure, so our objective first is to provide the best experience with Drupal. So that entails lots of capabilities around ease of use for Drupal itself. But that has to run on a world class platform. It has to be the most performant. It has to be the most secure. It needs to be flexible to enable customers to run Drupal however they want to run Drupal. And so that involves the ability to support thousands of different kinds of modules that come out of the community. We want our customers to have choice with Drupal and to be able to support those choices on our platform. >> So optionality is key. Sometimes that creates other challenges. Like you've got one of everything. How do you deal with that challenge? >> That's a great question. Every strength is a form of weakness. And so our objective is really first to provide that choice, but to do it in a cost efficient way. So we try to provide reference architectures for customers, opinionation for our customers to standardize, take out some of the complexity that they might have if everything were a snowflake. But our objective is really to support their needs and err on the side of that flexibility. >> So you guys had to go through a major replatforming effort around containers and Kubernetes. Can you talk about that and what role StormForge played? >> Sure, so tied to the last point, our objective is to provide customers the highest performance, most secure platform. The entire industry of course is moving to Kubernetes and leveraging containers. We are a large consumer of AWS services and are undergoing a major replatforming away from legacy AWS towards Kubernetes and containers. And so that major replatforming effort is intended to enable customers to run applications how they want to, and the power of Kubernetes and containers is to support that. And so we looked at StormForge as a way for us to right-size resource capacity to support our customers' applications. >> I love it, AWS is now legacy. But Andy Jassy one time said that if they had to redo Amazon, they'd do it in Lambda using serverless, and so, it's been around a long time now. Okay, so what were the outcomes that you were seeking? Was it better management, cost reduction? And how'd that go?
And so we felt that leveraging StormForge would put us in a position where we'd be able to right size resource to those different kinds of applications. Essentially let the platform align to how customers wanted to operate their applications. And so StormForge's capability in conjunction with Kubernetes and containers really puts us in a position where customers are able to get the performance that they want, and when they need it on demand. A lot of the auto scaling capabilities that you get from Kubernetes and containers supports that. And so it really enables customers to run their applications how they want to functionally, as well as from a performance perspective. >> So this move toward containers and microservices sort of modern application development coincides with a modern platform like StormForge. And so there are, I'm sure there are alternatives out there, why StormForge? Maybe you could explain a little bit more about why, from your perspective what it does and why you chose them. >> So we leverage AWS in many respects in terms of the underlying platform, but we are a very strong DIY for how that platform supports Drupal applications. We view our expertise as being the best of Drupal. And so we felt like for us to true really maximize Kubernetes and containers and the power of those underlying technologies. On the one hand allows us to automate more and do more for customers. On the other side of it, it puts a tremendous burden on the level of expertise in order to do that well for every customer every day at scale. And so that at scale part of that was the challenge. And so we leverage StormForge to enable us to rightsize applications for performance, provide us cost benefits, allocate what you need when you need it for our customers. And that at scale piece is a critical part. We could do elements of it internally. We tried to do elements of that internally, but as you start getting to scale from, a few apps to hundreds of apps to certainly across our fleet of tens of thousands of applications, you really need something that leverages machine learning. You really need a technology that's integrated well within AWS and StormForge provided that solution. >> Make sure I got this right. So it sounds like you sort of from a skill standpoint transitioned or applied your skills from turning knobs if you will, to automation and scale. >> Correct. >> And what was that like? Was the team leaning into that, loving it? Was it a, a challenging thing for you guys to get there? >> That's a good question. The benefit in the way that StormForge applies it. So they leverage machine learning to enable us to make better decisions. So we still have the control elements, but we have much greater insight into what that would mean ahead of time before customers would be affected. So we still have the knobs we need, but we're able to do it at scale. And then from the automation point, it allows us to focus our deep expertise on making Drupal and the core hosting platform capabilities awesome. Sort of the stuff and resource allocation resource consumption. That's an enabler we can outsource that to StormForge >> This is not batch it's, you're basically doing this in sort of near realtime Optimize Live, is the capability, maybe you can describe what it is. >> So Optimize Live is new, we're in testing with that. We've done extensive testing with StormForge on the core call it decision making logic that allows for the right sizing of consumption and resources for our customer application. 
So that has already been tested. So the core engine's been tested. Optimize Live allows us to do that in real time to make policy decisions across our fleet on what's the right trade off between performance cost, other parameters. Again, it informs our decision making and our management of our platform. That would be very, very difficult otherwise. Without StormForge we'd have to do massive data aggregation. We'd have to have machine learning and additional infrastructure to manage to derive this information, and, and, and. And that is not our core business. We don't want to be doing that. We want insights to manage our platform to enable customers and StormForge for provides that. >> So it's kind of human in the loop thing. Hey, here's what like our recommendation or here's some options that you might want to, here's a path that you want to go down, but it's not taking that action for you necessarily. You don't want that. You want to make sure that the experts are have a hand in it still, is that correct? >> Correct, you still want the experts to have a hand in it but you don't want them to have a hand in it on each individual app. You need that, that machine learning capability that insight that allows you to do that at scale. >> So if you had to step back and think about your relationship with StormForge what was the business impact of bringing them in? >> First, from a time to market perspective we're able to get to market with a higher performing more cost effective solution earlier. So there's that benefit. Second benefit to the earlier point is that we're able to make resource allocation decisions focused on where our core competency is, not into the guts of Kubernetes containers and the like. Third is that the machine learning talent that StormForge brings to the table is world class. I've run machine learning teams, data science teams and would put them in the top 1% of any team that I've worked with in terms of their expertise. The logic and decision making and insights is outstanding. So we can get to the best decision, the optimal decision much more quickly. And then when you accompany that with the newer product in Optimize Live with that automation component you mentioned, all the better. So we're able to make decisions quicker, get it implemented in our platform and realize the benefits. What customers get from that is much better performance of their applications. More real time, higher, able to scale more dynamically. What we get is resource efficiency and our network and platform efficiency. We're not over allocating a capacity that costs us more money than we should. We're under allocating capacity that could have a lower performance solution for our customers. >> So that puts money in your pocket and your customers are happier. So there are higher renewal rates, less churn, high air prices over time as you add more capabilities. >> That's correct. >> What's it like, new application approach, Kubernetes containers, fine. Okay I need a modern platform but it's a relatively new company StormForge. What's it like working with them? >> Their talent level is world class. I wasn't familiar with them when I joined Acquia came to know them and been very impressed. There's many other providers in the market that will speak to some similar capabilities and will make many claims. But from our assessment our view is that they're the right partner for us, they're the right size, they're flexible, excellent team. They've evolved their technology roadmap very quickly. 
They deliver on their promises and commits a very good team to work with. So I've been very impressed for such an early stage company to deliver and to support our business so rapidly. So I think that's a strength. And then I think again the quality that people that's been manifested in the product itself, it's a high quality product. I think it's unique to the market. >> So Napoleon Hill famous writer, thinker, he wrote "Think and Grow Rich." If you haven't read it, check it out. One of his concepts is this a lever, small lever can move a big rock. It can be very powerful. Do you see StormForge as having that kind of effect on your business that change on your business? >> I do. Like I said, I think the engagement with them has proven, and this isn't, debatable based on the results that we've had with them. We ran that team through the ringer to validate the technology. Again, we'd heard lots of promises from other companies. Ran that team through the ringer with extensive testing across many customers, large and small, many use cases, to really stress test their capabilities. And they came out well ahead of any metric we put forth even well ahead of claims that they had coming into the engagement. They exceeded that. And so that's why I'm here. Why I'm an advocate. Why I think they're an outstanding company with a tremendous amount of potential. >> Thinking about, what can you tell us about where you want to take the company and the partnership with StormForge. >> I think the main next step is for us to engage with StormForge to drive automation drive decisioning, as we expand and move more and more customers over to our new platform. We're going to uncover use cases, different challenges as we go. So I think the, it's a learning process for both both sides, but I think the it's been successful so far and has a lot potential. >> Sounds like you had a great business and a great new partnership. So thanks so much for coming on theCUBE, appreciate it. >> Thank you very much, appreciate your time. >> My pleasure. And thank you for watching theCUBE, you're global leader in enterprise tech coverage. (upbeat music)

Published Date : Feb 23 2022

SUMMARY :

Charley Dublin, Vice President of Product Management at Acquia, explains how the company is replatforming its Drupal hosting business from legacy AWS infrastructure onto Kubernetes and containers, and how StormForge's machine learning helps right-size resources across a fleet of tens of thousands of applications. He describes faster time to market, freeing his team to focus on Drupal rather than resource allocation, and early testing of Optimize Live for near-real-time, human-in-the-loop recommendations.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
CharleyPERSON

0.99+

AcquiaORGANIZATION

0.99+

Andy JassyPERSON

0.99+

DavePERSON

0.99+

AWSORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

StormForgeORGANIZATION

0.99+

AdobeORGANIZATION

0.99+

SecondQUANTITY

0.99+

thousandsQUANTITY

0.99+

30 different countriesQUANTITY

0.99+

FirstQUANTITY

0.99+

hundreds of appsQUANTITY

0.99+

bothQUANTITY

0.99+

DrupalTITLE

0.99+

ThirdQUANTITY

0.99+

Charley DublinPERSON

0.99+

LambdaTITLE

0.98+

Think and Grow RichTITLE

0.98+

both sidesQUANTITY

0.98+

OneQUANTITY

0.98+

firstQUANTITY

0.98+

Optimize LiveTITLE

0.97+

Napoleon HillPERSON

0.96+

applicationsQUANTITY

0.95+

tens of thousandsQUANTITY

0.94+

StormForgeTITLE

0.93+

1%QUANTITY

0.91+

each individual appQUANTITY

0.91+

one timeQUANTITY

0.86+

KubernetesORGANIZATION

0.86+

KubernetesTITLE

0.85+

oneQUANTITY

0.84+

theCUBEORGANIZATION

0.83+

PresidentPERSON

0.58+

LegacyORGANIZATION

0.55+

twoQUANTITY

0.53+

LiveTITLE

0.49+

Matt Provo, StormForge


 

(bright upbeat music) >> The adoption of container orchestration platforms is accelerating at a rate as fast or faster than any category in enterprise IT. Survey data from Enterprise Technology Research shows Kubernetes specifically leads the pack in both spending velocity and market share. Now, like virtualization in its early days, containers bring many new performance and tuning challenges. In particular, ensuring consistent and predictable application performance is tricky, especially because containers, they're so flexible and they enable portability. Things are constantly changing. DevOps pros have to wade through a sea of observability data, and tuning the environment becomes a continuous exercise of trial and error. This endless cycle taxes resources and kills operational efficiency. So teams often just capitulate and simply dial up and throw unnecessary resources at the problem. StormForge is a company founded mid last decade that is attacking these issues with a combination of machine learning and data analysis. And with me to talk about a new offering that directly addresses these concerns is Matt Provo, founder and CEO of StormForge. Matt, welcome to theCUBE. Good to see you. >> Good to see you. Thanks for having me. >> Yeah, so we saw you guys at a KubeCon, sort of first introduced you to our community, but add a little color to my intro there if you will. >> Yeah, well, you semi stole my thunder, but I'm okay with that. Absolutely agree with everything you said in the intro. You know, the problem that we have set out to solve, which is tailor-made for the use of real machine learning, not machine learning kind of as a marketing tag, is connected to how workloads on Kubernetes are really managed from a resource efficiency standpoint. And so a number of years ago, we built the core machine learning engine and have now turned that into a platform around how Kubernetes resources are managed at scale. And so organizations today, as they're moving more workloads over, sort of drink the Kool-Aid of the flexibility that comes with Kubernetes and how many knobs you can turn. And developers in many ways love it. Once they start to operationalize the use of Kubernetes and move workloads from pre-production into production, they run into a pretty significant complexity wall. And this is where StormForge comes in, to try to help them manage those resources more effectively by ensuring and implementing the right kind of automation, automation that empowers developers in the process and ultimately does not automate them out of it.
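To make those resource "knobs" concrete, the settings in question are per-container CPU and memory requests and limits. Here is a minimal sketch, with invented workload names and numbers, of the "just dial it up" pattern described in the intro; only the general shape of the request/limit settings reflects how Kubernetes behaves.

```python
# Hypothetical example of the per-container resource knobs on Kubernetes.
# The workload and numbers are illustrative, not a recommendation.

# Peak usage actually observed for the service (e.g., from monitoring).
observed_peak = {"cpu_millicores": 180, "memory_mib": 420}

# What gets written into the Deployment "to be safe".
requested = {"cpu_millicores": 2000, "memory_mib": 4096}  # capacity reserved on the node
limits    = {"cpu_millicores": 4000, "memory_mib": 8192}  # hard ceiling before throttling/OOM

for resource, peak in observed_peak.items():
    print(f"{resource}: requesting ~{requested[resource] / peak:.0f}x observed peak")
```

Multiplied across hundreds of services and environments, that gap between what is requested and what is actually used is the cost and efficiency problem the interview keeps coming back to.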
>> So you've got news, a hard launch coming to further address these problems. Tell us about that. >> Yeah, so historically, you know, like any machine learning engine, we think about data inputs and what kind of data is going to feed our system to be able to draw the appropriate insights out for the user. And so historically we've kind of been single-threaded on load and performance tests in a pre-production environment. And there's been a lot of adoption of that, a lot of excitement around it, and frankly amazing results. My vision has been for us to be able to close the loop, however, between data coming out of pre-production and the associated optimizations, and data coming out of a production environment and our ability to optimize that. A lot of our users along the way have said, these results in pre-production are fantastic. How do I know they reflect the reality of what my application is going to experience in a production environment? And so we're super excited to announce kind of the second core module for our platform, called Optimize Live. The data input for that is observability and telemetry data coming out of APM platforms and other data sources. >> So this is like Nirvana. So I wonder if we could talk a little bit more about the challenges that this addresses. I mean, I've been around a while and I've really observed... And I used to ask, you know, technology companies all the time: okay, so you're telling me beforehand what the optimal configuration should be and the resource allocation. What happens if something changes? >> Yeah. >> And then it's always, always a pause. >> Yeah. >> And Kubernetes is more of a rapidly changing environment than anything we've ever seen. So this is specifically the problem you're addressing. Maybe talk about that a little bit. >> Yeah, so we view what happens in pre-production as sort of the experimentation phase. And our machine learning is allowing the user to experiment and scenario plan. What we're doing with Optimize Live, in adding the production piece, is what we kind of also call our observation phase. And so you need to be able to run the appropriate checks and balances between those two environments to ensure that what you're actually deploying and monitoring, from an application performance and from a cost standpoint, is aligning with your SLOs and your SLAs as well as your business objectives. And so that's the entire point of this addition, to allow our users to experience, hopefully, the Nirvana associated with that, because it's an exciting opportunity for them and really something that nobody else is doing from the standpoint of closing that loop. >> So you said up front, machine learning not as a marketing tag. So I want you to sort of double-click on that. What's different from how other companies approach this problem? >> Yeah, I mean, part of it is a bias for me and a frustration as a founder; it's part of the reason I started the company in the first place. I think machine learning or AI gets tagged to a lot of stuff. It's very buzzwordy. It looks good. I'm fortunate to have found a number of folks from the outset of the company with, you know, PhDs in applied mathematics and a focus on actually building real AI at the core that is connected to solving the right kind of actual business problems. And so, you know, for the first three or four years of the company's history, we really operated as a lab. And that was our focus. We then decided we wanted to connect a fantastic team with differentiated technology to the right market timing. And when we saw all these pain points around how fast the adoption of containers and Kubernetes has taken place, but the pain that developers are running into, we actually found for ourselves that this was the perfect use case. >> So how specifically does Optimize Live work? Can you add a little detail on that? >> Yes, so many organizations today have an existing monitoring, APM, observability suite in place. They've also got a metric source. So this could be something like Datadog or Prometheus. And once that data starts flowing, there's an out-of-the-box piece of Kubernetes that ships with it called the VPA, or the Vertical Pod Autoscaler. And less than, really less than 1% of Kubernetes users take advantage of the VPA, mostly because it's really challenging to configure and it's not super compatible with the tool set or, you know, the ecosystem of tools in a Kubernetes environment. And so our biggest competitor is the VPA.
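Since Prometheus just came up as a typical metric source, here is a rough, hypothetical sketch of what pulling usage telemetry for a container could look like. The Prometheus HTTP query endpoint (`/api/v1/query`) is real, but the server URL, label selectors, and the simple p95-plus-headroom rule are assumptions for illustration only; this is not StormForge's actual algorithm.

```python
# Illustrative sketch: derive a starting point for container requests from
# usage observed in Prometheus. Assumes cAdvisor-style container metrics are
# being scraped; the URL, labels, and headroom factor are hypothetical.
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # assumed endpoint

def prom_query(expr: str) -> float:
    """Run an instant query and return the first value, or 0.0 if empty."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": expr}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

def recommend(namespace: str, container: str, headroom: float = 1.2) -> dict:
    """Naive recommendation: 95th-percentile usage over a week, plus headroom."""
    cpu_p95 = prom_query(
        f'quantile_over_time(0.95, '
        f'rate(container_cpu_usage_seconds_total{{namespace="{namespace}",'
        f'container="{container}"}}[5m])[7d:5m])'
    )
    mem_p95 = prom_query(
        f'quantile_over_time(0.95, container_memory_working_set_bytes'
        f'{{namespace="{namespace}",container="{container}"}}[7d])'
    )
    return {
        "cpu_millicores": int(cpu_p95 * 1000 * headroom),
        "memory_mib": int(mem_p95 / (1024 * 1024) * headroom),
    }

print(recommend("web", "drupal-php"))  # hypothetical workload
```

Even this crude version shows why the metric source matters: any recommendation is only as good as the telemetry feeding it.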
And what's happening in this world for developers is they're having to make decisions on a number of different metrics or resource elements, typically things like memory and CPU. And they have to decide, what are the requests I'm going to allow for this application and what are the limits? So what are those thresholds that I'm going to be okay with so that I can, again, try to hit my business objectives and keep in line with my SLAs? And to your earlier point in the intro, it's often guesswork. You know, they either have to rely on out-of-the-box recommendations that ship with the databases and other services that they're using, or it's a super manual process to go through and try to configure and tune this. And so with Optimize Live, we're making that one click. And so we're continuously and consistently observing and watching the data that's flowing through these tools, and we're serving back recommendations for the user. They can choose to let those recommendations automatically patch and deploy, or they can retain some semblance of control over the recommendations and manually deploy them into their environment themselves. And we, again, really believe that the user knows their application. They know the goals that they have, and we don't. But we have a system that's smart enough to align with the business objectives and ultimately provide the relevant recommendations at that point. >> So the business objectives are an input from the application team? >> Yep. >> And then your system is smart enough to adapt and address those. >> Application over application, right? And so the thresholds in any given organization, across their different ecosystem of apps or environments, could be different. The business objectives could be different. And so we don't want to predefine that for people. We want to give them the opportunity to build those thresholds in and then allow the machine learning to learn and to send recommendations within those bounds. >> And we're going to hear later from a customer who's hosting Drupal, one of the largest Drupal hosts. So it's all do-it-yourself across thousands of customers, so it's, you know, very unpredictable. I want to make something clear, though, as to where you fit in the ecosystem. You're not an observability platform, you leverage observability platforms, right? So talk about that and where you fit into the ecosystem. >> Yeah, so it's a great point. We're also, you know, a series B startup and growing. We've made the choice to be very intentionally focused on the problems that we solve, and we've chosen to partner or integrate otherwise. And so we do get put into the APM category from time to time. We are really an intelligence platform. And the intelligence and insights that we're able to draw are because of the core machine learning we've built over the years. And we also don't want organizations or users to have to switch from tools and investments that they've already made. And so we were never going to catch up to Datadog or Dynatrace or Splunk or AppDynamics or some of the others. And we're totally fine with that. They've got great market share and penetration. They do solve real problems. Instead, we felt like users would want a seamless integration into the tools they're already using. And so we view ourselves as kind of the Intel inside for that kind of a scenario.
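As a rough illustration of the "automatically patch and deploy" versus "manually deploy" choice described a moment ago, here is what applying a right-sizing recommendation to a Deployment could look like with the official Kubernetes Python client. The namespace, deployment, and container names and the limits-at-twice-requests policy are invented for the example; this sketches the general mechanism, not StormForge's implementation.

```python
# Sketch: apply a recommendation as a strategic-merge patch to one container of
# a Deployment. All names, values, and the limit policy are illustrative.
from kubernetes import client, config

def apply_recommendation(namespace: str, deployment: str, container: str,
                         rec: dict, auto_apply: bool = False) -> None:
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
    patch = {"spec": {"template": {"spec": {"containers": [{
        "name": container,  # containers are merged by name in a strategic merge patch
        "resources": {
            "requests": {"cpu": f'{rec["cpu_millicores"]}m',
                         "memory": f'{rec["memory_mib"]}Mi'},
            # Crude policy: limits at 2x requests; a real policy would be smarter.
            "limits": {"cpu": f'{rec["cpu_millicores"] * 2}m',
                       "memory": f'{rec["memory_mib"] * 2}Mi'},
        },
    }]}}}}
    if not auto_apply:
        # Human in the loop: surface the proposed change for review instead of applying it.
        print(f"Proposed patch for {namespace}/{deployment}: {patch}")
        return
    client.AppsV1Api().patch_namespaced_deployment(
        name=deployment, namespace=namespace, body=patch)

# Hypothetical usage with a recommendation like the one computed earlier.
apply_recommendation("web", "drupal-frontend", "drupal-php",
                     {"cpu_millicores": 220, "memory_mib": 512})
```

Whether that final API call fires automatically or only after a person approves the printed patch is exactly the control trade-off being discussed in the interview.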
And it takes observability and APM data and insights that were somewhat reactive, they're visualized and somewhat reactive, and we add that proactive nature onto it, the insights and ultimately the appropriate level of automation. >> So when I think, Matt, about cloud native, and I go back to the sort of origins of the CNCF, it was a, you know, handful of companies. And now you look at the participants, it'll, you know, make your eyes bleed. How do you address dealing with all those companies, and what is the partnership strategy? >> Yeah, it's so interesting, because even that CNCF landscape has exploded. It was not too long ago that it was as small as or smaller than the FinOps landscape today, which, by the way, is also on a neck-breaking, you know, growth curve. Although there are a lot of companies and a lot of tools, we're starting to see a significant amount of consistency or hardening of the tool chain, you know, with our customers and users. And so we've made strategic and intentional decisions on deep partnerships, in some cases like OEM uses of our technology, and certainly, you know, intelligent and seamless integrations into a few. So, you know, we'll be announcing a really exciting partnership with AWS, and specifically what they're doing with EKS, their Kubernetes distribution and services. We've got a deep partnership and integration with Datadog, and then with Prometheus, and specifically a few other cloud providers that are operating managed Prometheus environments. >> Okay, so where do you want to take this thing? You're not taking the observability guys head on, smart move. So many of those are even entering the market now. But what is the vision? >> Yeah, so we've had this debate a lot as well, 'cause it's super difficult to create a category. You know, on one hand, I have a lot of respect for founders and companies that do that. On the other hand, from a market timing standpoint, you know, we fit into AIOps; that's really where we fit. You know, we've made a bet on the future of Kubernetes and what that's going to look like. And so from a containers and Kubernetes standpoint, that's our bet. But we're an AIOps platform. You know, we'll continue getting better at the problems we solve with machine learning, and we'll continue adding data inputs. So we'll go, you know, we'll go beyond the application layer, which is really where we play now. We'll add, you know, kind of whole-cluster optimization capabilities across the full stack. And the way we will get there is by continuing to add different data inputs that make sense across the different layers of the stack. And it's exciting. We can stay vertically oriented on the problems that we're really good at solving, but we can become more applicable and compatible over time. >> So that's your next concentric circle. As the observability vendors expand their observation space, you can just play right into that. >> Yeah. >> The more data you get, because you're purpose-built for solving these types of problems. >> Yeah, so you can imagine, right now out of observability we're taking things like telemetry data. Pretty quickly you can imagine a world where we take traces and logs and other data inputs as that ecosystem continues to grow. It just feeds our own... you know, we are reliant on data. >> Excellent, Matt, thank you so much. >> Thanks for having me. >> Appreciate you coming on. Okay, keep it right there. In a moment, we're going to hear from a customer with a highly diverse and constantly changing environment that I mentioned earlier. They went through a major replatforming with Kubernetes on AWS.
You're watching theCUBE, your leader in enterprise tech coverage. (bright upbeat music)

Published Date : Feb 23 2022


DV Stormforge Outro


 

Okay, we're set to wrap up this session on solving the K8s complexity gap, optimizing with machine learning, brought to you by StormForge. You know, containers, they're all about simplifying the packaging of application components, and the world needed an abstraction layer to simplify the management of all these containers that are being created and deployed, hence the explosive adoption of Kubernetes, which rose from a series of improbable events to take the application development world by storm. We heard today how StormForge is introducing Optimize Live, marrying data from pre-production environments with telemetry data from observability platforms in production settings, and using machine intelligence to accelerate insights on which actions to take to improve application performance. Now, being able to correlate what you thought was going to happen and be optimal in a pre-production environment, and then iterating on what's actually happening in a real-world production setting, and bridging the gap between those two worlds, that's new and that's exciting. You know, unlike the days of virtualization, where this type of optimization took the better part of a decade and a ton of tribal knowledge, in today's world that time to optimization is being compressed by companies like StormForge, combining data with AI and cloud-native APIs to leverage an ecosystem of innovations in observability to accelerate high-quality application delivery. Kubernetes is storming the castle and there's no stopping it. StormForge and a host of companies are stepping up to help customers take advantage of this wave by delivering technologies that help predict and manage customer experiences and accelerate innovation. Remember, all these sessions will be available immediately on demand at thecube.net and at stormforge.io. Thanks for watching Solving the Kubernetes Complexity Gap by Optimizing with Machine Learning, brought to you by StormForge and theCUBE, your leader in enterprise tech coverage. We'll see you next time.

Published Date : Feb 21 2022


Matt Provo, StormForge


 

[Music] the adoption of container orchestration platforms is accelerating at a rate as fast or faster than any category in enterprise i.t survey data from enterprise technology research shows kubernetes specifically leads the pack in both spending velocity and market share now like virtualization in its early days containers bring many new performance and tuning challenges in particular ensuring consistent and predictable application performance is tricky especially because containers they're so flexible and they enable portability things are constantly changing devops pros have to wade through a sea of observability data and tuning the environment becomes a continuous exercise of trial and error this endless cycle taxes resources and kills operational efficiency so teams often just capitulate and simply dial up and throw unnecessary resources at the problem stormforge is a company founded mid last decade that is attacking these issues with a combination of machine learning and data analysis and with me to talk about a new offering that directly addresses these concerns is matt provo founder and ceo of stormforge matt welcome to the cube good to see you good to see you thanks for having me yeah so we saw you guys at a kubecon sort of first introduce you to our community but add a little color to my intro there yeah well you semi stole my thunder but uh i'm okay with that uh absolutely agree with everything you said in the intro um you know the the problem that we have set out to solve which is tailor-made for the use of real machine learning not machine learning kind of as a as a marketing tag uh is is connected to how workloads on kubernetes are are really managed from a resource efficiency standpoint and so a number of years ago we built uh the the core machine learning engine and have now turned that into a platform around how kubernetes resources are managed at scale and so organizations today as they're moving more workloads over uh sort of drink the kool-aid of the flexibility that comes with kubernetes and how many knobs you can turn and developers in many many ways love it once they start to operationalize the use of kubernetes and move uh workloads from pre-production into production they run into a pretty significant complexity wall and and this is where stormforge comes in to try to help them manage those resources more effectively in ensuring and implementing the right kind of automation that empowers developers into the process ultimately does not automate them out of it so you've got news yeah hard launch coming and to further address these problems tell us about that yeah so historically um uh you know like any machine learning engine we think about data inputs and what kind of data is going to feed our our system to be able to draw the appropriate insights out out for the user and so historically we are we've kind of been single threaded on load and performance tests in a pre-production environment and there's been a lot of adoption of that a lot of excitement around it and and frankly amazing results my vision has been uh for us to be able to close the loop however between uh data coming out of pre-production and opt in the associated optimizations and data coming out of production a production environment uh and and our ability to optimize that a lot of our users along the way have have said these results in pre-production are are fantastic how do i know they reflect reality of what my application is going to experience in a production environment and so we're super 
excited to to announce kind of the second core module for our platform called optimizelive the data input for that is uh observability and telemetry data coming out of apm platforms and and other data sources so this is like nirvana so i wonder if we could talk a little bit more about the the challenges that this address is i mean i've been around a while and it really have observed and i used to ask you know technology companies all the time okay so you're telling me beforehand what the optimal configuration should be and resource allocation what happens if something changes yeah and then it's always always a pause yeah and kubernetes is more of a rapidly changing environment than anything we've ever seen yeah so this is specifically the problem you're addressing maybe talk about that yeah so we view what happens in pre-production as sort of the experimentation phase and our machine learning is is allowing the user to experiment and design and scenario plan what we're doing uh with optimize live and adding the the production piece is uh what we kind of also call kind of our observation phase and so you need to be able to to to run the appropriate checks and balances between those two environments to ensure that what you're actually deploying and monitoring from an application performance from a cost standpoint is aligning with your slos and your slas as well as your business objectives and so that's the entire point of of this edition is to is to allow our users uh to experience uh hopefully the nirvana associated with that because it's an exciting er it's an exciting opportunity for them and really something that uh nobody else is doing from the standpoint of of closing that loop so you said upfront machine learning not as a marketing tag so i want you to sort of double click on that what's different than how other companies approach this problem yeah i mean part of it is a bias for me and a frustration as a founder of of the reason i started the company in the first place i think machine learning or ai gets tagged to a lot of stuff it's very buzz wordy it's it looks good i'm fortunate to have found a number of folks from the outset of the company with you know phds in applied mathematics and a focus on actually building real ai at the core uh that is connected to solving the right kind of actual business problems and so you know for the first three or four years of the company's history we really operated as a lab and that was our our focus we were we then decided we're trying to connect a fantastic team with differentiated technology to the right market timing and when we saw all these pain points around how fast the adoption of containers and kubernetes have taken place but the pain that the developers are running into we found it we actually found for ourselves uh that this was the perfect use case so how specifically does optimize live work can you add a little detail on that yeah so when you um many organizations today have an existing monitoring apm observability suite really in in place they've also got they've also got a metric source so this could be something like datadog or prometheus and once that data starts flowing there's an out of the box or or kind of a piece of kubernetes that ships with it called the vpa or the vertical pod auto scaler and uh less than really less than one percent of kubernetes users take advantage of the of the vpa mostly because it's really challenging to configure and it's not super compatible with the the tool set or the eco you know the ecosystem 
And so our biggest competitor is the VPA. And what's happening in this environment, or in this world for developers, is they're having to make decisions on a number of different metrics or resource elements, typically things like memory and CPU, and they have to decide what are the limitations, what are the requests I'm going to allow for this application, and what are the limits. So what are those thresholds that I'm going to be okay with, so that I can again try to hit my business objectives and keep in line with my SLAs. And to your earlier point in the intro, it's often guesswork. You know, they either have to rely on out of the box recommendations that ship with the databases and other services that they are using, or it's a super manual process to go through and try to configure and tune this.
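(The knobs being guessed at here are the per-container requests and limits. A minimal, purely illustrative sketch using the Kubernetes Python client; the container name, image, and values are hypothetical.)

    from kubernetes import client  # assumes the third-party "kubernetes" package is installed

    # Hypothetical values a team might guess at today; these are exactly the numbers
    # a recommendation engine would refine over time.
    web_container = client.V1Container(
        name="web",                      # hypothetical container name
        image="registry.example.com/web:1.0",
        resources=client.V1ResourceRequirements(
            requests={"cpu": "500m", "memory": "512Mi"},  # what the scheduler reserves
            limits={"cpu": "1", "memory": "1Gi"},         # ceiling before throttling / OOM kill
        ),
    )
    # This container spec would normally sit inside a Deployment's pod template.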
And so with Optimize Live we're making that one-click. And so we're continuously and consistently observing and watching the data that's flowing through these tools, and we're serving back recommendations for the user. They can choose to let those recommendations automatically patch and deploy, or they can retain some semblance of control over the recommendations and manually deploy them into their environment themselves. And we again really believe that the user knows their application, they know the goals that they have, we don't. But we have a system that's smart enough to align with the business objectives and ultimately provide the relevant recommendations. >> So the business objectives are an input from the application team. >> Yeah. >> And then your system is smart enough to adapt and address those, application over application. >> Right. And so the thresholds in any given organization, across their different ecosystem of apps or environment, could be different. The business objectives could be different. And so we don't want to predefine that for people. We want to give them the opportunity to build those thresholds in and then allow the machine learning to learn and to send recommendations within those bounds. >> And we're going to hear later from a customer who's hosting Drupal, one of the largest Drupal hosts, so it's all do-it-yourself across thousands of customers, so it's, you know, very unpredictable. I want to make something clear though as to where you fit in the ecosystem. You're not an observability platform, you leverage observability platforms, right? So talk about that and where you fit into the ecosystem. >> Yeah, so that's a great point. We're also, you know, a series B startup and growing, where we've made the choice to be very intentionally focused on the problems that we've solved, and we've chosen to partner or integrate otherwise. And so we do get put into the APM category from time to time. We're really an intelligence platform, and that intelligence and insights that we're able to draw is because of the core machine learning we've built over the years. And we also don't want organizations or users to have to switch from tools and investments that they've already made. And so we were never going to catch up to Datadog or Dynatrace or Splunk or AppDynamics or some of the others, and we're totally fine with that. They've got great market share and penetration, they do solve real problems. Instead, we felt like users would want a seamless integration into the tools they're already using, and so we view ourselves as kind of the Intel inside for that kind of a scenario. And it takes observability and APM data and insights that were somewhat reactive, they're visualized and somewhat reactive, and we add that proactive nature onto it, the insights and ultimately the appropriate level of automation. >> So when I think, Matt, about cloud native, and I go back to the sort of origins of CNCF, it was a handful of companies, and now you look at the participants, it'll, you know, make your eyes bleed. How do you address dealing with all those companies, and what's the partnership strategy? >> Yeah, it's so interesting, because it's just that even the CNCF landscape has exploded. It was not too long ago where it was as small or smaller than the FinOps landscape today, which by the way, the FinOps piece is also on a neck-breaking, you know, growth curve. I do see, although there are a lot of companies and a lot of tools, we're starting to see a significant amount of consistency or hardening of the tool chain, you know, with our customers and end users. And so we've made strategic and intentional decisions on deep partnerships, in some cases like OEM uses of our technology, and certainly, you know, intelligent and seamless integrations into a few. So, you know, we'll be announcing a really exciting partnership with AWS, and specifically what they're doing with EKS, their Kubernetes distribution and services. We've got a deep partnership and integration with Datadog, and then with Prometheus, and specifically a few other cloud providers that are operating managed Prometheus environments. >> Okay, so where do you want to take this thing? You're not taking the observability guys head on, smart move, so many of those even entering the market now, but what is the vision? >> Yeah, so we've had this debate a lot as well, because it's super difficult to create a category. You know, on one hand, I have a lot of respect for founders and companies that do that. On the other hand, from a market timing standpoint, you know, we fit into AIOps, that's really where we fit. You know, we've made a bet on the future of Kubernetes and what that's going to look like, and so from a containers and Kubernetes standpoint, that's our bet. But we're an AIOps platform. You know, we'll continue getting better at the problems we solve with machine learning, and we'll continue adding data inputs, so we'll go beyond the application layer, which is really where we play now. We'll add kind of whole cluster optimization capabilities across the full stack, and the way we'll get there is by continuing to add different data inputs that make sense across the different layers of the stack. And it's exciting, we can stay vertically oriented on the problems that we're really good at solving, but we can become more applicable and compatible over time. >> So that's your next concentric circle. As the observability vendors expand their observation space, you can just play right into that. >> Yeah. >> The more data you get the better, because you're purpose built to solving these types of problems. >> Yeah, so you can imagine a world right now out of observability, we're taking things like telemetry data pretty quickly. You can imagine a world where we take traces and logs and other data inputs as that ecosystem continues to grow. It just feeds our own, you know, we are reliant on data. >> Excellent. Matt, thank you so much. >> Appreciate you having me. >> Okay, keep it right there.
In a moment we're going to hear from a customer with a highly diverse and constantly changing environment that I mentioned earlier; they went through a major re-platforming with Kubernetes on AWS. You're watching theCUBE, your leader in enterprise tech coverage. [Music]

Published Date : Feb 8 2022


Matt Provo & Chandler Hoisington | CUBE Conversation, March 2022


 

(bright upbeat music) >> According to the latest survey from Enterprise Technology Research, container orchestration is the number one category as measured by customer spending momentum. It's ahead of AI/ML, it's ahead of cloud computing, and it's ahead of robotic process automation. All of which also show highly elevated levels of customer spending velocity. Now, we drill deeper into the survey of more than 1200 CIOs and IT buyers, and we find that a whopping 70% of respondents are spending more on Kubernetes initiatives in 2022 as compared to last year. The rise of Kubernetes came about through a series of improbable events that changed the way applications are developed, deployed and managed. Very early on Kubernetes committers chose to focus on simplicity and massive adoption rather than deep enterprise functionality. It's why initially virtually all activity around Kubernetes focused on stateless applications. That has changed. As Kubernetes adoption has gone mainstream, the need for stronger enterprise functionality has become much more pressing. You hear this constantly when you attend the various developer conferences, and the talk is all around, let's say, shift left to improve security and better cluster management, more complete automation capabilities, support for data-driven workloads and very importantly, vastly better application performance visibility and management. And that last topic is what we're here to talk about today. Hello, this is Dave Vellante, and welcome to this special CUBE conversation where we invite into our East Coast Studios Matt Provo, who's the founder and CEO of StormForge, and Chandler Hoisington, the general manager of EKS Edge and Hybrid at AWS. Gentlemen, welcome, it's good to see you. >> Thanks. >> Thanks for having us. >> So Chandler, you have this convergence, you've got application performance, you've got developer speed and velocity and you've got cloud economics all coming together. What's driving that convergence and why is it important for customers? >> Yeah, yeah, great question. I think it's important to kind of understand how we got here in the first place. I think Kubernetes solves a lot of problems for users, but the complexity of Kubernetes of just standing up a cluster to begin with is not always simple. And that's where services like EKS come in and where Amazon tried to solve that problem for users saying, "Hey the control plane, it's made up of 10, 15 different components, standing all these up, patching them, you know, handling the CVEs for it et cetera, et cetera, is a very complicated process, let me help you do that." And where EKS has been so successful and with EKS Anywhere which we launched last year, that's what we're helping customers do, a very similar thing in their own data centers. So we're kind of solving this problem of bringing the cluster online and helping customers launch their first application on it. But then what do you do once your application's there? That's the question. And so now you launched your application and does it have enough resources? Did you tune the right CPU? Did you tune the right amount of memory for it? All those questions need to be answered and that's where working with folks like StormForge come in. >> Well, it's interesting Matt because you're all about optimization and trying to maximize the efficiency which might mean people lower their AWS bill, but that's okay with Amazon, right? You guys have shown the cheaper it is, the more they buy, well. >> Yeah. 
And it's all about loyalty and developer experience. And so when you can help create or add to the developer experience itself, over time that loyalty's there. And so when we can come alongside EKS and services from Amazon, well, number one StormForge is built on Amazon, on AWS, and so it's a nice fit, but when we don't have to require developers to choose between things like cost and performance, but they can focus on, you know, innovation and connecting the applications that they're managing on Kubernetes as they operationalize them to the actual business objectives that they have, it's a pretty powerful combination. >> So your entry into the market was in pre-production. >> Yeah. >> You can kind of simulate what performance is going to look like and now you've announced Optimize Live. >> Yep. >> So that should allow you to turn the crank a little bit more. >> Yeah. >> Get a little bit more accurate and respond more quickly. >> Yeah. So we're the only ones that give you both views. And so we want to, you know, we want to provide a view in what we call kind of our experimentation side of our platform, which is pre-production, as well as an ongoing and continuous view which we kind of call our observation, the observation part of our solution, which is in production. And so for us, it's about providing that view, it's also about taking an increased number of data inputs into the platform itself so that our machine learning can learn from that and ultimately be able to automate the right kinds of tasks alongside the developers to meet their objectives. >> So, Chandler, in my intro I was talking about the spending velocity and how Kubernetes was at the top. But when we had other survey questions that ETR did, and this is post pandemic, it was interesting. We asked what's the most important initiative? And the two top ones were security, no surprise, and it popped up really after the pandemic hit and the lockdown, even more prominent, and cloud migration, >> Right. >> was number two. And so how are you working with StormForge to effect cloud migrations? Talk about that relationship. >> Yeah. I think it's, you know, different enterprises have different strategies on how they're going to get their workloads to the cloud. Some of 'em want to modernize in place in their data centers and then take those modernized applications and move them to the cloud, and that's where something like I mentioned earlier, EKS Anywhere comes into play really nicely because we can bring a consistent experience, a Kubernetes experience to your data center, you can modernize your applications and then you can bring those to EKS in the cloud. And as you're moving them back and forth you have a more consistent experience with Kubernetes. And luckily StormForge works on prem as well even in air gapped environments for StormForge. So, you know, that's, you can get your applications tuned correctly for your data center workloads, and then you're going to tune them differently when you move them to the cloud and you can get them tuned correctly there but StormForge can run consistently in both environments. >> Now, can you add some color as to how you optimize EKS? 
>> Yeah, so I think from an EKS standpoint, when you, again, when the number of parameters that you have to look at for your application inside of EKS and then the associated services that will go alongside that the packages that are coming in from a Kubernetes standpoint itself, and then you start to transition and operationalize where more and more of these are in production, they're, you know, connected to the business, we provide the ability to go beyond what developers typically do which is sort of take the, either the out of the box defaults or recommendations that ship with the services that they put into their application or any human's ability to kind of keep up with a couple parameters at a time. You know, with two parameters for the typical Kubernetes application, you might have about 100 different possible combinations that you could choose from. And sometimes humans can keep up with that, at least statically. And so for us, we want to blow that wide open. We want developers to be able to take advantage of the entire footprint or environment itself. And, you know, by using machine learning to help augment what the developers themselves are doing, not replacing them, augmenting them and having them be a part of that process. Now this whole new world of optimization opens up to them, which is pretty fantastic. And so how the actual workloads are configured, you know, on an ongoing basis and predictively based on upcoming business events, or even unknowns many times is a pretty powerful position to be in. >> I mean, you said not to replace developers. I mentioned robotic process automation in my intro, and of course in the early days, I was like, oh, it's going to replace my job. What's actually happened is it's replacing all the mundane tasks. >> Yeah. >> So you can actually do your job. >> Yeah. >> Right? We're all working 24/7, 365 these days, so to the extent that you can automate the things that I hate doing, >> Yeah. >> That's a huge win. So Chandler, how do people get started? You mentioned EKS Anywhere, are they starting on prem and then kind of moving into the cloud? If I'm a customer and I'm interested and I'm sort of at the beginning, where do I start? >> Yeah. Yeah. I mean, it really depends on your workload. Any workload that can run in the cloud should run in the cloud. I'm not just saying that because I work at Amazon but I truly think that that is the case. And I think customers think that as well. More and more customers are trying to move workloads to the cloud for that elasticity and all the benefits of using these huge platforms and, you know, hundreds of services that you can take advantage of in the cloud but some workloads just can't move to the cloud yet. You have workloads that have latency requirements like some gaming workloads, for example, where we don't have regions close enough to the consumers yet. So, you know, you want to put workloads in Turkey to service Egypt customers or something like this. You also have workloads that are, you know, on cruise ships and they lose connectivity in the middle of the Atlantic, or maybe you have highly secure workloads in air gapped environments or something like this. So there's still a lot of use cases that keep workloads on prem and sometimes customers just have existing investments in hardware that they don't want to eat yet, right? And they want to slowly phase those out as they move to the cloud. And again, that's where EKS Anywhere really plays well for the workloads that you want to keep on prem, but then as you move to the cloud you can take advantage of obviously EKS. 
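(To put a rough number on the configuration space Matt describes earlier in this exchange, even a coarse grid over just two knobs multiplies out quickly; the candidate values below are arbitrary.)

    from itertools import product

    # Arbitrary candidate settings a team might consider for a single container.
    cpu_requests = ["250m", "500m", "750m", "1", "1250m", "1500m", "2", "2500m", "3", "4"]
    mem_requests = ["256Mi", "512Mi", "768Mi", "1Gi", "1536Mi", "2Gi", "3Gi", "4Gi", "6Gi", "8Gi"]

    configs = list(product(cpu_requests, mem_requests))
    print(len(configs))  # 10 x 10 = 100 combinations from just two knobs

    # Add replica count and a JVM heap flag and the space keeps multiplying.
    replicas = [1, 2, 3, 4, 5]
    heap_flags = ["-Xmx512m", "-Xmx1g", "-Xmx2g", "-Xmx4g"]
    print(len(configs) * len(replicas) * len(heap_flags))  # 2,000 combinations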
And again, that's where EKS Anywhere really plays well for the workloads that you want to keep on prem, but then as you move to the cloud you can take advantage of obviously EKS. >> I'll put you in the spot. >> Sure. >> And don't hate me for doing this, but so Andy Jassy, Adam Selipsky, I've certainly heard Maylan Thompson Bukavek talk about this, and in fullness of time, all workloads will be in the cloud. >> Yeah. >> And I've said the cloud is expanding. We're going to bring the cloud to the edge. Edge is in your title. >> Yeah. >> Is that a correct interpretation and obvious it relates >> Absolutely. >> to Kubernetes. >> And you'll see that in Amazon strategy. I mean, without posts and wavelengths and local zones, like we're, at the end of the day, Amazon tries to satisfy customers. And if customers are saying, "Hey, I need workloads in San, I want to run a workload in San Francisco. And it's really important to me that it's close to those users, the end users that are in that area," we're going to help them do that at Amazon. And there's a variety of options now to do that. EKS Anywhere is actually only one piece of that kind of whole strategy. >> Yeah. I mean, here you have your best people working on the speed of light problem, but until that's solved, sure, sure. >> That's right. >> We'll give you the last word. >> How do you know about that? >> Yeah. Yeah. (all laughing) >> It's a top secret. Sorry. You heard it on the CUBE first. Matt, we'll give you the last word, bring us home. >> I, so I couldn't agree more. The, you know, the cloud is where workloads are going. Whether what I love is the ability to look at, you know, for the same enterprises, a lot of the ones we work with, want a, they want a public and a private view, public cloud, private cloud view. And they want that flexibility to, depending on the nature of the applications to be able to shift between from time to time where, you know, really decide. And I love EKS Anywhere. I think it's a fantastic addition to the, you know, to the ecosystem. And, you know, I think for us, we're about staying focused on the set of problems that we solve. No developer that I've ever met and probably neither of you have met, gets super excited about getting out of bed to manually tune their applications. And so what we find is that, you know, the time spent doing that, literally just is, there's like a one-to-one correlation. It means they're not innovating and they're not doing what they love to be doing. And so when we can come alongside that and automate away the manual task to your point, I think there are a lot of parallels to RPA in that case, it becomes actually a pretty empowering process for our users, so that they feel like they're, again, meeting the business objectives that they have, they get to innovate and yet, you know, they're exploring this whole new world around not having to choose between something like cost and performance for their applications. >> Well, and we're entering an entire new era of scale. >> Yeah. >> We've never seen before and human just are not going to be able to keep up with that. >> Yep. >> And that affect quality and speed and everything else. Guys, hey, thanks so much for coming in a great conversation. And thank you for watching this CUBE conversation. This is Dave Vellante, and we'll see you next time. (upbeat music)

Published Date : Mar 15 2022


Matt Provo | ** Do not make public **


 

(bright upbeat music) >> The adoption of container orchestration platforms is accelerating at a rate as fast or faster than any category in enterprise IT. Survey data from Enterprise Technology Research shows Kubernetes specifically leads the pack in both spending velocity and market share. Now like virtualization in its early days, containers bring many new performance and tuning challenges. In particular, ensuring consistent and predictable application performance is tricky especially because containers are so flexible and they enable portability; things are constantly changing. DevOps pros have to wade through a sea of observability data and tuning the environment becomes a continuous exercise of trial and error. This endless cycle taxes resources and kills operational efficiencies so teams often just capitulate and simply dial up and throw unnecessary resources at the problem. StormForge is a company founded in mid last decade that is attacking these issues with a combination of machine learning and data analysis. And with me to talk about a new offering that directly addresses these concerns is Matt Provo, founder and CEO of StormForge. Matt, welcome to theCUBE. Good to see you. >> Good to see you, thanks for having me. >> Yeah. So we saw you guys at KubeCon, sort of first introduced you to our community but add a little color to my intro if you will. >> Yeah, well you semi stole my thunder but I'm okay with that. Absolutely agree with everything you said in the intro. You know, the problem that we have set out to solve, which is tailor made for the use of real machine learning, not machine learning kind of as a marketing tag, is connected to how workloads on Kubernetes are really managed from a resource efficiency standpoint. And so a number of years ago we built the core machine learning engine and have now turned that into a platform around how Kubernetes resources are managed at scale. And so organizations today as they're moving more workloads over, sort of drink the Kool-Aid of the flexibility that comes with Kubernetes and how many knobs you can turn and developers in many ways love it. Once they start to operationalize the use of Kubernetes and move workloads from pre-production into production, they run into a pretty significant complexity wall. And this is where StormForge comes in to try to help them manage those resources more effectively, ensuring and implementing the right kind of automation that empowers developers in the process and ultimately does not automate them out of it. >> So you've got news, your hard launch coming in to further address these problems. Tell us about that. >> Yeah so historically, you know, like any machine learning engine, we think about data inputs and what kind of data is going to feed our system to be able to draw the appropriate insights out for the user. And so historically we are, we've kind of been single-threaded on load and performance tests in a pre-production environment. And there's been a lot of adoption of that, a lot of excitement around it and frankly, amazing results. My vision has been for us to be able to close the loop however between data coming out of pre-production and the associated optimizations and data coming out of production, a production environment, and our ability to optimize that. A lot of our users along the way have said these results in pre-production are fantastic. How do I know they reflect reality of what my application is going to experience in a production environment? 
And so we're super excited to announce kind of the second core module for our platform called Optimize Live. The data input for that is observability and telemetry data coming out of APM platforms and other data sources. >> So this is like Nirvana. So I wonder if we could talk a little bit more about the challenges that this addresses. I mean, I've been around a while and I really have observed, and I used to ask technology companies all the time, okay, so you're telling me beforehand what the optimal configuration should be and the resource allocation, what happens if something changes? And then it's always a pause. And Kubernetes is more of a rapidly changing environment than anything we've ever seen. So this is specifically the problem you're addressing. Maybe talk about that a little bit. >> Yeah so we view what happens in pre-production as sort of the experimentation phase and our machine learning is allowing the user to experiment and scenario plan. What we're doing with Optimize Live and adding the production piece is what we kind of also call kind of our observation phase. And so you need to be able to run the appropriate checks and balances between those two environments to ensure that what you're actually deploying and monitoring from an application performance, from a cost standpoint, is aligning with your SLOs and your SLAs as well as your business objectives. And so that's the entire point of this addition is to allow our users to experience hopefully the Nirvana associated with that because it's an exciting opportunity for them and really something that nobody else is doing from the standpoint of closing that loop. >> So you said upfront machine learning not as a marketing tag. So I want you to sort of double click on that. What's different than how other companies approach this problem? >> Yeah I mean, part of it is a bias for me and a frustration as a founder of the reason I started the company in the first place. I think machine learning or AI gets tagged to a lot of stuff. It's very buzzwordy, it looks good. I'm fortunate to have found a number of folks from the outset of the company with, you know, PhDs in Applied Mathematics and a focus on actually building real AI at the core that is connected to solving the right kind of actual business problems. And so, you know, for the first three or four years of the company's history, we really operated as a lab and that was our focus. We then decided we're trying to connect a fantastic team with differentiated technology to the right market timing. And when we saw all of these pain points around how fast the adoption of containers and Kubernetes have taken place but the pain that the developers are running into, we found it, we actually found for ourselves that this was the perfect use case. >> So how specifically does Optimize Live work? Can you add a little detail on that? >> Yeah so when you, many organizations today have an existing monitoring APM observability suite really in place. They've also got, they've also got a metric source, so this could be something like Datadog or Prometheus. And once that data starts flowing, there's an out of the box or kind of a piece of Kubernetes that ships with it called the VPA or the Vertical Pod Autoscaler. And less than really less than 1% of Kubernetes users take advantage of the VPA mostly because it's really challenging to configure and it's not super compatible with the tool set or the, you know, the ecosystem of tools in a Kubernetes environment. 
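(For reference, a VerticalPodAutoscaler is declared as its own Kubernetes object pointing at a workload. A minimal sketch, created here through the Kubernetes Python client; the namespace, names, and bounds are hypothetical, and the resource policy shown is only one of the knobs that make the VPA fiddly to configure.)

    from kubernetes import client, config  # third-party "kubernetes" package

    config.load_kube_config()  # or config.load_incluster_config() when running in a pod

    vpa = {
        "apiVersion": "autoscaling.k8s.io/v1",
        "kind": "VerticalPodAutoscaler",
        "metadata": {"name": "web-vpa", "namespace": "shop"},  # hypothetical names
        "spec": {
            "targetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "web"},
            "updatePolicy": {"updateMode": "Off"},  # "Off" = recommend only, never evict pods
            "resourcePolicy": {"containerPolicies": [{
                "containerName": "*",
                "minAllowed": {"cpu": "100m", "memory": "128Mi"},
                "maxAllowed": {"cpu": "2", "memory": "2Gi"},
            }]},
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="autoscaling.k8s.io", version="v1", namespace="shop",
        plural="verticalpodautoscalers", body=vpa,
    )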
And so our biggest competitor is the VPA. And what's happening in this environment or in this world for developers is they're having to make decisions on a number of different metrics or resource elements typically things like memory and CPU. And they have to decide what are the, what are the requests I'm going to allow for this application and what are the limits? So what are those thresholds that I'm going to be okay with? So that I can again try to hit my business objectives and keep in line with my SLAs. And to your earlier point in the intro, it's often guesswork. You know, they either have to rely on out of the box recommendations that ship with the databases and other services that they are using or it's a super manual process to go through and try to configure and tune this. And so with Optimize Live, we're making that one-click. And so we're continuously and consistently observing and watching the data that's flowing through these tools and we're serving back recommendations for the user. They can choose to let those recommendations automatically patch and deploy or they can retain some semblance of control over the recommendations and manually deploy them into their environment themselves. And we again, really believe that the user knows their application, they know the goals that they have, we don't. But we have a system that's smart enough to align with the business objectives and ultimately provide the relevant recommendations at that point. >> So the business objectives are an input from the application team and then your system is smart enough to adapt and adjust those. >> Application over application, right? And so the thresholds in any given organization across their different ecosystem of apps or environment could be different. The business objectives could be different. And so we don't want to predefine that for people. We want to give them the opportunity to build those thresholds in and then allow the machine learning to learn and to send recommendations within those bounds. >> And we're going to hear later from a customer who is hosting a Drupal, one of the largest Drupal host, is it? So it's all do it yourself across thousands of customers so it's very unpredictable. I want to make something clear though, as to where you fit in the ecosystem. You're not an observability platform, you leverage observability platforms, right? So talk about that and where you fit in into the ecosystem. >> Yeah so it's a great point. We, we're also you know, a series B startup and growing. We've made the choice to be very intentionally focused on the problems that we've solve and we've chosen to partner or integrate otherwise. And so we do get put into the APM category from time to time. We're really an intelligence platform. And that intelligence and insights that we're able to draw is because we, because of the core machine learning we've built over the years. And we also don't want organizations or users to have to switch from tools and investments that they've already made. And so we were never going to catch up to Datadog or Dynatrace or Splunk or AppDynamics or some of the other, and we're totally fine with that. They've got great market share and penetration and they do solve real problems. Instead, we felt like users would want a seamless integration into the tools they're already using. And so we view ourselves as kind of the Intel inside for that kind of a scenario. 
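(The one-click patching described above ultimately comes down to updating requests and limits on the target workload. A rough sketch of applying a recommendation with the Kubernetes Python client; the Deployment, namespace, and recommended values are all hypothetical.)

    from kubernetes import client, config  # third-party "kubernetes" package

    config.load_kube_config()

    # A recommendation as it might arrive from any optimization engine (illustrative values).
    rec = {"cpu_request": "300m", "cpu_limit": "600m", "mem_request": "384Mi", "mem_limit": "768Mi"}

    patch = {"spec": {"template": {"spec": {"containers": [{
        "name": "web",  # must match the container name inside the Deployment
        "resources": {
            "requests": {"cpu": rec["cpu_request"], "memory": rec["mem_request"]},
            "limits": {"cpu": rec["cpu_limit"], "memory": rec["mem_limit"]},
        },
    }]}}}}

    # Strategic-merge patch; Kubernetes rolls the Deployment to pick up the new resources.
    client.AppsV1Api().patch_namespaced_deployment(name="web", namespace="shop", body=patch)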
And it takes observability and APM data and insights that were somewhat reactive, they're visualized and somewhat reactive, and we add that proactive nature onto it, the insights and ultimately the appropriate level of automation. >> So when I think Matt about cloud native and I go back to the sort of origins of CNCF, it was a, you know, handful of companies, and now you look at the participants, you know, make your eyes bleed. How do you address dealing with all those companies and what's the partnership strategy? >> Yeah it's so interesting because it's just that even the CNCF landscape has exploded. It was not too long ago where it was as small or smaller than the FinOps landscape today, which by the way the FinOps piece is also on a neck-breaking, you know, growth curve. We, I do see although there are a lot of companies and a lot of tools, we're starting to see a significant amount of consistency or hardening of the tool chain with our customers and users. And so we've made strategic and intentional decisions on deep partnerships in some cases like OEM users of our technology and certainly, you know, intelligent and seamless integrations into a few. So, you know, we'll be announcing a really exciting partnership with AWS and specifically what they're doing with EKS, their Kubernetes distribution and services. We've got a deep partnership and integration with Datadog and then with Prometheus, and specifically a few other cloud providers that are operating managed Prometheus environments. >> Okay so where do you want to take this thing? If it's not, you're not taking the observability guys head on, smart move, so many of those even entering the market now, but what is the vision? >> Yeah so we've had this debate a lot as well because it's super difficult to create a category. You know, on one hand, I have a lot of respect for founders and companies that do that, on the other hand from a market timing standpoint, you know, we fit into AIOps. That's really where we fit. You know we are, we've made a bet on the future of Kubernetes and what that's going to look like. And so from a containers and Kubernetes standpoint that's our bet. But we're an AIOps platform, we'll continue getting better at what, at the problems we solve with machine learning and we'll continue adding data inputs so we'll go beyond the application layer which is really where we play now. We'll add kind of whole cluster optimization capabilities across the full stack. And the way we'll get there is by continuing to add different data inputs that make sense across the different layers of the stack and it's exciting. We can stay vertically oriented on the problems that we're really good at solving but we become more applicable and compatible over time. >> So that's your next concentric circle. As the observability vendors expand their observation space you can just play right into that. The more data you get the better, because you're purpose built for solving these types of problems. >> Yeah so you can imagine a world right now out of observability, we're taking things like telemetry data pretty quickly. You can imagine a world where we take traces and logs and other data inputs as that ecosystem continues to grow, it just feeds our own, you know, we are reliant on data. So. >> Excellent. Matt, thank you so much. Thanks for hopping on. >> Yeah, appreciate it. >> Okay. Keep it right there. 
In a moment, we're going to hear from a customer with a highly diverse and constantly changing environment that I mentioned earlier; they went through a major re-platforming with Kubernetes on AWS. You're watching theCUBE, your leader in enterprise tech coverage. (bright music)

Published Date : Jan 27 2022
