Matt Provo, StormForge
(bright upbeat music) >> The adoption of container orchestration platforms is accelerating at a rate as fast or faster than any category in enterprise IT. Survey data from Enterprise Technology Research shows Kubernetes specifically leads the pack in both spending velocity and market share. Now like virtualization in its early days, containers bring many new performance and tuning challenges. In particular, ensuring consistent and predictable application performance is tricky, especially because containers are so flexible and enable portability; things are constantly changing. DevOps pros have to wade through a sea of observability data, and tuning the environment becomes a continuous exercise of trial and error. This endless cycle taxes resources and kills operational efficiency, so teams often just capitulate and simply dial up and throw unnecessary resources at the problem. StormForge is a company founded in the middle of the last decade that is attacking these issues with a combination of machine learning and data analysis. And with me to talk about a new offering that directly addresses these concerns is Matt Provo, founder and CEO of StormForge. Matt, welcome to theCUBE. Good to see you. >> Good to see you. Thanks for having me. >> Yeah, so we saw you guys at KubeCon, which sort of first introduced you to our community, but add a little color to my intro there if you will. >> Yeah, well, you semi stole my thunder, but I'm okay with that. Absolutely agree with everything you said in the intro. You know, the problem that we have set out to solve, which is tailor made for the use of real machine learning, not machine learning kind of as a marketing tag, is connected to how workloads on Kubernetes are really managed from a resource efficiency standpoint. And so a number of years ago, we built the core machine learning engine and have now turned that into a platform around how Kubernetes resources are managed at scale. Organizations today, as they're moving more workloads over, have sort of drunk the Kool-Aid of the flexibility that comes with Kubernetes and how many knobs you can turn. And developers in many ways love it. Once they start to operationalize the use of Kubernetes and move workloads from pre-production into production, they run into a pretty significant complexity wall. And this is where StormForge comes in, to try to help them manage those resources more effectively, ensuring and implementing the right kind of automation that empowers developers in the process and ultimately does not automate them out of it. >> So you've got news. You have a launch coming to further address these problems. Tell us about that. >> Yeah, so historically, you know, like any machine learning engine, we think about data inputs and what kind of data is going to feed our system to be able to draw the appropriate insights out for the user. And so historically we've kind of been single-threaded on load and performance tests in a pre-production environment. And there's been a lot of adoption of that, a lot of excitement around it and, frankly, amazing results. My vision has been for us to be able to close the loop, however, between data coming out of pre-production and the associated optimizations, and data coming out of a production environment and our ability to optimize that. A lot of our users along the way have said these results in pre-production are fantastic. How do I know they reflect the reality of what my application is going to experience in a production environment? 
And so we're super excited to announce kind of the second core module for our platform, called Optimize Live. The data input for that is observability and telemetry data coming out of APM platforms and other data sources. >> So this is like Nirvana. So I wonder if we could talk a little bit more about the challenges that this addresses. I mean, I've been around a while and I've really observed this. I used to ask, you know, technology companies all the time: okay, so you're telling me beforehand what the optimal configuration and resource allocation should be. What happens if something changes? >> Yeah. >> And then there's always, always a pause. >> Yeah. >> And Kubernetes is more of a rapidly changing environment than anything we've ever seen. So that's specifically the problem you're addressing. Maybe talk about that a little bit. >> Yeah, so we view what happens in pre-production as sort of the experimentation phase, and our machine learning is allowing the user to experiment and scenario plan. What we're doing with Optimize Live and adding the production piece is what we kind of also call our observation phase. And so you need to be able to run the appropriate checks and balances between those two environments to ensure that what you're actually deploying and monitoring, from an application performance and a cost standpoint, is aligning with your SLOs and your SLAs as well as your business objectives. And so the entire point of this addition is to allow our users to experience hopefully the Nirvana associated with that, because it's an exciting opportunity for them and really something that nobody else is doing from the standpoint of closing that loop. >> So you said upfront, machine learning not as a marketing tag. So I want you to sort of double click on that. What's different about how other companies approach this problem? >> Yeah, I mean, part of it is a bias for me and a frustration as a founder, the reason I started the company in the first place. I think machine learning or AI gets tagged to a lot of stuff. It's very buzzwordy. It looks good. I'm fortunate to have found a number of folks from the outset of the company with, you know, PhDs in Applied Mathematics and a focus on actually building real AI at the core that is connected to solving the right kind of actual business problems. And so, you know, for the first three or four years of the company's history, we really operated as a lab. And that was our focus. We then decided we're trying to connect a fantastic team with differentiated technology to the right market timing. And when we saw all these pain points around how fast the adoption of containers and Kubernetes has taken place, but the pain that the developers are running into, we actually found for ourselves that this was the perfect use case. >> So how specifically does Optimize Live work? Can you add a little detail on that? >> Yes, so many organizations today have an existing monitoring, APM, or observability suite already in place. They've also got a metric source. So this could be something like Datadog or Prometheus. And once that data starts flowing, there's an out-of-the-box piece of Kubernetes that ships with it called the VPA, or the Vertical Pod Autoscaler. And really less than 1% of Kubernetes users take advantage of the VPA, mostly because it's really challenging to configure and it's not super compatible with the tool set or, you know, the ecosystem of tools in a Kubernetes environment. And so our biggest competitor is the VPA. 
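As a rough sketch of the kind of telemetry Matt is describing here, the snippet below pulls per-container CPU and memory usage from Prometheus's HTTP API, the sort of metric source a rightsizing tool (or the VPA) feeds on. The endpoint and metric names are standard Prometheus and cAdvisor conventions; the server address and query window are illustrative assumptions, and this is not StormForge's actual integration.

```python
# Sketch: sample the telemetry a rightsizing tool might consume from Prometheus.
# Assumes a Prometheus server reachable at PROM_URL; the metric names are the
# standard cAdvisor/kubelet series scraped in most Kubernetes clusters.
import requests

PROM_URL = "http://prometheus.example.com:9090"  # hypothetical address

QUERIES = {
    # 5-minute average CPU usage, in cores, per container
    "cpu_cores": 'sum by (namespace, pod, container) '
                 '(rate(container_cpu_usage_seconds_total{container!=""}[5m]))',
    # Current working-set memory, in bytes, per container
    "memory_bytes": 'sum by (namespace, pod, container) '
                    '(container_memory_working_set_bytes{container!=""})',
}

def query(promql: str) -> list:
    """Run an instant PromQL query and return the result vector."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": promql})
    resp.raise_for_status()
    return resp.json()["data"]["result"]

if __name__ == "__main__":
    for name, promql in QUERIES.items():
        for sample in query(promql):
            labels = sample["metric"]
            _, value = sample["value"]
            print(f'{name}: {labels.get("namespace")}/{labels.get("pod")}/'
                  f'{labels.get("container")} = {value}')
```

Usage data like this is exactly the raw material behind the requests-and-limits decisions Matt turns to next.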
And what's happening in this environment, or in this world, for developers is they're having to make decisions on a number of different metrics or resource elements, typically things like memory and CPU. And they have to decide: what are the requests I'm going to allow for this application, and what are the limits? What are those thresholds that I'm going to be okay with so that I can, again, try to hit my business objectives and keep in line with my SLAs? And to your earlier point in the intro, it's often guesswork. You know, they either have to rely on out-of-the-box recommendations that ship with the databases and other services they are using, or it's a super manual process to go through and try to configure and tune this. And so with Optimize Live, we're making that one click. We're continuously and consistently observing and watching the data that's flowing through these tools, and we're serving back recommendations for the user. They can choose to let those recommendations automatically patch and deploy, or they can retain some semblance of control over the recommendations and manually deploy them into their environment themselves. And we, again, really believe that the user knows their application. They know the goals that they have, and we don't. But we have a system that's smart enough to align with the business objectives and ultimately provide the relevant recommendations at that point. >> So the business objectives are an input from the application team? >> Yep. >> And then your system is smart enough to adapt and address those. >> Application over application, right? And so the thresholds in any given organization, across their different ecosystem of apps or environments, could be different. The business objectives could be different. And so we don't want to predefine that for people. We want to give them the opportunity to build those thresholds in and then allow the machine learning to learn and to send recommendations within those bounds. >> And we're going to hear later from a customer who is one of the largest Drupal hosts. So it's all do-it-yourself across thousands of customers, so it's, you know, very unpredictable. I want to make something clear, though, as to where you fit in the ecosystem. You're not an observability platform, you leverage observability platforms, right? So talk about that and where you fit into the ecosystem. >> Yeah, so it's a great point. We're also, you know, a series B startup and growing. We've made the choice to be very intentionally focused on the problems that we solve, and we've chosen to partner or integrate otherwise. And so we do get put into the APM category from time to time. We are really an intelligence platform. And the intelligence and insights that we're able to draw are because of the core machine learning we've built over the years. And we also don't want organizations or users to have to switch from tools and investments that they've already made. And so we were never going to catch up to Datadog or Dynatrace or Splunk or AppDynamics or some of the others. And we're totally fine with that. They've got great market share and penetration. They do solve real problems. Instead, we felt like users would want a seamless integration into the tools they're already using. And so we view ourselves as kind of the Intel inside for that kind of scenario. And it takes observability and APM data and insights that have historically been visualized and somewhat reactive. 
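The "one-click" flow Matt describes earlier in this answer, taking a rightsizing recommendation and applying it to a workload's requests and limits, might look roughly like the sketch below using the official Kubernetes Python client. The deployment name, namespace, container name, and resource values are hypothetical, and this is an illustration of the general technique, not StormForge's actual mechanism.

```python
# Sketch: apply a (hypothetical) rightsizing recommendation to a Deployment's
# resource requests and limits with the official Kubernetes Python client.
from kubernetes import client, config

# A recommendation of the kind a tool like Optimize Live might surface;
# the workload name, namespace, and values are made up for illustration.
recommendation = {
    "namespace": "web",
    "deployment": "checkout",
    "container": "app",
    "requests": {"cpu": "250m", "memory": "512Mi"},
    "limits": {"cpu": "500m", "memory": "1Gi"},
}

def apply_recommendation(rec: dict) -> None:
    """Patch the target container's resources; a strategic merge patch keyed
    by container name leaves the rest of the pod spec untouched."""
    config.load_kube_config()  # or load_incluster_config() inside a cluster
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{
                        "name": rec["container"],
                        "resources": {
                            "requests": rec["requests"],
                            "limits": rec["limits"],
                        },
                    }]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(
        name=rec["deployment"], namespace=rec["namespace"], body=patch
    )

if __name__ == "__main__":
    apply_recommendation(recommendation)
```

Whether a patch like this is applied automatically or held for manual review is the control knob Matt says is left to the user.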
And we add that proactive nature onto it: the insights and ultimately the appropriate level of automation. >> So when I think, Matt, about cloud native and I go back to the sort of origins of the CNCF, it was a, you know, handful of companies. And now you look at the participants, it'll, you know, make your eyes bleed. How do you address dealing with all those companies, and what is the partnership strategy? >> Yeah, it's so interesting, because even that CNCF landscape has exploded. It was not too long ago that it was as small as or smaller than the FinOps landscape today, which, by the way, is also on a neck-breaking, you know, growth curve. Although there are a lot of companies and a lot of tools, we're starting to see a significant amount of consistency, or hardening of the tool chain, you know, with our customers and users. And so we've made strategic and intentional decisions on deep partnerships, in some cases like OEM uses of our technology, and certainly, you know, intelligent and seamless integrations into a few. So, you know, we'll be announcing a really exciting partnership with AWS, specifically around what they're doing with EKS, their Kubernetes distribution and services. We've got a deep partnership and integration with Datadog, and then with Prometheus, and specifically a few other cloud providers that are operating managed Prometheus environments. >> Okay, so where do you want to take this thing? You're not taking the observability guys head on, smart move. So many of those are even entering the market now. But what is the vision? >> Yeah, so we've had this debate a lot as well, 'cause it's super difficult to create a category. You know, on one hand, I have a lot of respect for founders and companies that do that. On the other hand, from a market timing standpoint, you know, we fit into AIOps; that's really where we fit. You know, we've made a bet on the future of Kubernetes and what that's going to look like. And so from a containers and Kubernetes standpoint, that's our bet. But we're an AIOps platform. You know, we'll continue getting better at the problems we solve with machine learning, and we'll continue adding data inputs. So we'll go, you know, beyond the application layer, which is really where we play now. We'll add, you know, kind of whole-cluster optimization capabilities across the full stack. And the way we will get there is by continuing to add different data inputs that make sense across the different layers of the stack. And it's exciting. We can stay vertically oriented on the problems that we're really good at solving, but we can become more applicable and compatible over time. >> So that's your next concentric circle. As the observability vendors expand their observation space, you can just play right into that. >> Yeah. >> The more data you get, because you're purpose-built for solving these types of problems. >> Yeah, so you can imagine a world right now where, out of observability, we're taking things like telemetry data pretty quickly. You can imagine a world where we take traces and logs and other data inputs as that ecosystem continues to grow. It just feeds our own, you know, we are reliant on data. >> Excellent, Matt, thank you so much. >> Thanks for having me. >> Appreciate you coming on. Okay, keep it right there. In a moment, we're going to hear from a customer with the highly diverse and constantly changing environment that I mentioned earlier. They went through a major replatforming with Kubernetes on AWS. 
You're watching theCUBE, your leader in enterprise tech coverage. (bright upbeat music)