Webb Brown | KubeCon + CloudNativeCon NA 2021


 

>> Welcome back to theCUBE's coverage of KubeCon + CloudNativeCon 21, live from Los Angeles. Lisa Martin, with Dave Nicholson. And we've got a CUBE alum back with us. Webb Brown is back, the co-founder and CEO of Kubecost. Welcome back! >> Thank you so much. It's great to be back. It's been right at two years, and a lot's happening in our community and ecosystem, as well as with our open source project and company. So, awesome with that. >> Give the audience an overview in case they're not familiar with Kubecost, and then talk to us about this explosive growth that you've seen since we last saw you in person. >> Yeah, absolutely. So Kubecost provides cost management solutions purpose-built for teams running Kubernetes and Cloud Native, right? So everything we do is built on open source. All of our products can be installed in minutes. We give teams visibility into spend, then help them optimize it and govern it over time. It's been a busy two years since we last talked. We have grown the team about, you know, 5x, so right around 20 people today. We now have thousands of mostly medium and large sized enterprises using the product. You know, that's north of 10x growth since we launched just before, you know, KubeCon San Diego, now managing billions of dollars of spend, and, you know, I feel like we're just getting started. So it's an incredibly exciting time for us as a company, and also just great to be back in person with our friends in the community. >> This community is such a strong community, and it's great to see people back here. I agree. >> Absolutely, absolutely. >> So Kubecost, obviously you talk about cost optimization, but really you're an insight engine, in the sense that if you're looking at costs, you have to measure that against what you're getting for that cost. >> Absolutely. So what are some of the insights that your platform or your tool set offers? >> Yeah, absolutely. So, you know, we think about our product as, first and foremost, visibility and monitoring, then insights and optimization, and then governance. You know, if you talk to most teams today, they're still kind of getting that visibility, but once you do, it quickly leads into how do we optimize? And then we're going to give you insights at every part of the stack, right? So at the infrastructure layer, thinking about things like Spot and RIs and savings plans, et cetera. At the Kubernetes orchestration layer, thinking about things like autoscaling and, you know, setting requests and limits, et cetera, all the way up to the application layer, with all of that being purpose-built for, you know, Cloud Native and Kubernetes. So the way we work is you deploy our product in your environment; anywhere you're running Kubernetes 1.11 or above, we'll run. And we're going to start dynamically generating these insights in minutes, and they're real time. And again, they scale to the largest Kubernetes clusters in the world. >> And you said you've had a thousand or so customers in the medium to large enterprise. These are large organizations, probably brand names I imagine we are familiar with, that are leaning on Kubecost to help get that visibility that before they did not have the ability to get. >> Absolutely, absolutely. So of our thousands of users, it definitely skews heavily towards, you know, medium and large sized enterprises. Working with some amazing companies like Adobe, who, you know, just have such high scale and complex and sophisticated infrastructure.
So, you know, I think this is very natural and what we expect, which is, as you start spending more resources, you know, missing visibility and having unoptimized infrastructure starts to be more costly. >> Absolutely. >> And we typically see that once that gets into, like, multiple head count, right, it may make sense to spend some time optimizing and monitoring and, you know, putting the learnings in place, so you can manage it more effectively as time goes on. >> Do you have any metrics or any X-factor ranges of the costs that you've actually saved customers? >> Yeah. I mean, we've saved multiple customers many millions of dollars at this point. >> So we're talking big. >> Really big. So yeah, we're now managing more than $2 billion of spend. So some really big savings on a per-customer basis, but it's really common that we're saving, you know, north of 30%, sometimes up to 70%, on your Kubernetes and related spend. And so we're giving you insights into your Kubernetes cluster, and again, the full stack there, but also giving you visibility and insights into external things like external disk or cloud storage buckets or, you know, Cloud SQL, that sort of stuff, external cloud services. >> Taking those blinders off. >> Exactly. And giving you that unified, you know, real time picture, again, that accurately reflects everything that's going on in your system. >> So when these insights are produced or revealed, are the responses automated, or are they then manually applied? >> Yeah. That's a great question. We support both, and we support both in different ways. By default, when you deploy Kubecost, and again, today it's a Helm install, it can be running in your cluster in, you know, minutes or less, it's deployed in read-only mode. And by the way, you don't share any data externally; it's all in your local environment. So we start generating these insights, you know, right when you install in your environment. >> Let me ask you about, I'm sorry to interrupt, but when you say you're generating an insight, are you just giving an answer and guidance, or are you providing the reader background on what leads to that insight? >> Yeah. You know, is that a philosophical question of, do you need to provide the user rationale for the insight? >> Yeah, absolutely. And I think we're doing this today and we'll do more, but one example is, you know, if you just look at this notion of setting requests and limits for your applications in Kubernetes: in simple forms, if you set a request too high, you're potentially wasting money, because the Kubernetes scheduler is reserving that resource for you. If you set it too low, you're at risk of being CPU throttled, right? So communicating that symbiotic relationship and the risk on either side really helps the team understand why do I need to strike this balance, right? It's not just cost, it's performance and reliability as well. So absolutely, giving that background. And again, out of the box we're read-only, but we also have automation in our product with our cluster controller. So you can dynamically do things like right-size your infrastructure or, you know, move workloads to Spot, et cetera. But we also have integrations with a bunch of tooling in this ecosystem. So like Prometheus native, you know, Alertmanager native, and we just launched an integration with Spinnaker and Armory where you can dynamically, at the time of deployment, you know, right-size and have insights. So you can expect to see more from us there. But we very much think about automation as twofold. One, you know, building trust in Kubecost and our insights and adopting them over time. But then two is meeting you where you are with your existing tooling, whether it's your CI/CD pipeline, observability, or, you know, existing kind of workflow automation system.
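As a concrete illustration of the request-and-limit trade-off Webb describes above, here is a minimal Kubernetes pod spec. It is a sketch only; the workload name, image, and the specific CPU and memory values are hypothetical, not Kubecost recommendations.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                     # hypothetical workload name
spec:
  containers:
    - name: api
      image: example.com/api:latest     # placeholder image
      resources:
        requests:                       # what the scheduler reserves on a node for this container
          cpu: "250m"
          memory: "256Mi"
        limits:                         # the ceiling; exceeding it means throttling (CPU) or an OOM kill (memory)
          cpu: "500m"
          memory: "512Mi"
```

Setting the request close to the container's typical usage keeps the scheduler from reserving capacity that then sits idle, while the limit is the ceiling at which the container gets CPU-throttled or, for memory, OOM-killed.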
>> Meeting customers where they are is critical these days. >> Absolutely. I think, especially in this market, right, where we have the potential to have so much interoperability and all these things working in harmony. And also, you know, there's a lot of booths back here, right? So we have complex tech stacks, and, you know, in certain cases we feel like when we bring you to our UI or APIs or, you know, automation or CLIs, we can do things more effectively. But oftentimes when we bring that data to you, we can be more effective. Again, that's bringing your data to Chronosphere or Prometheus or Grafana, you know, all of the tooling that you're already using on a daily, regular basis.
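A sketch of what that can look like on the observability side: an existing Prometheus server scraping cost metrics from a Kubecost install. The service name, namespace, and port in the target below are assumptions for illustration, so check Kubecost's current documentation for the real endpoint.

```yaml
# prometheus.yml fragment (sketch): scrape Kubecost's cost metrics with an
# existing Prometheus server. The target is an assumed service address, not a
# documented value; substitute whatever your install actually exposes.
scrape_configs:
  - job_name: kubecost
    scrape_interval: 60s
    static_configs:
      - targets:
          - kubecost-cost-analyzer.kubecost.svc:9003
```

Once scraped, the same metrics can feed Grafana dashboards or Alertmanager rules alongside the rest of a team's monitoring.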
>> Bringing that data into the tools they already use is just another example of how organizations can actually harness the value in their data and unlock it. >> Webb: Yeah. >> There's so much potential there for them to be more competitive, for them to be able to develop products and services faster. >> Absolutely. Yeah, I think you're just seeing the coming of age with, you know, cost metrics coming into that equation. We now live in a world with Kubernetes as this amazing innovation platform, where as an engineer I can go spin up some pretty costly resources really fast, and that's a great thing for innovation, right? But it also kind of pushes some of the accountability or awareness down to the individual IC, who needs to be aware of, you know, what things generally cost, at a minimum in a directional way, so they can make informed decisions, again, when they think about this cost, performance, reliability trade-off. >> Lisa: Where are your customer conversations? Are your target users DevOps folks? I was just wondering where finance might be in this whole game. >> Yeah, it's a great question. Given the fact that we are kind of open source first and started with open source, you know, 95% of the time when we start working with an infrastructure engineering team or DevOps team, they've already installed our product. They're already familiar with what we're doing. But then increasingly, and increasingly fast, you know, finance is being brought into the equation, and, you know, management is being brought into the equation. And I think it's a function of what we were talking about, where, you know, 70% of teams grew their Kubernetes spend over the last year, and, you know, 20% of them more than doubled it. So, you know, these are starting to be real expense items where finance is increasingly aware of what's going on. So yeah, they're coming into the picture, but it's typically still starting with, and working with, the infrastructure team that's actually putting some of these insights into action, or hooking us into their pipelines or something. >> When you think of developers going out and grabbing resources, and you think of an insight tool that looks at controlling cost, that could seem like an inhibitor. But really, if you're talking about how to efficiently use whatever resources you have, or have access to in terms of dollars, you could sell this to the developers on that basis. It's like, look, you have these 10 things that you want to be able to do. If you don't optimize using a tool like this, you're only going to be able to do 4 of them. >> Without a doubt. Yeah. And you know, us as our founding team, all engineers, you know, we were the ones getting those questions of, you know, how have we already spent our budget on just this project? We have these three others we want to do, right? Or why are costs going up as quickly as they are? You know, what are we spending on this application? Instead of that being a manual lift, like, let me go do a bunch of analysis and come back with answers, it's tools where not only can management answer those questions themselves, but engineering teams can make informed opportunity cost and optimization decisions themselves, whether it's tooling and automation doing it for them, or them applying things, you know, directly. >> Lisa: So a lot of growth. You talked about the growth in employees, the growth in revenue. What lies ahead for Kubecost? What are some of the things that are coming on the horizon that you're really excited about? >> Yeah, we very much feel like we're just getting started, you know, just like we feel this ecosystem and community is, right? Like there's been tons of progress all around, but, wow, it's still early days. So, you know, we did raise, you know, five and a half million dollars from First Round, who is an amazing group to work with, at the end of last year. So by growing the engineering team, we're able to do a lot more. We've got a bunch of really big things coming across all parts of our product. One thing we're really excited about that's in limited availability right now is our first hosted solution. It's our first SaaS solution. And this is critically important to us, in that we want to give teams the option: if you want to own and control your data and never egress anything outside of your cluster, you can do that with our deployed product. You can do that with our open source. You can truly lock down the namespace to egress and never send a byte out. Or, if you'd like the convenience of us managing it for you and being kind of stewards of your data, we're going to offer, you know, a great offering there too. So that's in limited availability today. We're going to have a lot more announcements coming there, but we see those being at feature parity, you know, between our enterprise offerings and our hosted solution. And just, you know, a lot more coming with, you know, visibility, some more like GPU insights, you know, metrics coming quickly, a lot more with automation coming, and then more integrations for governance, again, kind of like we talked about with Spinnaker and things like that. A lot more really interesting ones coming.
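Webb's point about locking a namespace down so it never egresses data maps to a standard Kubernetes control. The sketch below is a generic deny-all-egress NetworkPolicy, not Kubecost's own configuration; the namespace name is hypothetical, and a real deployment would still need egress rules for in-cluster traffic such as DNS and the metrics endpoints it scrapes.

```yaml
# Sketch: deny all outbound traffic from pods in a namespace. In practice you
# would add egress rules allowing in-cluster destinations (DNS, kubelets,
# service endpoints) so the workload can still reach what it needs.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: kubecost          # hypothetical namespace
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
    - Egress
  # no egress rules listed, so all egress is denied by this policy
```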
>> So five and a half million raised in the last round of funding. Where are you going to be applying that? What are some of the growth engines that you want to tune with that money? >> Yeah, so, you know, first and foremost, it was really growing the engineering team, right? So we've, you know, 4x'd the engineering team in the last year, and just have an amazing group of engineers. We want to continue to do that. >> Webb: We're kind of super early on the, like, you know, marketing and sales side. We're going to start thinking about that more and more. You know, our approach first off was, we want to solve a really valuable problem and do it in a way that is super compelling. And we think that when you do that, you know, good things happen. I think that's some of our Google background, which is like, you build a great search engine and, you know, good things generally happen. So we're just super focused on, again, working with great users, you know, building great products that meet them where they are and solve problems that are really important to them. >> Lisa: Awesome. Well, congratulations on all the trajectory of success since we last saw you in person. >> Thank you. >> Great to have you back on the show. Looking forward to it. So folks can go to www.kubecost.com to learn more and see some of those announcements coming down the pike. >> Absolutely, yeah. >> Don't you make it two years before you come back. >> Webb: I would love to be back. I hope we're back bigger than ever, you know, next year. But it has been such a pleasure, you know, last time and this time. Thank you so much for having me. You know, I love being part of the show and the community at large. >> It's a great community, and we appreciate you sharing all your insights. >> Thank you so much. >> All right. For Dave Nicholson, I'm Lisa Martin, coming to you live from Los Angeles. This is theCUBE's coverage of KubeCon and CloudNativeCon 21. We'll be back with our next guest shortly. We'll see you there.

Published Date : Oct 15 2021



Steven Huels | KubeCon + CloudNativeCon NA 2021


 

(upbeat soft intro music) >> Hey everyone. Welcome back to theCube's live coverage from Los Angeles of KubeCon and CloudNativeCon 2021. Lisa Martin with Dave Nicholson. Dave and I are pleased to welcome our next guest remotely. Steven Huels joins us, the senior director of Cloud Services at Red Hat. Steven, welcome to the program. >> Steven: Thanks, Lisa. Good to be here with you and Dave. >> Talk to me about where you're seeing traction from an AI/ML perspective. Like, where are you seeing that traction? What are you seeing? >> It's a great starter question here, right? AI/ML is really being employed everywhere, right? Regardless of industry. So financial services, telco, governments, manufacturing, retail. Everyone at this point is finding a use for AI/ML. They're looking for ways to better take advantage of the data that they've been collecting for all these years. It really wasn't all that long ago when we were talking to customers about Kubernetes and containers that, you know, AI/ML really wasn't a core topic where they were looking to use a Kubernetes platform to address those types of workloads. But in the last couple of years, that's really skyrocketed. We're seeing a lot of interest from existing customers that are using Red Hat OpenShift, which is a Kubernetes based platform, to take those AI/ML workloads from what they've been doing traditionally, for experimentation, and really get them into production and start getting value out of them at the end of it. >> Is there a common theme? You mentioned a number of different verticals, telco, healthcare, financial services. Is there a common theme that you're seeing among these organizations across verticals? >> There is. I mean, everyone has their own approach, like the type of technique that they're going to get the most value out of. But the common theme is really that everyone seems to have a really good handle on experimentation. They have a lot of very bright data scientists, model developers, that are able to take their data and get value out of it. But where they're all looking to get our help, or looking for help, is to put those models into production. So MLOps, right? So how do I take what's been built on somebody's machine and put that into production in a repeatable way? And then once it's in production, how do I monitor it? What am I looking for as triggers to indicate that I need to retrain? And how do I iterate on this sequentially and rapidly, applying what would really be traditional DevOps software development life cycle methodologies to ML and AI models? >> So Steve, we're joining you from KubeCon live at the moment. What's the connection with Kubernetes, and how does Kubernetes enable machine learning and artificial intelligence? How does it enable it, and what are some of the special considerations to keep in mind? >> So the immediate connection for Red Hat is that Red Hat's OpenShift is basically an enterprise grade Kubernetes. And so the connection there is really how we're working with customers, and how customers in general are looking to take advantage of all the benefits that you can get from the Kubernetes platform that they've been applying to their traditional software development over the years, right? The agility, the ability to scale up on demand, the ability to have shared resources, to make specialized hardware available to the individual communities. And they want to start applying those foundational elements to their AI/ML practices.
A lot of data science work traditionally was done with high powered, monolithic machines and systems. They weren't necessarily shared across development communities. So connecting something that was built by a data scientist to something that then a software developer was going to put into production was challenging. There wasn't a lot of repeatability in there. There wasn't a lot of scalability. There wasn't a lot of auditability. And these are all things that we know we need when talking about analytics and AI/ML. There's a lot of scrutiny put on the auditability of what you put into production, something that's making decisions that impact whether or not somebody gets a loan, or whether or not somebody is granted access to systems, or decisions that are made. And so the connection there is really around taking advantage of what has proven itself in Kubernetes to be a very effective development model, and applying that to AI/ML and getting the benefits in being able to put these things into production. >> Dave: So Red Hat has been involved in enterprises for a long time. Are you seeing most of this, from a Kubernetes perspective, being net new application environments, or are these extensions of what we would call legacy or traditional environments?
They tend to be net new, I guess. You know, it's sort of transitioned a little bit over time. When we first started talking to customers, there was a desire to try to do all of this in a single Kubernetes cluster, right? How can I take the same environment that had been doing our software development, beef it up a little bit, and have it apply to our data science environment? And over time, Kubernetes advanced, right? So now you can actually add labels to different nodes and target workloads based on specialized machinery and hardware accelerators. And so that has shifted now toward coming up with specialized data science environments, but still connecting the clusters, in that something that's being built in that data science environment is essentially being deployed, through a model pipeline, into a software artifact that then makes its way into an application that goes live. And really, I think that that's sensible, right? Because we're constantly seeing a lot of evolution in the types of accelerators, the types of frameworks, the types of libraries that are being made available to data scientists. And so you want the ability to extend your data science cluster to take advantage of those things and to give data scientists access to those specialized environments, so they can try things out, determine if there's a better way to do what they're doing, and then, when they find out there is, be able to rapidly roll that into your production environment. >> You mentioned the word acceleration, and that's one of the words that we talk about when we talk about 2020, and even 2021: the acceleration in digital transformation that was necessary really a year and a half ago for companies to survive, and now to be able to pivot and thrive. What are you seeing in terms of customers' appetites for adopting AI/ML based solutions? Has it accelerated as the pandemic has accelerated digital transformation? >> It's definitely accelerated. And I think, you know, the pandemic probably put more of a focus for businesses on where can they start to drive more value? How can they start to do more with less? And when you look at systems that are used for customer interactions, whether they're deflecting customer cases or providing next best action type recommendations, AI/ML fits the bill there perfectly. So when they were looking to optimize, hey, where do we put our spend? What can help us accelerate and grow, even in this virtual world we're living in? AI/ML really floated to the top there. That's definitely a theme that we've seen. >> Lisa: Is there a customer example that you think you could mention that really articulates that value? >> You know, we've published one specifically around HCA Healthcare, and this had started actually before the pandemic, but I think it's especially applicable because of the nature of what a pandemic is, where HCA was using AI/ML to essentially accelerate diagnosis of sepsis, right? They were using it for disease diagnoses. That same type of diagnosis was being applied to looking at COVID cases as well. And there was one that we did in Canada, it's called 'How's your flattening', which was basically being able to track and do some predictions around COVID cases in the Canadian provinces. And so that one's particularly, I guess, kind of close to home, given the nature of the pandemic. But even within Red Hat, we started applying a lot more attention to how we could help with customer support cases, right? Knowing that if folks were going to be out with any type of illness, we needed to be able to handle that case workload without negatively impacting work-life balance for other associates. So we looked at how can we apply AI/ML to help, you know, maintain and increase the quality of customer service we were providing. >> It's a great use case. Did you have a keynote or a session here at KubeCon + CloudNativeCon? >> I did. I did. And it really focused specifically on that whole MLOps and model ops pipeline. It was called, evolving Kubernetes and embracing model ops. It was for the Kubernetes AI day. I believe it aired on Wednesday of this week. Tuesday, maybe. It all kind of condenses in the virtual world. >> Doesn't it? It does. >> So one of the questions that Lisa and I have, for folks where we sit here, I don't know, is it year seven or so of the dawn of Kubernetes, if I have that right. Where do you think we are in this wave of adoption? Coming from a Red Hat perspective, you have insight into what's been going on in enterprises for the last 20 plus years. Where are we in this wave? >> That's a great question. Every time, it's sort of that cresting wave sort of analogy, right? That when you get to the top of one wave, you notice the next wave is even bigger. I think we've certainly gotten to the point where organizations have accepted that Kubernetes is applicable across all the workloads that they're looking to put in production. Now the focus has shifted to optimizing those workloads, right? So what are the things that we need to run in our in-house data centers? What are things that we need, or can benefit from, using commodity hardware from one of the hyperscalers? How do we connect those environments and more effectively target workloads? So if I look at where things are going in the future: right now, we see a lot of things being targeted based on cluster, right? We say, hey, we have a data science cluster. It has characteristics because of X, Y, and Z, and we put all of our data science workloads into that cluster. In the future, I think we want to see more workload specific categorization of workloads, so that we're able to match available hardware with workloads rather than targeting a workload at a specific cluster. So a developer or data scientist can say, hey, my particular algorithm here needs access to GPU acceleration and the following frameworks, and then the Kubernetes scheduler is able to determine, of the available environments, what's the capacity, what are the available resources, and match it up accordingly. So we get into a more dynamic environment where the developers, and those that are actually building on top of these platforms, have to know less and less about the clusters they're running on. They just have to know what types of resources they need access to.
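A rough sketch of the kind of declaration Steven describes, where a workload states the hardware it needs and the scheduler finds a node that can satisfy it. The workload name, image, and node label are hypothetical; `nvidia.com/gpu` is the extended resource name advertised by the NVIDIA device plugin, and requesting it is what lets the scheduler place the pod on GPU-capable hardware.

```yaml
# Sketch: a training pod that asks for one GPU and (optionally) pins itself to
# nodes carrying an accelerator label applied by the cluster administrator.
apiVersion: v1
kind: Pod
metadata:
  name: train-job                       # hypothetical workload name
spec:
  nodeSelector:
    accelerator: nvidia-tesla-t4        # illustrative node label
  containers:
    - name: trainer
      image: example.com/trainer:latest # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1             # extended resource from the NVIDIA device plugin
```

The pod itself says nothing about which cluster or node it lands on; it only declares the resources it needs, which is the direction Steven points to.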
>> Lisa: So, sort of democratizing that. Steve, thank you for joining Dave and me on the program tonight, talking about the traction that you're seeing with AI/ML and Kubernetes as an enabler. We appreciate your time. >> Thank you. >> Thanks, Steve. >> For Dave Nicholson, I'm Lisa Martin. You're watching theCube, live from Los Angeles, KubeCon and CloudNativeCon 21. We'll be right back with our next guest. (subtle music playing) >> Lisa: I have been in the software and technology industry for over 12 years now. So I've had the opportunity as a marketer to really understand and interact with customers across the entire buyer's journey. Hi, I'm Lisa Martin and I'm a host of theCube. Being a host on theCube has been a dream of mine for the last few years. I had the opportunity to meet Jeff and Dave and John at EMC World a few years ago and got the courage up to say, Hey, I'm really interested in this. I love talking with customers...

Published Date : Oct 15 2021

