
Search Results for MTTR:

Ian Smith, Chronosphere | KubeCon + CloudNativeCon NA 2022


 

(upbeat music) >> Good Friday morning everyone from Motor City, Lisa Martin here with John Furrier. This is our third day, theCUBE's third day of coverage of KubeCon + CloudNativeCon 22' North America. John, we've had some amazing conversations the last three days. We've had some good conversations about observability. We're going to take that one step further and look beyond its three pillars. >> Yeah, this is going to be a great segment. Looking forward to this. This is about in depth conversation on observability. The guest is technical and it's on the front lines with customers. Looking forward to this segment. Should be great. >> Yeah. Ian Smith is here, the field CTO at Chronosphere. Ian, welcome to theCUBE. Great to have you. >> Thank you so much. It's great to be here. >> All right. Talk about the traditional three pillars, approach, and observability. What are some of the challenges with that, and how does Chronosphere solve those? >> Sure. So hopefully everyone knows people think of the three pillars as logs, metrics and traces. What do you do with that? There's no action there. It's just data, right? You collect this data, you go put it somewhere, but it's not actually talking about any sort of outcomes. And I think that's really the heart of the issue, is you're not achieving anything. You're just collecting a whole bunch of data. Where do you put it? What are you... What can you do with it? Those are the fundamental questions. And so one of the things that we're focused on at Chronosphere is, well, what are those outcomes? What is the real value of that? And for example, thinking about phases of observability. When you have an incident or you're trying to investigate something through observability, you probably want to know what's going on. You want to triage any problems you detect. And then finally, you want to understand the cause of those and be able to take longer term steps to address them. >> What do customers do when they start thinking about it? Because observability has that promise. Hey, you know, get the data, we'll throw AI at it. >> Ian: Yeah. >> And that'll solve the problem. When they get over their skis, when do they realize that they're really not tackling it properly, or the ones that are taking the right approach? What's the revelation? What's your take on that? You're in the front lines. What's going on with the customer? The good and the bad. What's the scene look like? >> Yeah, so I think the bad is, you know, you end up buying a lot of things or implementing even in open source or self building, and it's very disconnected. You're not... You don't have a workflow, you don't have a path to success. If you ask different teams, like how do you address these particular problems? They're going to give you a bunch of different answers. And then if you ask about what their success rate is, it's probably very uneven. Another key indicator of problems is that, well, do you always need particular senior engineers in your instance or to help answer particular performance problems? And it's a massive anti pattern, right? You have your senior engineers who are probably need to be focused on innovation and competitive differentiation, but then they become the bottleneck. And you have this massive sort of wedge of maybe less experienced engineers, but no less valuable in the overall company perspective, who aren't effective at being able to address these problems because the tooling isn't right, the workflows are incorrect. 
>> So the senior engineers are getting pulled in to kind of fix and troubleshoot or observe what the observability data did or didn't deliver. >> Correct. Yeah. And you know, the promise of observability, a lot of people talk about unknown unknowns and there's a lot of, you know, crafting complex queries and all these other things. It's a very romantic sort of deep dive approach. But realistically, you need to make it very accessible. If you're relying on complex query languages and the required knowledge about the architecture and everything every other team is doing, that knowledge is going to be super concentrated in just a couple of heads. And those heads shouldn't be woken up every time at 3:00 AM. They shouldn't be on every incident call. But oftentimes they are the sort of linchpin to addressing, oh, as a business we need to be up 99.99% of the time. So how do we accomplish that? Well, we're going to end up burning those people. >> Lisa: Yeah. >> But also it leads to a great dissatisfaction in the bulk of the engineers who are, you know, just trying to build and operate the services. >> So talk... You mentioned that some of the problems with the traditional three pillars are, it's not outcome based, it leads to silo approaches. What is Chronosphere's definition and can you walk us through those three phases and how that really gives you that competitive edge in the market? >> Yeah, so the three phases being know, triage and understand. So just knowing about a problem, and you can relate this very specifically to capabilities, but it's not capabilities first, not feature function first. So know: I need to be able to alert on things. So I do need to collect data that gives me those signals. But particularly as, you know, the industry starts moving towards SLOs, you start getting more business relevant data. Everyone knows about alert storms. And as you mentioned, you know, there's this great white hope of AI and machine learning, but AI and machine learning is putting trust in sort of a black box, or the more likely reality is that it's really a statistical model. And you have to go and spend a very significant amount of time programming it for sort of not great outcomes. So know: okay, I want to know that I have a problem, I want to maybe understand the symptoms of that particular problem. And then triage: okay, maybe I have a lot of things going wrong at the same time, but I need to be very precise about my resources. I need to be able to understand the scope and importance. Maybe I have five major SLOs being violated right now. Which one has the greatest business impact? Which symptoms are impacting my most valuable customers? And then from there, not getting into the situation, which is very common, where, okay, well we have every... Your customer facing engineering teams, they have to be on the call. So we have 15 customer facing web services. They all have to be on that call. Triage is that really important aspect of really mitigating the cost to the organization, because everyone goes, oh, well I achieved my MTTR, and my experience from a variety of vendors is that most organizations, unless you're essentially failing as a business, you achieve your SLA, you know, three nines, four nines, whatever it is. But the cost of doing that becomes incredibly extreme. >> This is a huge point.
I want to dig into that if you don't mind, 'cause you know, we've all been seeing the cost of ownership models in IT, the cost of doing business, the cost of the shark fin, the iceberg, what's under the water, all those metaphors. >> Ian: Yeah. >> When you look at what you're talking about here, there are actually, actually real hardcore costs that might be under the water, so to speak, like labor, senior engineering time, 'cause Cloud Native engineers are coding in the pipelines. A lot of impact. Can you quantify and just share an example or illustrate where the costs are? 'Cause this is something that's kind of not obvious. >> Ian: Yeah. >> On the hard costs. It's not like a dollar amount, but time, resources, a breach, wrong triage, a gap in the data. What are some of the costs? >> Yeah, and I think they're actually far more important than the hard costs of infrastructure and licensing. And of course there are many organizations out there using open source observability components together. And they go, oh, it's free. No licensing costs. But you think again about those outcomes. Okay, I have these 15 teams, and okay, I have X number of incidents a month, if I pull a representative from every single one of those teams onto the call. And it turns out that, you know, as we get down into further phases, we need to be able to understand and remediate the issue. But actually only two teams were required for that. There's 13 individuals who do not need to be on the call. Okay, yes, I met my SLA and MTTR, but from a competitive standpoint, I'm comparing myself to a very similar organization that only needed to involve those two engineers versus the 15 that I had over here. Who is going to be the most competitive? Who's going to be most differentiated? And it's not just in terms of number of lines of code, but leading to burnout of your engineers and the churn that comes with that. For VPs of engineering, particularly in today's economy, the hardest thing to do is acquire engineers and retain them. So why do you want to burn them unnecessarily when you can say, okay, well I can achieve the same or better result if I think more clearly about my observability, but reduce the number of people involved, reduce the number of, you know, senior engineers involved, and ultimately have those resources more focused on innovation. >> You know, one thing I want, at least want to get in there, but one thing that's come up a lot this year, more than I've ever seen before, we've heard about the skill gaps, obviously, but burnout is huge. >> Ian: Yes. >> That's coming up more and more. This is a real... This actually doesn't help the skills gap either. >> Ian: Correct. >> Because you got skills gap, that's a cost potentially. >> Ian: Yeah. >> And then you got burnout. >> Ian: Yeah. >> People just kind of sitting on their hands or just walking away. >> Yeah. So one of the things that we're doing with Chronosphere is, you know, while we do deal with the, you know, the pillar data, we're thinking about it more as, what can you achieve with that? Right? So, and aligning with the know, triage and understand. And so you think about things like alerts, you know, dashboards, being able to start triaging your symptoms. But really importantly, how do we bring in the capabilities of things like distributed tracing where they can actually impact this? And it's not just in the context of, well, what can we do in this one incident? So there may be scenarios where you absolutely do need those power users or those really sophisticated engineers.
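To make the incident-cost arithmetic Smith sketches above concrete (15 teams paged, only two actually needed), here is a rough back-of-the-envelope model. The team counts, incident rates, and hourly cost are invented assumptions for illustration, not figures from Chronosphere or any customer.

```python
# Rough incident-cost model: same SLA, different blast radius per incident.
# All figures are illustrative assumptions, not real customer data.

def monthly_incident_cost(incidents_per_month: int,
                          engineers_paged: int,
                          hours_per_incident: float,
                          loaded_hourly_rate: float) -> float:
    """Engineer-time cost of incident response for one month."""
    return incidents_per_month * engineers_paged * hours_per_incident * loaded_hourly_rate

# "Everyone on the call": one representative from each of 15 service teams.
broad_page = monthly_incident_cost(incidents_per_month=20, engineers_paged=15,
                                   hours_per_incident=2.0, loaded_hourly_rate=120.0)

# Better triage: only the two teams that actually own the root cause.
narrow_page = monthly_incident_cost(incidents_per_month=20, engineers_paged=2,
                                    hours_per_incident=2.0, loaded_hourly_rate=120.0)

print(f"broad paging:  ${broad_page:,.0f}/month")    # $72,000/month
print(f"narrow paging: ${narrow_page:,.0f}/month")   # $9,600/month
print(f"engineer-hours freed: {(broad_page - narrow_page) / 120.0:,.0f} per month")
```

Both scenarios hit the same SLA and the same MTTR; the difference is how many people were pulled in, which is the cost Smith argues rarely gets counted.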
But from a product challenge perspective, what I'm personally really excited about is how do you capture that insight and those capabilities and then feed that back in from a product perspective so it's accessible. So you know, everyone talks about unknown unknowns in observability and then everyone sort of is a little dismissive of monitoring, but monitoring that thing, that democratizes access and the decision making capacity. So if you say I once worked at an organization and there were three engineers in the whole company who could generate the list of customers who were impacted by a particular incident. And I was in post sales at the time. So anytime there was a major incident, need to go generate that list. Those three engineers were on every single incident until one of them got frustrated and built a tool. But he built it entirely on his own. But can you think from an observability perspective, can you build a thing that it makes all those kinds of capabilities accessible to the first point where you take that alert, you know, which customers are affected or whatever other context was useful last time, but took an hour, two hours to achieve. And so that's what really makes a dramatic difference over time, is it's not about the day one experience, but how does the product evolve with the requirements and the workflow- >> And Cloud Native engineers, they're coding so they can actually be reactive. That's interesting, a platform and a tool. >> Ian: Yes. >> And platform engineering is the hottest topic at this event. And this year, I would say with Cloud Native hearing a lot more. I mean, I think that comes from the fact that SREs not really SRE, I think it's more a platform engineer. >> Ian: Yes. >> Not everyone's an... Not company has an SRE or SRE environment. But platform engineering is becoming that new layer that enables the developers. >> Ian: Correct. >> This is what you're talking about. >> Yeah. And there's lots of different labels for it, but I think organizations that really think about it well they're thinking about things like those teams, that developer efficiency, developer productivity. Because again, it's about the outcomes. It's not, oh, we just need to keep the site reliable. Yes, you can do that, but as we talked about, there are many different ways that you can burn unnecessary resources. But if you focus on developer efficiency and productivity, there's retainment, there's that competitive differentiation. >> Let's uplevel those business outcomes. Obviously you talked about in three phases, know, triage and understand. You've got great alignment with the Cloud Native engineers, the end users. Imagine that you're facilitating company's ability to reduce churn, attract more talent, retain talent. But what are some of the business outcomes? Like to the customer experience to the brand? >> Ian: Sure. >> Talk about it in some of those contexts. >> Yeah. One of the things that not a lot of organizations think about is, what is the reliability of my observability solution? It's like, well, that's not what I'm focused on. I'm focused on the reliability of my own website. Okay, let's take the, common open source pattern. I'm going to deploy my observability solution next to my core site infrastructure. Okay, I now have a platform problem because DNS stopped working in cloud provider of my choice. It's also affecting my observability solution. So at the moment that I need- >> And the tool chain and everything else. >> Yeah. 
At the moment that I need it the most to understand what's going on and to be able to know triage and understand that fails me at the same time. It's like, so reliability has this very big impact. So being able to make sure that my solution's reliable so that when I need it the most, and I can affect reliability of my own solution, my own SLA. That's a really key aspect of it. One of the things though that we, look at is it's not just about the outcomes and the value, it's ROI, right? It's what are you investing to put into that? So we've talked a little bit about the engineering cost, there's the infrastructure cost, but there's also a massive data explosion, particularly with Cloud Native. >> Yes. Give us... Alright, put that into real world examples. A customer that you think really articulates the value of what Chronosphere is delivering and why you're different in the market. >> Yeah, so DoorDash is a great customer example. They're here at KubeCon talking about their experience with Chronosphere and you know, the Cloud Native technologies, Prometheus and those other components align with Chronosphere. But being able to undergo, you know, a transformation, they're a Cloud Native organization, but going a transformation from StatsD to very heavy microservices, very heavy Kubernetes and orchestration. And doing that with your massive explosion, particularly during the last couple of years, obviously that's had a very positive impact on their business. But being able to do that in a cost effective way, right? One of the dirty little secrets about observability in particular is your business growth might be, let's say 50%, 60%, your infrastructure spend in the cloud providers is maybe going to be another 10, 15% on top of that. But then you have the intersection of, well my engineers need more data to diagnose things. The business needs more data to understand what's going on. Plus we've had this massive explosion of containers and everything like that. So oftentimes your business growth is going to be more than doubled with your observability data growth and SaaS solutions and even your on-premises solutions. What's the main cost driver? It's the volume of data that you're processing and storing. And so Chronosphere one of the key things that we do, because we're focused on organizational pain for larger scale organizations, is well, how do we extract the maximum volume of the data you're generating without having to store all of that data and then present it not just from a cost perspective, but also from a performance perspective. >> Yes. >> John: Yeah. >> And so feeding all into developer productivity and also lowering that investment so that your return can stand out more clearly and more valuably when you are assessing that TCO. >> Better insights and outcomes drives developer productivity for sure. That also has top theme here at KubeCon this year. It always is, but this is more than ever 'cause of the velocity. My question for you, given that you're the field chief technology officer for Chronosphere and you have a unique position, you've got a great experience in the industry, been involved in some really big companies and cutting edge. What's the competitive landscape? 'Cause the customers sometimes are confused by all the pitches they're getting from other vendors. Some are bolting on observability. Some have created like I would say, a shim layer or horizontally scalable platform or platform engineering approach. It's a data problem. Okay. 
This is a data architecture challenge. You mentioned that many times. What's the difference between a pretender and a player in this space? What's the winning architecture look like? What's a, I won't say phony or fake solution, but ones that customers should be aware of? Because my opinion, if you have a gap in the data or you configure it wrong, like a bolt on and say DNS crashes you're dead in the water. >> Ian: Yeah. >> What's the right approach from a customer standpoint? How do they squint through all the noise to figure out what's the right approach? >> Yeah, so I mean, I think one of the ways, and I've worked with customers in a pre-sales capacity for a very long time I know all the tricks of guiding you through. I think it needs to be very clear that customers should not be guided by the vendor. You don't talk to one vendor and they decide, Oh, I'm going to evaluate based off this. We need to particularly get away from feature based evaluations. Features are very important, but they're all have to be aligned around outcomes. And then you have to clearly understand, where am I today? What do I do today? And what is going to be the transformation that I have to go through to take advantage of these features? They can get very entrancing to say, Oh, there's a list of 25 features that this solution has that no one else has, but how am I going to get value out of that? >> I mean, distributed tracing is a distributed word. Distributed is the key word. This is a system architecture. The holistic big picture comes in. How do they figure that out? Knowing what they're transforming into? How does it fit in? >> Ian: Yeah. >> What's the right approach? >> Too often I say distributed tracing, particularly, you know, bought, because again, look at the shiny features look at the the premise and the MTTR expectations, all these other things. And then it's off to the side. We go through the traditional usage of metrics very often, very log heavy approaches, maybe even some legacy APM. And then it's sort of at last resort. And out of all the tools, I think distributed tracing is the worst in the problem we talked about earlier where the most sophisticated engineers, the ones who are being longest tenured, are the only ones who end up using it. So adoption is really, really poor. So again, what do we do today? Well, we alert, we probably want to understand our symptoms, but then what is the key problem? Oh, we spend a lot of time digging into the where the problem exists in my architecture, we talked about, you know, getting every engineer in at the same time, but how do we reduce the number of engineers involved? How do we make it so that, well, this looks like a great day one experience, but what is my day 30 experience like? Day 90. How is the product get more valuable? How do I get my most senior engineers out of this, not just on day one, but as we progress through it? >> You got to operationalize it. That's the key. >> Yeah, Correct. >> Summarize this as we wrap here. When you're in customer conversations, what is the key factor behind Chronosphere's success? If you can boil it down to that key nugget, what is it? >> I think the key nugget is that we're not just fixated on sort of like technical features and functions and frankly gimmicks of like, Oh, what could you possibly do with these three pillars of data? It's more about what can we do to solve organizational pain at the high level? You know, things like what is the cost of these solutions? 
But then also on the individual level, it's like, what exactly is an engineer trying to do? And how is their quality of life affected by this kind of tooling? And it's something I'm very passionate about. >> Sounds like it. Well, the quality of life's important, right? For everybody, for the business, and ultimately ends up affecting the overall customer experience. So great job, Ian, thank you so much for joining John and me talking about what you guys are doing beyond the three pillars of observability at Chronosphere. We appreciate your insights. >> Thank you so much. >> John: All right. >> All right. For John Furrier and our guest, I'm Lisa Martin. You're watching theCUBE live Friday morning from KubeCon + CloudNativeCon 22' from Detroit. Our next guest joins theCUBE momentarily, so stick around. (upbeat music)
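A minimal sketch of the "know" and "triage" phases Smith describes: detect which SLOs are burning error budget too fast, then rank the violations by business impact so only the owning teams get paged. The SLO names, thresholds, and weights below are illustrative assumptions, not Chronosphere's product or API.

```python
from dataclasses import dataclass

@dataclass
class SLOWindow:
    name: str                # service-level objective being tracked
    target: float            # e.g. 0.999 means 99.9% of requests succeed
    total_requests: int      # requests observed in the window
    failed_requests: int     # failed requests in the window
    business_weight: float   # relative revenue / customer impact (assumed)
    owning_team: str

def burn_rate(slo: SLOWindow) -> float:
    """How fast the error budget is being consumed (1.0 = exactly on budget)."""
    error_budget = 1.0 - slo.target
    observed_error_rate = slo.failed_requests / max(slo.total_requests, 1)
    return observed_error_rate / error_budget

def triage(slos: list[SLOWindow], alert_threshold: float = 2.0):
    """'Know': keep only SLOs burning budget too fast.
    'Triage': order them by burn rate weighted by business impact."""
    violations = [s for s in slos if burn_rate(s) >= alert_threshold]
    ranked = sorted(violations, key=lambda s: burn_rate(s) * s.business_weight, reverse=True)
    return [(s.name, round(burn_rate(s), 1), s.owning_team) for s in ranked]

slos = [
    SLOWindow("checkout-latency", 0.999, 120_000, 540, business_weight=5.0, owning_team="payments"),
    SLOWindow("search-availability", 0.999, 400_000, 1_200, business_weight=2.0, owning_team="search"),
    SLOWindow("profile-page", 0.99, 80_000, 950, business_weight=0.5, owning_team="accounts"),
]

for name, rate, team in triage(slos):
    print(f"page {team}: {name} burning error budget at {rate}x")
```

With five SLOs violated at once, this is the question Smith poses: which one has the greatest business impact, and which teams actually need to be on the call.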

Published Date : Oct 28 2022


Varun Talwar, Tetrate | KubeCon + CloudNativeCon Europe 2022


 

>> The cube presents KubeCon and CloudNativeCon Europe '22, brought to you by the Cloud Native Computing Foundation. >> Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. It is near the end of the day. That's okay. We, we, we have plenty of energy because we're bringing it. I'm Keith Townsend, along with my cohost, Paul Gillin. Paul, this has been an amazing day. Thus far. We've talked to some incredible folks. You got a chance to walk the show floor. Yeah. So I'm really excited to hear what's the vibe of the show floor, 7,500 people in Europe following the protocols, but getting stuff done. >> Well, first I have to say that I haven't traveled for two years. So getting out to a show by, by itself is, is an amazing experience, but a show like this with all of the energy and the crowd, it is enormously crowded at lunchtime today. It's hard to believe how many people have made it, made it all the way here out on the floor. The booths are crowded. The, the demonstrations are what you would expect at a show like this. Lots of code, lots of, lots of block diagrams, lots of architecture. I think the audience is eating it up. You know, when they're, they're on their laptops, they're coding on their laptops. And this is very much symbolic of the crowd that comes to a KubeCon. And it's, it's a, just a delight to see them out here. It's so much fun. >> So speaking of lots of code, we have Varun Talwar, co-founder of Tetrate, but, you know, just saw, didn't realize Istio becoming part of CNCF was the latest in the field. >> Yeah. Istio is, you know, it was always one of those service mesh projects which was very widely adopted. And it's great to see that going into the Cloud Native Computing Foundation. And I think what happened with Kubernetes, like it just became the de facto container orchestrator. I think a similar thing is happening with Istio and service mesh. >> What, >> So I'm sorry, Keith, what's the process like of becoming adopted by and incubated by the CNCF? >> Yeah, I mean, it's pretty simple. It's an application process into the foundation where you say, you know, what the project is about, how diverse is your contributor base, how many people are using it. And it goes through a review with the TOC. It goes through a review of like all the users and contributors. And if you see a good base of deployments in production, if you see a diverse set of contributors, then you can basically be part of the CNCF. And as you know, CNCF is very flexible on governance. Basically it's like, bring your own governance. And then the projects can basically seamlessly go in and, you know, get into incubation and gradually graduate. >> Another project close and dear to you: Envoy. Yes. Now I've always considered Envoy just as what it is. It's a, I've always used it as, as a load balancer type thing. So I've always considered it somewhat of a gateway proxy, but Envoy Gateway was announced last week. Yes. >> So Envoy has basically won the data plane war in cloud native workloads. Right. And, but, and this was over the last five years, Envoy was announced even way before Istio, and it is used in various deployment models. You can use it as a front load balancer. You can use it as an ingress in Kubernetes. You can use it as a sidecar in a service mesh like Istio, and it's lightweight, dynamically programmable, very open with a wide community. But what we looked at when we looked at the Envoy base was that it still wasn't very approachable for application developers.
Like when you still see the nouns that it uses in terms of clusters and so on, it is not what an application developer is used to. And so Envoy Gateway is really an effort to make Envoy even stronger out of the box for an application developer to use it as an API gateway.
Right? Because if you think about it, ultimately, you know, developers start deploying workloads onto their Kubernetes clusters. They need some functionality like an API gateway to expose their services, and you want to make it really, really easy and simple. Right? I often say, like what engine X was to static websites, Envoy Gateway will be to, you know, APIs. And it's really the community coming together. We are a big part, but also VMware, as well as end users, like in this case Fidelity, who is investing heavily into Envoy and API gateway use cases, joining forces saying, let's do this in upstream Envoy. >> I'd like to go back to Istio because this is a major step in Istio's development. Where do you see Istio coming into the picture? And Kubernetes is already broadly accepted. Is Istio generally adopted as an after step to, to Kubernetes, or are they increasingly being adopted together? >> Yeah. So usually it's adopted as a follow-on step, and the reason is primarily the learning curve, right. It's just, get used to all the Kubernetes, and, you know, it takes a while for people to understand the concepts, get applications going, and then, you know, Istio was made to basically solve, you know, three big problems there. Right. Which is around observability, traffic management and security. Right. So as people deploy more services, they figure out, okay, how do I connect them? How do I secure all the connections, and how do I do more fine grained routing? I'm doing more frequent deployments with Kubernetes, but I would like to do canary releases to make safer rollouts. Right. And those are the problems that Istio solves. And I don't really want to know the metrics of like, yes, it'll be, it's good to know all the node level and CPU level metrics. But really what I want to know is how are my services performing? Where is the latency, right? Where is the error rate? And those are the things that Istio gives out of the box. So that's like a very natural next step for people using Kubernetes. And, you know, Tetrate was really formed as a company to enable enterprises to adopt Istio, Envoy and service mesh in their environment. Right? So we do everything from run an academy for like courses and certifications on Envoy and Istio, to a distribution which is, you know, compliant with various builds and tooling, as well as a whole platform on top of Istio to make it usable and deployable in a large enterprise. >> So paint the end to end for me, for Istio and Envoy. I know they can be used in similar fashions, like sidecars, but how they work together to deliver value. >> Yeah. So if you step back from technology a little bit, right, and you, like, sort of look at what customers are doing and facing, right, really it is about, they have applications. They have some applications, the new workloads, going into Kubernetes and cloud native. They have a lot of legacy workloads, a lot of workloads on VMs, and with different teams in different clouds, or due to acquisitions, they're very heterogeneous right now. Our mission, Tetrate's mission, is to power the world's application traffic, but really the business value that we are going after is consistency of application operations. Right? And I'll tell you how powerful that is, because the more places you can deploy Envoy into, the more places you can deploy Istio into, the more consistency you can get for the value pillars of observability, traffic management, and security. Right. And really, if you think about what is the journey for an enterprise to migrate from workloads into Kubernetes or from data centers into cloud, the challenges are around security and connectivity, right? Because if it's Kubernetes fabric, the same Kubernetes app in a data center can be deployed exactly as is in cloud. Right. Right. So why is it hard to migrate to cloud, right? The challenges come in the security and networking layer. >> Right. So let's talk about that with some granularity and you can maybe gimme some concrete examples, right? Because as I think about the hybrid infrastructure where I have VMs on premises, cloud native stuff running in the public cloud, or even cloud native next to VMs, right, I do security differently when I'm in the VM world. I say, you know what, this IP address can't talk to this Oracle database server. Right. That's not how cloud native works. Right. I, I can't say, if I have a cloud native app talking to an Oracle database, there's no IP address. Yeah. But how do I, how, how do I secure the communication between the two? Exactly. >> So I think you hit it straight on the head. So which is, with things like Kubernetes, IP is no longer really a valid noun where you can say, because things will auto scale, either from Kubernetes or, you know, the cloud autoscales. So really the noun that is becoming now is service. So, and I could have many instances of it. They could go scale up and down. But what I'm saying is this service, which, you know, some app server, some application, can talk to the Oracle service. Hmm. And what we have done with the Tetrate Service Bridge, which is why we call our platform Service Bridge, because it's all about bridging all the services, is whatever you're running on the VM can be onboarded onto the mesh, as if it were a Kubernetes service. Right. And then my policy around this service can talk to this service is the same in Kubernetes, is the same for Kubernetes talking to a VM, is the same for VM to VM, both in terms of access control and in terms of encryption. What we do is, because the Envoy proxy goes everywhere and the traffic is going through them, we actually take care of distributing certs, encrypting everything, and it becomes, and that is what leads to consistent application operations. And that's where the value is. >> We're seeing a lot of activity around observability right now, a lot of different tools, both open source and proprietary. Istio is certainly part of the OpenTelemetry project, I believe. Are you part of that? Yes. But the customers are still piecing together a lot of tools on their own. Right. Do you see a, a more coherent framework forming around observability? >> I think very much so. And there are layers of observability, right? So the thing is, like, if we tell you there is latency between these two services at the L7 layer, the first question is, is it the service? Is it the Envoy? Or is it the network? It sounds like a very simple question. It's actually not that easy to answer. And that is one of the questions we answer in, like, platforms like ours. Right. But even that is not the end. If it's neither of these three, it could be the node. It could be the hardware underneath. Right.
And those, you realize, are different observability tools that work on each layer. So I think there's a lot of work to be done to enable end users to go from app, like from top to bottom, to reduce what is called MTTR, or mean time to, you know, resolution of an issue: where is the problem? >> But I think with tools like what is being built now, it is becoming easier, right? Because one of the things we have to realize is, with things like Kubernetes, we made the development of microservices easier. Right. And that's great. But as a result, what is happening is that more things are getting broken down. So there is more network in between. So that's harder. It gets harder to troubleshoot. It gets harder to secure everything. It gets harder to get visibility from everywhere. Right. So I often say, like, actually, if you're embarking down the microservices journey, you actually, you better have a platform like this. Otherwise, you know, you're, you're taking on operational cost. >> Wow. Jevons paradox. The more accessible we make something, the more it gets used, the more complex it is. That's been a theme here at KubeCon + CloudNativeCon Europe 2022 from Valencia, Spain. I'm Keith Townsend, along with my host, Paul Gillin. And you're watching theCUBE, the leader in high tech coverage.

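Talwar's point that identity replaces IP addresses in the mesh can be illustrated with a service-level policy. The sketch below uses an Istio-style AuthorizationPolicy (Istio being the mesh discussed above) applied through the official Kubernetes Python client; the namespace, service account, and workload names are made up, and this shows generic Istio usage, not Tetrate Service Bridge's own API.

```python
# Sketch: "this service can talk to this service", expressed as workload identity
# rather than IP addresses. Assumes an Istio mesh and the official `kubernetes`
# Python client; all names below are illustrative.
from kubernetes import client, config

allow_orders_to_db = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "AuthorizationPolicy",
    "metadata": {"name": "orders-may-call-orders-db", "namespace": "prod"},
    "spec": {
        # Applies to whatever pods (or mesh-onboarded VMs) carry this label,
        # wherever they run; the policy never mentions an IP.
        "selector": {"matchLabels": {"app": "orders-db"}},
        "action": "ALLOW",
        "rules": [{
            "from": [{
                "source": {
                    # SPIFFE-style identity of the calling workload.
                    "principals": ["cluster.local/ns/prod/sa/orders-service"]
                }
            }]
        }],
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="security.istio.io",
    version="v1beta1",
    namespace="prod",
    plural="authorizationpolicies",
    body=allow_orders_to_db,
)
```

Because the Envoy sidecars terminate mutual TLS and check the caller's identity, the same rule covers pod-to-pod, pod-to-VM, and VM-to-VM traffic, which is the consistency of application operations Talwar describes.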
Published Date : May 18 2022


Larry Lancaster, Zebrium | Virtual Vertica BDC 2020


 

>> Announcer: It's theCUBE! Covering the Virtual Vertica Big Data Conference 2020, brought to you by Vertica. >> Hi, everybody. Welcome back. You're watching theCUBE's coverage of the Vertica Virtual Big Data Conference. It was, of course, going to be in Boston at the Encore Hotel. Win big with big data with the new casino, but obviously Coronavirus has changed all that. Our hearts go out, and we have empathy for those people who are struggling. We are going to continue our wall-to-wall coverage of this conference and we're here with Larry Lancaster, who's the founder and CTO of Zebrium. Larry, welcome to theCUBE. Thanks for coming on. >> Hi, thanks for having me. >> You're welcome. So first question, why did you start Zebrium? >> You know, I've been dealing with machine data a long time. So for those of you who don't know what that is, if you can imagine servers or whatever goes on in a data center or in a SaaS shop, there's data coming out of those servers, out of those applications, and basically, you can build a lot of cool stuff on that. So there's a lot of metrics that come out and there's a lot of log files that come. And so, I've built this... Basically spent my career building that sort of thing. So tools on top of that or products on top of that. The problem is that since, at least, log files are completely unstructured, you're always doing the same thing over and over again, which is going in and understanding the data and extracting the data and all that stuff. It's very time consuming. If you've done it like five times you don't want to do it again. So really, my idea was, at this point with machine learning where it's at, there's got to be a better way. So Zebrium was founded on the notion that we can just do all that automatically. We can take a pile of machine data, we can turn it into a database, and we can build stuff on top of that. And so the company is really all about bringing that value to the market. >> That's cool. I want to get into that, just better understand who you're disrupting and understand that opportunity better. But before I do, tell us a little bit about your background. You got kind of an interesting background. Lot of tech jobs. Give us some color there. >> Yeah, so I started in the Valley I guess 20 years ago and when my son was born I left grad school. I was in grad school over at Berkeley, Biophysics. And I realized I needed to go get a job so I ended up starting in software and I've been there ever since. I mean, I spent a lot of time at, I guess I cut my teeth at NetApp, which was a storage company. And then I co-founded a business called Glassbeam, which was kind of an ETL database company. And then after that I ended up at Nimble Storage. Another company, EMC, ended up buying Glassbeam, so I went over there. And then after Nimble, though, which is where I built the InfoSight platform, that's where I kind of... After that I was able to step back and take a year and a half and just go into my basement, actually, this is my kind of workspace here, and come up with the technology and actually build it so that I could go raise money and get a team together to build Zebrium. So that's really my career in a nutshell. >> And you've got Hello Kitty over your right shoulder, which is kind of cool. >> That's right. >> And then up to the left you got your monitor, right? >> Well, I had it. It's over here, yeah. >> But it was great! Pull it out, pull it out, let me see it. So, okay, so you got that. So what do you do? You just sit there and code all night or what?
>> Yeah, that's right. So Hello Kitty's over here. I have a daughter and she set up my workspace here on this side with Hello Kitty and so on. And over on this side, I've got my recliner where I basically lay it all the way back and then I pivot this thing down over my face and put my keyboard on my lap and I can just sit there for like 20 hours. It's great. Completely comfortable. >> That's cool. All right, better put that monitor back or our guys will yell at me. But so, obviously, we're talking to somebody with serious coding chops and I'll also add that the Nimble InfoSight, I think it was one of the best pickups that HP, HPE, has had in a while. And the thing that interested me about that, Larry, is the ability that the company was able to take that InfoSight and port it very quickly across its product lines. So that says to me it was a modern architecture, I'm sure API, microservices, and all those cool buzzwords, but the proof is in their ability to bring that IP to other parts of the portfolio. So, well done. >> Yeah, well thanks. Appreciate that. I mean, they've got a fantastic team there. And the other thing that helps is when you have the notion that you don't just build on top of the data, you extract the data, you structure it, you put that in a database, we used Vertica there for that, and then you build on top of that. Taking the time to build that layer is what lets you build a scalable platform. >> Yeah, so, why Vertica? I mean, Vertica's been around for a while. You remember you had the old RDBMSs, Oracles, Db2s, SQL Server, and then the database was kind of a boring market. And then, all of a sudden, you had all of these MPP companies come out, a spate of them. They all got acquired, including Vertica. And they've all sort of disappeared and morphed into different brands and Micro Focus has preserved the Vertica brand. But it seems like Vertica has been able to survive the transitions. Why Vertica? What was it about that platform that was unique and interested you? >> Well, I mean, so they're the first ones to build what I would call a real column store that's kind of market capable, right? So there was the C-Store project at Berkeley, which Stonebraker was involved in. And then that became sort of the seed from which Vertica was spawned. So you had this idea of, let's lay things out in a columnar way. And when I say columnar, I don't just mean that the data for every column is in a different set of files. What I mean by that is it takes full advantage of things like run length encoding, and other encodings, and block compression, and so you end up with these massive orders of magnitude savings in terms of the data that's being pulled off of storage, as well as while it's moving through the pipeline internally in Vertica's query processing. So why am I saying all this? Because it's fundamentally, it was a fundamentally disruptive technology. I think column stores are ubiquitous now in analytics. And I think you could name maybe a couple of projects, which are mostly open source, who do something like Vertica does, but name me another one that's actually capable of serving an enterprise as a relational database. I still think Vertica is unique in being that one. >> Well, it's interesting because you're a startup. And so a lot of startups would say, okay, we're going with a born-in-the-cloud database. Now Vertica touts that, well look, we've embraced cloud. You know, we have, we run in the cloud, we run on-prem, all different optionality.
And you hear a lot of vendors say that, but a lot of times they're just taking their stack and stuffing it into the cloud. But, so why didn't you go with a cloud-native database and is Vertica able to, I mean, obviously, that's why you chose it, but I'm interested from a technologist standpoint as to why you, again, made that choice given all these other choices around there. >> Right, I mean, again, I'm not, so... As I explained a column store, which I think is the appropriate definition, I'm not aware of another cloud-native-- >> Hm, okay. >> I'm aware of other cloud-native transactional databases, I'm not aware of one that has the analytics form it and I've tried some of them. So it was not like I didn't look. What I was actually impressed with and I think what let me move forward using Vertica in our stack is the fact that Eon really is built from the ground up to be cloud-native. And so we've been using Eon almost ever since we started the work that we're doing. So I've been really happy with the performance and with reliability of Eon. >> It's interesting. I've been saying for years that Vertica's a diamond in the rough and it's previous owner didn't know what to do with it because it got distracted and now Micro Focus seems to really see the value and is obviously putting some investments in there. >> Yeah >> Tell me more about your business. Who are you disrupting? Are you kind of disrupting the do-it-yourself? Or is there sort of a big whale out there that you're going to go after? Add some color to that. >> Yeah, so our broader market is monitoring software, that's kind of the high-level category. So you have a lot of people in that market right now. Some of them are entrenched in large players, like Datadog would be a great example. Some of them are smaller upstarts. It's a pretty, it's a pretty saturated market. But what's happened over the last, I'd say two years, is that there's been sort of a push towards what's called observability in terms of at least how some of the products are architected, like Honeycomb, and how some of them are messaged. Most of them are messaged these days. And what that really means is there's been sort of an understanding that's developed that that MTTR is really what people need to focus on to keep their customers happy. If you're a SAS company, MTTR is going to be your bread and butter. And it's still measured in hours and days. And the biggest reason for that is because of what's called unknown unknowns. Because of complexity. Now a days, things are, applications are ten times as complex as they used to be. And what you end up with is a situation where if something is new, if it's a known issue with a known symptom and a known root cause, then you can setup a automation for it. But the ones that really cost a lot of time in terms of service disruption are unknown unknowns. And now you got to go dig into this massive mass of data. So observability is about making tools to help you do that, but it's still going to take you hours. And so our contention is, you need to automate the eyeball. The bottleneck is now the eyeball. And so you have to get away from this notion of a person's going to be able to do it infinitely more efficient and recognize that you need automated help. When you get an alert agent, it shouldn't be that, "Hey, something weird's happening. Now go dig in." It should be, "Here's a root cause and a symptom." And that should be proposed to you by a system that actually does the observing. That actually does the watching. 
And that's what Zebrium does. >> Yeah, that's awesome. I mean, you're right. The last thing you want is just another alert and it say, "Go figure something out because there's a problem." So how does it work, Larry? In terms of what you built there. Can you take us inside the covers? >> Yeah, sure. So there's really, right now there's two kinds of data that we're ingesting. There's metrics and there's log files. Metrics, there's actually sort of a framework that's really popular in DevOp circles especially but it's becoming popular everywhere, which is called Prometheus. And it's a way of exporting metrics so that scrapers can collect them. And so if you go look at a typical stack, you'll find that most of the open source components and many of the closed source components are going to have exporters that export all their stacks to Prometheus. So by supporting that stack we can bring in all of those metrics. And then there's also the log files. And so you've got host log files in a containerized environment, you've got container logs, and you've got application-specific logs, perhaps living on a host mount. And you want to pull all those back and you want to be able to associate this log that I've collected here is associated with the same container on the same host that this metric is associated with. But now what? So once you've got that, you've got a pile of unstructured logs. So what we do is we take a look at those logs and we say, let's structure those into tables, right? So where I used to have a log message, if I look in my log file and I see it says something like, X happened five times, right? Well, that event types going to occur again and it'll say, X happened six times or X happened three times. So if I see that as a human being, I can say, "Oh clearly, that's the same thing." And what's interesting here is the times that X, that X happened, and that this number read... I may want to know when the numbers happened as a time series, the values of that column. And so you can imagine it as a table. So now I have table for that event type and every time it happens, I get a row. And then I have a column with that number in it. And so now I can do any kind of analytics I want almost instantly across my... If I have all my event types structured that way, every thing changes. You can do real anomaly detection and incident detection on top of that data. So that's really how we go about doing it. How we go about being able to do autonomous monitoring in a way that's effective. >> How do you handle doing that for, like the Spoke app? Do you have to, does somebody have to build a connector to those apps? How do you handle that? >> Yeah, that's a really good question. So you're right. So if I go and install a typical log manager, there'll be connectors for different apps and usually what that means is pulling in the stuff on the left, if you were to be looking at that log line, and it will be things like a time stamp, or a severity, or a function name, or various other things. And so the connector will know how to pull those apart and then the stuff to the right will be considered the message and that'll get indexed for search. And so our approach is we actually go in with machine learning and we structure that whole thing. So there's a table. And it's going to have a column called severity, and timestamp, and function name. And then it's going to have columns that correspond to the parameters that are in that event. And it'll have a name associated with the constant parts of that event. 
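A toy version of the log-structuring idea Lancaster describes, to make the "imagine it as a table" step concrete. Zebrium's actual approach uses machine learning to discover event types and column types; the sketch below just masks numbers to form an event template and keeps the masked-out values as columns. The log lines and names are invented for illustration.

```python
import re
from collections import defaultdict

# Toy structuring: mask the variable tokens (here, just numbers) so that
# "X happened 5 times" and "X happened 6 times" collapse to one event type,
# then keep the masked-out values as columns of that event type's table.
NUM = re.compile(r"\b\d+(?:\.\d+)?\b")

def event_type(message: str) -> str:
    return NUM.sub("<*>", message)

def structure(log_lines):
    tables = defaultdict(list)        # event type -> rows of (timestamp, params)
    for ts, message in log_lines:
        tables[event_type(message)].append((ts, NUM.findall(message)))
    return tables

logs = [
    ("10:01:02", "cache flush happened 5 times"),
    ("10:02:02", "cache flush happened 6 times"),
    ("10:03:02", "cache flush happened 3 times"),
    ("10:03:40", "request to shard 7 timed out after 1500 ms"),
    ("10:04:11", "cache flush happened 47 times"),   # stands out from the others
]

tables = structure(logs)
for etype, rows in tables.items():
    print(etype, rows)

# With the values in columns, a crude anomaly check becomes a table scan
# rather than an eyeball pass over raw text:
flushes = [float(params[0]) for _, params in tables["cache flush happened <*> times"]]
typical = sum(flushes[:-1]) / len(flushes[:-1])
if flushes[-1] > 3 * typical:
    print(f"anomalous count: {flushes[-1]} vs typical ~{typical:.1f}")
```

That is roughly the claim in the interview: once every event type is a table, anomaly and incident detection become queries over columns instead of an eyeball pass over raw text.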
And so you end up with a situation where you've structured all of it automatically so we don't need collectors. It'll work just as well on your home-grown app that has no collectors or no parsers to find or anything. It'll work immediately just as well as it would work on anything else. And that's important, because you can't be asking people for connectors to their own applications. It just, it becomes now they've go to stop what they're doing and go write code for you, for your platform and they have to maintain it. It's just untenable. So you can be up and running with our service in three minutes. It'll just be monitoring those for you. >> That's awesome! I mean, that is really a breakthrough innovation. So, nice. Love to see that hittin' the market. Who do you sell to? Both types of companies and what role within the company? >> Well, definitely there's two main sort of pushes that we've seen, or I should say pulls. One is from DevOps folks, SRE folks. So these are people who are tasked with monitoring an environment, basically. And then you've got people who are in engineering and they have a staging environment. And what they actually find valuable is... Because when we find an incident in a staging environment, yeah, half the time it's because they're tearing everything up and it's not release ready, whatever's in stage. That's fine, they know that. But the other half the time it's new bugs, it's issues and they're finding issues. So it's kind of diverged. You have engineering users and they don't have titles like QA, they're Dev engineers or Dev managers that are really interested. And then you've got DevOps and SRE people there (mumbles). >> And how do I consume your product? Is the SAS... I sign up and you say within three minutes I'm up and running. I'm paying by the drink. >> Well, (laughs) right. So there's a couple ways. So, right. So the easiest way is if you use Kubernetes. So Kubernetes is what's called a container orchestrator. So these days, you know Docker and containers and all that, so now there's container orchestrators have become, I wouldn't say ubiquitous but they're very popular now. So it's kind of on that inflection curve. I'm not exactly sure the penetration but I'm going to say 30-40% probably of shops that were interested are using container orchestrators. So if you're using Kubernetes, basically you can install our Kubernetes chart, which basically means copying and pasting a URL and so on into your little admin panel there. And then it'll just start collecting all the logs and metrics and then you just login on the website. And the way you do that is just go to our website and it'll show you how to sign up for the service and you'll get your little API key and link to the chart and you're off and running. You don't have to do anything else. You can add rules, you can add stuff, but you don't have to. You shouldn't have to, right? You should never have to do any more work. >> That's great. So it's a SAS capability and I just pay for... How do you price it? >> Oh, right. So it's priced on volume, data volume. I don't want to go too much into it because I'm not the pricing guy. But what I'll say is that it's, as far as I know it's as cheap or cheaper than any other log manager or metrics product. It's in that same neighborhood as the very low priced ones. Because right now, we're not trying to optimize for take. We're trying to make a healthy margin and get the value of autonomous monitoring out there. Right now, that's our priority. 
>> And it's running in the cloud, is that right? AWB West-- >> Yeah, that right. Oh, I should've also pointed out that you can have a free account if it's less than some number of gigabytes a day we're not going to charge. Yeah, so we run in AWS. We have a multi-tenant instance in AWS. And we have a Vertica Eon cluster behind that. And it's been working out really well. >> And on your freemium, you have used the Vertica Community Edition? Because they don't charge you for that, right? So is that how you do it or... >> No, no. We're, no, no. So, I don't want to go into that because I'm not the bizdev guy. But what I'll say is that if you're doing something that winds up being OEM-ish, you can work out the particulars with Vertica. It's not like you're going to just go pay retail and they won't let you distinguish between tests, and prod, and paid, and all that. They'll work with you. Just call 'em up. >> Yeah, and that's why I brought it up because Vertica, they have a community edition, which is not neutered. It runs Eon, it's just there's limits on clusters and storage >> There's limits. >> But it's still fully functional though. >> So to your point, we want it multi-tenant. So it's big just because it's multi-tenant. We have hundred of users on that (audio cuts out). >> And then, what's your partnership with Vertica like? Can we close on that and just describe that a little bit? >> What's it like. I mean, it's pleasant. >> Yeah, I mean (mumbles). >> You know what, so the important thing... Here's what's important. What's important is that I don't have to worry about that layer of our stack. When it comes to being able to get the performance I need, being able to get the economy of scale that I need, being able to get the absolute scale that I need, I've not been disappointed ever with Vertica. And frankly, being able to have acid guarantees and everything else, like a normal mature database that can join lots of tables and still be fast, that's also necessary at scale. And so I feel like it was definitely the right choice to start with. >> Yeah, it's interesting. I remember in the early days of big data a lot of people said, "Who's going to need these acid properties and all this complexity of databases." And of course, acid properties and SQL became the killer features and functions of these databases. >> Who didn't see that one coming, right? >> Yeah, right. And then, so you guys have done a big seed round. You've raised a little over $6 million dollars and you got the product market fit down. You're ready to rock, right? >> Yeah, that's right. So we're doing a launch probably, well, when this airs it'll probably be the day before this airs. Basically, yeah. We've got people... Like literally in the last, I'd say, six to eight weeks, It's just been this sort of pique of interest. All of a sudden, everyone kind of gets what we're doing, realizes they need it, and we've got a solution that seems to meet expectations. So it's like... It's been an amazing... Let me just say this, it's been an amazing start to the year. I mean, at the same time, it's been really difficult for us but more difficult for some other people that haven't been able to go to work over the last couple of weeks and so on. But it's been a good start to the year, at least for our business. So... >> Well, Larry, congratulations on getting the company off the ground and thank you so much for coming on theCUBE and being part of the Virtual Vertica Big Data Conference. >> Thank you very much. 
>> All right, and thank you everybody for watching. This is Dave Vellante for theCUBE. Keep it right there. We're covering wall-to-wall Virtual Vertica BDC. You're watching theCUBE. (upbeat music)

Published Date : Mar 31 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Larry Lancaster | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Larry | PERSON | 0.99+
Boston | LOCATION | 0.99+
five times | QUANTITY | 0.99+
three times | QUANTITY | 0.99+
six times | QUANTITY | 0.99+
EMC | ORGANIZATION | 0.99+
six | QUANTITY | 0.99+
Zebrium | ORGANIZATION | 0.99+
20 hours | QUANTITY | 0.99+
Glassbeam | ORGANIZATION | 0.99+
Nedap | ORGANIZATION | 0.99+
Vertica | ORGANIZATION | 0.99+
Nimble | ORGANIZATION | 0.99+
Nimble Storage | ORGANIZATION | 0.99+
HP | ORGANIZATION | 0.99+
HPE | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
a year and a half | QUANTITY | 0.99+
Micro Focus | ORGANIZATION | 0.99+
ten times | QUANTITY | 0.99+
two kinds | QUANTITY | 0.99+
two years | QUANTITY | 0.99+
three minutes | QUANTITY | 0.99+
first question | QUANTITY | 0.99+
eight weeks | QUANTITY | 0.98+
Stonebreaker | ORGANIZATION | 0.98+
Prometheus | TITLE | 0.98+
30-40% | QUANTITY | 0.98+
Eon | ORGANIZATION | 0.98+
hundred of users | QUANTITY | 0.98+
One | QUANTITY | 0.98+
Vertica Virtual Big Data Conference | EVENT | 0.98+
Kubernetes | TITLE | 0.97+
first fund | QUANTITY | 0.97+
Virtual Vertica Big Data Conference 2020 | EVENT | 0.97+
AWB West | ORGANIZATION | 0.97+
Virtual Vertica Big Data Conference | EVENT | 0.97+
Honeycomb | ORGANIZATION | 0.96+
SAS | ORGANIZATION | 0.96+
20 years ago | DATE | 0.96+
Both types | QUANTITY | 0.95+
theCUBE | ORGANIZATION | 0.95+
Datadog | ORGANIZATION | 0.95+
two main | QUANTITY | 0.94+
over $6 million dollars | QUANTITY | 0.93+
Hello Kitty | ORGANIZATION | 0.93+
SQL | TITLE | 0.93+
Zebrium | PERSON | 0.91+
Spoke | TITLE | 0.89+
Encore Hotel | LOCATION | 0.88+
InfoSight | ORGANIZATION | 0.88+
Coronavirus | OTHER | 0.88+
one | QUANTITY | 0.86+
less | QUANTITY | 0.85+
Oracles | ORGANIZATION | 0.85+
2020 | DATE | 0.85+
CTO | PERSON | 0.84+
Vertica | TITLE | 0.82+
Nimble InfoSight | ORGANIZATION | 0.81+