Clint Sharp, Cribl | AWS re:Invent 2022


 

(upbeat music) (background crowd chatter) >> Hello, fantastic cloud community and welcome back to Las Vegas where we are live from the show floor at AWS re:Invent. My name is Savannah Peterson. Joined for the first time. >> Yeah, Doobie. >> VIP, I know. >> All right, let's do this. >> Thanks for having me Dave, I really appreciate it. >> I appreciate you doing all the hard work. >> Yeah. (laughs) >> You know. >> I don't know about that. We wouldn't be here without you and all these wonderful stories that all the businesses have. >> Well, when I host with John it's hard for me to get a word in edgewise. I'm just kidding, John. (Savannah laughing) >> Shocking, I've never had that experience. >> We're like knocking each other, trying to, we're elbowing. No, it's my turn to speak, (Savannah laughing) so I'm sure we're going to work great together. I'm really looking forward to it. >> Me too Dave, I feel very lucky to be here and I feel very lucky to introduce our guest this afternoon, Clint Sharp, welcome to the show. You are with Cribl. Yeah, how does it feel to be on the show floor today? >> It's amazing to be back at any conference in person and this one is just electric, I mean, there's like a ton of people here, love the booth. We're having like a lot of activity. It's been really, really exciting to be here. >> So you're a re:Invent alumni? Have you been here before? You're a Cube alumni. We're going to have an OG conversation about observability, I'm looking forward to it. Just in case folks haven't been watching theCUBE for the last nine years that you've been on it. I know you've been with a few different companies during that time period. Love that you've been with us since 2013. Give us the elevator pitch for Cribl. >> Yeah, so Cribl is an observability company, which we're going to talk about today. Our flagship product is a telemetry router. So it just really helps you get data into the right places. We're very specifically in the observability and security markets, so we sell to those buyers and we help them work with logs and metrics and OpenTelemetry, lots of different types of data, to get it into the right systems. >> Why did observability all of a sudden become such a hot thing? >> Savannah: Such a hot topic. >> Right, I mean it just came on the scene so quickly and now it's obviously a very crowded space. So why now, and how do you guys differentiate from the crowd? >> Yeah, sure, so I think it's really a post-digital transformation thing, Dave. When I think about how I interact with organizations, you know, 20 years ago when I started this business I called up American Airlines when things weren't working, and now everything's all done digitally, right? I rarely ever interact with a human being, and yet if I go on one of these apps and I get a bad experience, switching is just as easy as booking another airline or changing banks or changing telecommunications providers. So companies really need an ability to dive into this data at very high fidelity to understand what Dave's experience with their service or their applications is. And for the same reasons on the security side, we need very, very high fidelity data in order to understand whether malicious actors are working their way around inside of the enterprise. And so that's really changed the tooling that we had; in prior years it was really hard to ask arbitrary questions of that data. You really had to deal with whatever the vendor gave you or, you know, whatever the tool came with.
And observability is really an evolution, allowing people to ask and answer questions of their data that they really hadn't planned in advance. >> Dave: Like what kind of questions are people asking? >> Yeah sure, so what is Dave's performance with this application? I see that a malicious actor has made their way on the inside of my network. Where did they go? What did they do? What files did they access? What network connections did they open? And the scale of machine data, of this machine to machine communication, is so much larger than what you tend to see with, like, human generated data, transactional data, that we really need different systems to deal with that type of data. >> And what would you say is your secret sauce? Like some people come at it from search, some come at it from security. What's your sort of superpower, as Lisa likes to say? >> Yeah, so we're a customers-first company. And so one of the things I think that we've done incredibly well is go look at the market and look for problems that are not being solved by other vendors. And so when we created this category of an observability pipeline, nobody was really marketing an observability pipeline at that time. And really the problem that customers had is they have data from a lot of different sources and they need to get it to a lot of different destinations. And a lot of that data is not particularly valuable. And in fact, one of the things that we like to say about this class of data is that it's really not valuable until it is, right? And so if I have a security breach, if I have an outage and I need to start poring through this data, suddenly the data is very, very valuable. And so customers need a lot of different places to store this data. I might want that data in a logging system. I might want that data in a metric system. I might want that data in a distributed tracing system. I might want that data in a data lake. In fact AWS just announced their security data lake product today. >> Big topic all day. >> Yeah, I mean like you can see that the industry is going in this way. People want to be able to store massively greater quantities of data than they can cost effectively do today. >> Let's talk about that just a little bit. The tension between data growth, like you said it's not valuable until it is, or until it's providing context, whether that be good or bad. Let's talk about the tension between data growth and budget growth. How are you seeing that translate in your customers? >> Yeah, well so data's growing at a 25% CAGR per IDC, which means we're going to have two and a half times the data in five years. And when you talk to CISOs and CIOs and you ask them, is your budget growing at a 25% CAGR, absolutely not, under no circumstances am I going to have, you know, that much more money. So what got us to 2022 is not going to get us to 2032. And so we really need different approaches for managing this data at scale. And that's where you're starting to see things like the AWS security data lake; Snowflake is moving into this space. You're seeing a lot of different people kind of moving into the database space for security and observability types of data. You also have lots of other companies that are competing in broad-spectrum observability, companies like Splunk or companies like Datadog. And these guys are all doing it from a data-first approach. I'm going to bring a lot of data into these platforms and give users the ability to work with that data to understand the performance and security of their applications.
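A quick aside on the growth math Clint cites: compounding a 25% annual growth rate is simple arithmetic, and the sketch below (plain Python, no product assumptions) shows where the "two and a half times" ballpark comes from.

```python
# Compounding the IDC growth figure cited above: 25% CAGR means roughly
# 2.4x the data after four years and about 3x after five, so a budget that
# stays flat falls behind quickly.
def growth_multiple(cagr: float, years: int) -> float:
    """Return the data-volume multiple after `years` at a given CAGR."""
    return (1 + cagr) ** years

for years in range(1, 6):
    print(f"year {years}: {growth_multiple(0.25, years):.2f}x today's volume")
```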
>> Okay, so carry that through, and you guys are different how? >> Yeah, so we are this pipeline that's sitting in the middle of all these solutions. We don't care whether your data was originally intended for some other tool. We're going to help you in a vendor-neutral way get that data wherever you need to get it. And that gives them the ability to control cost, because they can put the right data in the right place. If it's data that's not going to be frequently accessed, let's put it in a data lake, the cheapest place we can possibly put that data to rest. Or if I want to put it into my security tool, maybe not all of the data that's coming from my vendor, my vendor has to put all the data in their records because who knows what it's going to be used for. But I only use half or a quarter of that information for security. And so what if I just put the pared-down results in my more expensive storage, but I kept full fidelity data somewhere else. >> Okay so you're observing the observability platforms basically, okay. >> Clint: We're routing that data. >> And then creating- >> It's meta observability. >> Right, observability pipeline. When I think of a data pipeline, I think of highly specialized individuals, there's a data analyst, there's a data scientist, there's a quality engineer, you know, et cetera, et cetera. Do you have specific roles in your customer base that look at different parts of that pipeline, and can you describe that? >> Yeah, absolutely, so one of the things I think that we do differently is we sell very specifically to the security tooling vendors. And so in that case we are, or not to the vendors, but to the customers themselves. So generally they have a team inside of that organization which is managing their security tooling and their operational tooling. And so we're building tooling very specifically for them, for the types of data they work with, for the volumes and scale of data that they work with. And no other vendor is really focusing on them. There's a lot of general purpose data people in the world and we're really the only ones that are focusing very specifically on observability and security data. >> So the announcement today, the security data lake that you were talking about, it's based on the Open Cybersecurity Schema Framework, which I think AWS put forth, right? And said, okay, everybody come on. >> Savannah: Yeah, yeah they did. >> So, right, all right. So what are your thoughts on that? You know, how does it fit with your strategy, you know. >> Yeah, so we are again a customers-first, neutral company. So if OCSF gains traction, which we hope it does, then we'll absolutely help customers get data into that format. But we're kind of this universal adapter, so we can take data from other vendors, proprietary schemas, maybe you're coming from one of the other SIEM vendors and you want to translate that to OCSF to use it with the security data lake. We can provide customers the ability to change and reshape that data to fit into any schema from any vendor, so that we're really giving security data lake customers the ability to adapt the legacy, the stuff that they have that they can't get rid of 'cause they've had it for 10 years, 20 years, and nothing inside of an enterprise ever goes away. That stuff stays forever. >> Legacy. >> Well legacy is working right? I mean somebody's actually, you know, making money on top of this thing. >> We never get rid of stuff. >> No, (laughing) we just added the toolkit. It's like all the old cell phones we have, it's everything. I mean we even do it as individual users and consumers. It's all a part of our little personal library.
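The pared-down-versus-full-fidelity idea Clint describes earlier in this exchange is easy to picture as a routing rule: keep a complete copy in cheap object storage and send only the fields the security tool actually uses to the expensive destination. The sketch below is purely illustrative Python with made-up destination names and fields; it is not Cribl's configuration language or API.

```python
# Illustrative routing sketch: a full-fidelity copy goes to the cheap data
# lake, and only a pared-down subset goes to the expensive security tool.
# Destination names and field choices here are hypothetical.
SECURITY_FIELDS = {"timestamp", "user", "src_ip", "dest_ip", "action"}

def route(event: dict) -> dict:
    routes = {"data_lake": event}  # everything, at the cheapest tier
    pared_down = {k: v for k, v in event.items() if k in SECURITY_FIELDS}
    routes["security_tool"] = pared_down  # maybe a quarter of the original
    return routes

example = {
    "timestamp": "2022-11-30T10:15:00Z",
    "user": "dave",
    "src_ip": "10.0.0.5",
    "dest_ip": "198.51.100.7",
    "action": "allow",
    "raw": "<long verbose log line with everything the vendor emitted>",
}
print(route(example))
```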
>> So what's happened in the field, company momentum? >> Yeah, let's talk trends too. >> Yeah so the company's growing crazily fast. We're north of 400 employees and we were only a hundred and something, you know, a year ago. So you can kind of see we're tripling, you know, year over year. >> Savannah: Casual, especially right now when a lot of companies are feeling that scale back. >> Yeah so obviously we're keeping our eye closely on the macro conditions, but we see such a huge opportunity, because we're a value player in this space, that there's a real flight to value in enterprises right now. They're looking for projects that are going to pay themselves back, and we've always had this value prop, we're going to come give you a lot of capabilities but we're probably going to save you money at the same time. And so that's just really resonating incredibly well with enterprises today and giving us an opportunity to continue to grow in the face of some challenging headwinds from a macro perspective. >> Well, so, okay, so people think okay, security is immune from the macro. It's not, I mean- >> Nothing, really. >> No segment is immune. CrowdStrike announced today, the CrowdStrike rocket ship's still growing ARR 50%, but you know, stock's down, I don't know, 20% right now after our- >> Logically doesn't make- >> Okay stuff happens, but still, you know, it's interesting, the macro, because it was like, to me it's like a slingshot, right? Everybody was like, wow, pandemic, shut down. All of a sudden, oh wow, need tech, boom. >> Savannah: Yeah, digitally transformed today. >> It's like, okay, tap the brakes. You know, when you're driving down the highway and you get that slingshotting effect, and I feel like that's what's going on now. So, the premise is that the real leaders, those guys with the best tech that really understand the customers, are going to, you know, get through this. What are your customers telling you in terms of, you know, their spending patterns, how they're trying to maybe consolidate vendors, and how does that affect you guys? >> Yeah, for sure, I mean, I think, obviously, back to that flight to value, they're looking for vendors who are aligned with their interests. So, you know, as their budgets are getting pressure, what vendors are helping them provide the same capabilities they had to provide to the business before, especially from a security perspective, 'cause they're going to get cut along with everybody else. If a larger organization is trimming budgets across, security's going to get cut along with everybody else. So is IT operations. And so since they're being asked to do more with less, that's, you know, really where we're coming in and trying to provide them value. But certainly we're seeing a lot of pressure from IT departments, security departments all over, in terms of being able to live and do more with less. >> Yeah, I mean, Selipsky's got a great quote today, "If you're looking to tighten your belt the cloud is the place to do it." I mean, it's probably true. >> Absolutely, elastic scalability in this. You know, our new search product is based off of AWS Lambda and it gives you truly elastic scalability. These changes in architectures are what's going to allow it; it's not that cloud is cheaper, it's that cloud gives you on-demand scalability that allows you to truly control the compute that you're spending.
And so as a customer of AWS, like, this is giving us capabilities to offer products that are scalable and cost effective in ways that we just have not been able to do in the cloud. >> So what does that mean for the customer, that you're using serverless, using Lambda? What does that mean for them in terms of what they don't have to do that they maybe had to previously? >> It offers us the ability to try to charge them like a truly cloud native vendor. So in our cloud product we sell a credit model, whereby you deduct credits for usage. So if you're streaming data, you pay for gigabytes. If you're searching data then you're paying for CPU consumption, and so it allows us to charge them only for what they're consuming, which means we don't have to manage a whole fleet of servers, and eventually, will we go to managing our own compute? Quite possibly, as we start to get to scale at certain customers. But Lambda allowed us to not have to launch that way, not have to run a bunch of infrastructure. And we've been able to align our charging model with something that we think is the most customer friendly, which is true consumption, pay for what you consume. >> So for example, you're saying you don't have to configure the EC2 instance or figure out the memory sizing, you don't have to worry about any of that. You just basically say go, it figures that out, and you can focus on upstream, is that right? >> Yep, and not only from a cost perspective but also from a people perspective, it's allowed us velocity that we did not have before, which is we can go and prototype and build significantly faster because we're not having to worry, you know, in our mature products we use EC2 like everybody else does, right? And so as we're launching new products it's allowed us to iterate much faster, and will we eventually go back to running our own compute, who knows, maybe, but it's allowed us a lot faster velocity than we were able to get before. >> What I like about what I've heard you discuss a lot is the agility and adaptability. We're going to be moving and evolving, choosing different providers. You're very outspoken about being vendor agnostic, and I think that's actually a really unique and interesting play, because we don't know what the future holds. So we're doing a new game on that note here on theCUBE, new game, new challenge, I suppose I would call it. Think of this as your 30-second thought leadership highlight reel, a sizzle of the most important topic or conversation, the theme happening here at the show this year. >> Yeah, I mean, for me, as I think, as we're looking, especially like security data lake, et cetera, it's giving customers ownership of their data. And I think that once you, and I'm a big fan of this concept of open observability, and security should be the same way, which is, I should not be locking you in as a vendor into my platform. Data should be stored in open formats that can be analyzed by multiple places. And you've seen this with AWS's announcement, data stored in open formats the same way other vendors store that. And so if you want to plug out AWS and you want to bring somebody else in to analyze your security lake, then great. And as we move into our analysis product, our search product, we'll be able to search data in the security data lake or data that's raw in S3. And we're really just trying to give customers back control over their future so that they don't have to maintain a relationship with a particular vendor. They're always getting the best. And that competition fuels really great product. And I'm really excited for the next 10 years of our industry as we're able to start competing on experiences and giving customers the best products; the customer wins. And I'm really excited about the customer winning.
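On the credit model Clint describes a little earlier, billing per gigabyte streamed and per CPU consumed for search, the sketch below is a hypothetical illustration of the shape of consumption pricing; the credit rates are invented, not real prices.

```python
# Hypothetical consumption-billing sketch: cost tracks gigabytes streamed and
# CPU-seconds spent searching, not provisioned servers. Rates are made up.
CREDITS_PER_GB_STREAMED = 0.5
CREDITS_PER_CPU_SECOND = 0.001

def monthly_credits(gb_streamed: float, search_cpu_seconds: float) -> float:
    return (gb_streamed * CREDITS_PER_GB_STREAMED
            + search_cpu_seconds * CREDITS_PER_CPU_SECOND)

# A quiet month versus a month with a big incident investigation:
print(monthly_credits(gb_streamed=2_000, search_cpu_seconds=10_000))   # 1010.0
print(monthly_credits(gb_streamed=2_000, search_cpu_seconds=500_000))  # 1500.0
```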
>> Yeah, so customer focused, I love it. What a great note to end on. That was very exciting, very customer focused. So, yo Clint, I have really enjoyed talking to you. Thanks. >> Thanks Clint. >> Thanks so much, it's been a pleasure being on. >> Thanks for enhancing our observability over here, I feel like I'll be looking at things a little bit differently after this conversation. And thank all of you for tuning in to our wonderful afternoon of continuous live coverage here at AWS re:Invent in fabulous Las Vegas, Nevada with Dave Vellante. I'm Savannah Peterson. We're theCUBE, the leading source for high tech coverage. (bright music)
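On the "universal adapter" idea from the conversation above: reshaping a legacy, proprietary event so it fits a normalized target schema comes down to field mapping. The field names below are invented for illustration and are not the actual OCSF schema.

```python
# Illustrative reshaping sketch: map a legacy event into a normalized schema
# before landing it in a security data lake. Target field names are invented,
# not the real OCSF schema.
FIELD_MAP = {
    "evt_time": "time",
    "src": "src_ip",
    "dst": "dst_ip",
    "act": "activity",
}

def normalize(legacy_event: dict) -> dict:
    out = {new: legacy_event[old] for old, new in FIELD_MAP.items() if old in legacy_event}
    out["metadata"] = {"original_format": "legacy-firewall-v1"}  # keep provenance
    return out

legacy = {"evt_time": 1669800000, "src": "10.0.0.5", "dst": "198.51.100.7", "act": "deny"}
print(normalize(legacy))
```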

Published Date: Nov 30, 2022



Clint Sharp, Cribl | Cube Conversation


 

(upbeat music) >> Hello, welcome to this CUBE conversation. I'm John Furrier, your host, here in theCUBE in Palo Alto, California, featuring Cribl, a hot startup taking over the enterprise when it comes to data pipelining, and we have a CUBE alumni who's the co-founder and CEO, Clint Sharp. Clint, great to see you again, you've been on theCUBE, you were on in 2013, great to see you, congratulations on the company that you co-founded and are leading as the chief executive officer, over $200 million in funding, doing really strong in the enterprise, congratulations, thanks for joining us. >> Hey, thanks John, it's really great to be back. >> You know, I remember our first conversation, the big data wave coming in, Hadoop World 2010, now the cloud comes in, and really the cloud native really takes data to a whole nother level. You're seeing the old data architectures being replaced with cloud scale. So the data landscape is interesting. You know, Data as Code, you're hearing that term, data engineering teams are out there, data is everywhere, it's now part of how developers and companies are getting value, whether it's real time, or coming out of data lakes, data is more pervasive than ever. Observability is a hot area, there's a zillion companies doing it, what are you guys doing? Where do you fit in the data landscape? >> Yeah, so what I say is that Cribl and our products, we solve the problem for our customers of the fundamental tension between data growth and budget. And so if you look at IDC's data, data's growing at a 25% CAGR, you're going to have two and a half times the amount of data in five years that you have today, and I talk to a lot of CIOs, I talk to a lot of CISOs, and the thing that I hear repeatedly is my budget is not growing at a 25% CAGR, so fundamentally, how do I resolve this tension? We sell very specifically into the observability and security markets, we sell to technology professionals who are operating, you know, observability and security platforms like Splunk, or Elasticsearch, or Datadog, Exabeam, like these types of platforms. They're moving protocols like syslog, they have lots of agents deployed on every endpoint, and they're trying to figure out how to get the right data to the right place, and fundamentally, you know, control cost. And we do that through our product called Stream, which is what we call an observability pipeline. It allows you to take all this data, manipulate it in the stream, and get it to the right place, and fundamentally be able to connect all those things that maybe weren't originally intended to be connected. >> So I want to get into that new architecture if you don't mind, but let me first ask you on the problem space that you're in. So cloud native, obviously instrumentating, instrumenting everything is a key thing. You mentioned data got all these tools, is the problem that there's been a sprawl of things being instrumented and they have to bring it together, or it's too costly to run all these point solutions and get it to work? What's the problem space that you're in? >> So I think customers have always been forced to make trade-offs, John. So the, hey, I have volumes and volumes and volumes of data that's relevant to securing my enterprise, that's relevant to observing and understanding the behavior of my applications, but there's never been an approach that allows me to really onboard all of that data.
And so where we're coming at is giving them the tools to be able to, you know, filter out noise and waste, to be able to, you know, aggregate this high fidelity telemetry data. There's a lot of growing changes, you talk about cloud native, but digital transformation, you know, the pandemic itself and remote work, all these are driving significantly greater data volumes, and vendors unsurprisingly haven't really been all that aligned to giving customers the tools in order to reshape that data, to filter out noise and waste, because, you know, for many of them they're incentivized to get as much data into their platform as possible, whether that's aligned to the customer's interests or not. And so we saw an opportunity to come out and fundamentally, as a customers-first company, give them the tools that they need in order to take back control of their data. >> I remember those conversations even going back six years ago, the whole cloud scale, horizontally scalable applications, you're starting to see data now being stuck in the silos; now to have good, high-quality data you have to be observable, which means you've got to be addressable. So you now have to have a horizontal data plane, if you will. But then you get to the question of, okay, what data do I need at the right time? So is the Data as Code, data engineering discipline changing what new architectures are needed? What changes in the mind of the customer once they realize that they need this new way to pipe data and route data around, or make it available for certain applications? What are the key new changes? >> Yeah, so I think one of the things that we've been seeing in addition to the advent of the observability pipeline that allows you to connect all the things, is also the advent of an observability lake as well, which is allowing people to store massively greater quantities of data, and also different types of data. So data that might not traditionally fit into a data warehouse, or might not traditionally fit into a data lake architecture, things like deployment artifacts, or things like packet captures. These are binary types of data that, you know, aren't designed to work in a database, but yet they want to be able to ask questions like, hey, during the Log4Shell vulnerability, which of all my deployment artifacts actually had Log4j in it in an affected version? These are hard questions to answer in today's enterprise. Or they might need to go back to full fidelity packet capture data to try to understand, you know, a malicious actor's movement throughout the enterprise.
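The Log4Shell question Clint uses as an example, which deployment artifacts contain an affected Log4j version, is exactly the kind of ad hoc query an observability lake is meant to make answerable. The sketch below runs against a hypothetical artifact inventory; the data, field names, and simple version check are illustrative, not a real scanner.

```python
# Illustrative sketch: scan a hypothetical artifact inventory for Log4j
# versions in the range commonly cited for CVE-2021-44228 (2.x up to 2.14.1).
ARTIFACTS = [
    {"name": "billing-service", "dependencies": {"log4j-core": "2.14.1"}},
    {"name": "teller-app",      "dependencies": {"log4j-core": "2.17.1"}},
    {"name": "batch-exporter",  "dependencies": {"logback": "1.2.11"}},
]

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def affected(version: str) -> bool:
    v = parse(version)
    return (2,) <= v <= (2, 14, 1)

hits = [a["name"] for a in ARTIFACTS
        if affected(a["dependencies"].get("log4j-core", "0"))]
print(hits)  # ['billing-service']
```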
And we're not seeing, you know, we're seeing vendors who have great log indexing engines, and great time series databases, but really what people are looking for is the ability to store massive quantities of data, five times, 10 times more data than they're storing today, and they're doing that in places like AWS S3, or in Azure Blob Storage, and we're just now starting to see the advent of technologies that can help them query that data, and technologies that are generally more specifically focused at the type of persona that we sell to, which is a security professional, or an IT professional who's trying to understand the behaviors of their applications. And we also find that, you know, general-purpose data processing technologies are great for the enterprise, but they're not working for the people who are running the enterprise, and that's why you're starting to see the concepts like observability pipelines and observability lakes emerge, because they're targeted at these people who have a very unique set of problems that are not being solved by the general-purpose data processing engines. >> It's interesting, as you see the evolution of more data volume, more data gravity, then you have these specialty things that need to be engineered for the business. So sounds like observability lake and pipelining of the data, the data pipelining, or Stream you call it, these are new things that they bolt into the architecture, right? Because they have business reasons to do it. What's driving that? Sounds like security is one of them. Are there others that are driving this behavior? >> Yeah, I mean it's the need to be able to observe applications and observe end-user behavior in fine-grained detail. So, I mean, I often use examples of like bank teller applications, or perhaps, you know, the app that you're using to, you know, I'm going to be flying in a couple of days. I'll be using their app to understand whether my flight's on time. Am I getting a good experience in that particular application? Answering the question of is Clint getting a good experience requires massive quantities of data, and your application and your service, you know, I'm going to sit there and look at, you know, American Airlines, which I'm flying on Thursday, I'm going to be judging them based off of my experience. I don't care what the average user's experience is, I care what my experience is. And if I call them up and I say, hey, and especially for the enterprise, usually this is much more for, you know, in-house applications and things like that, they call up their IT department and say, hey, this application is not working well, I don't know what's going on with it, and they can't answer the question of what was my individual experience, they're living with, you know, data that they can afford to store today. And so I think that's why you're starting to see the advent of these new architectures, is because digital is so absolutely critical to every company's customer experience, that they're needing to be able to answer questions about an individual user's experience, which requires significantly greater volumes of data, and because of significantly greater volumes of data, that requires entirely new approaches to aggregating that data, bringing the data in, and storing that data. >> Talk to me about enabling customer choice when it comes to controlling their data. You mentioned that before we came on camera that you guys are known for choice. How do you enable customer choice and control over their data?
>> So I think one of the biggest problems I've seen in the industry over the last couple of decades is that vendors come to customers with hugely valuable products that make their lives better, but it also requires them to maintain a relationship with that vendor in order to be able to continue to ask questions of that data. And so customers don't get a lot of optionality in these relationships. They sign multi-year agreements; they look to try to start another, they want to go try out another vendor, they want to add new technologies into their stack, and in order to do that they're often left with a choice of, well, do I roll out, like, get another agent, do I go touch 10,000 computers, or a 100,000 computers, in order to onboard this data? And what we have been able to offer them is the ability to reuse their existing deployed footprints of agents and their existing data collection technologies, to be able to use multiple tools and use the right tool for the right job, and really give them that choice, and not only give them the choice once, but with the concepts of things like the observability lake and replay, they can go back in time and say, you know what? I wanted to rehydrate all this data into a new tool. I'm no longer locked in to the way one vendor stores this, I can store this data in open formats, and that's one of the coolest things about the observability lake concept, is that customers are no longer locked in to any particular vendor, the data is stored in open formats and so that gives them the choice to be able to go back later and choose any vendor, because they may want to do some AI or ML on that type of data and do some model training. They may want to be able to forward that data to a new cloud data warehouse, or try a different vendor for log search or a different vendor for time series data. And we're really giving them the choice and the tools to do that in a way which was simply not possible before. >> You know, you are bringing up a point that's a big part of the upcoming AWS startup series Data as Code, the data engineering role has become so important and the word engineering is a key word in that, but there's not a lot of them, right? So like how many data engineers are there on the planet, and hopefully more will come in, come from these great programs in computer science, but you got to engineer something, but you're talking about developing on data, you're talking about doing replays and rehydrating, this is developing. So Data as Code is now a reality, how do you see Data as Code evolving from your perspective? Because it implies DevOps, Infrastructure as Code was DevOps, if Data as Code then you got DataOps, AIOps has been around for a while, what is Data as Code? And what does that mean to you Clint? >> I think for our customers, one, it means a number of, I think, sort of after-effects that maybe they have not yet been considering. One you mentioned, which is it's hard to acquire that talent. I think it is also increasingly more critical that people who were working in jobs that used to be purely operational are now being forced to learn, you know, developer centric tooling, things like Git, things like CI/CD pipelines. And that means that there's a lot of education that's going to have to happen, because the vast majority of the people who have been doing things in the old way from the last 10 to 20 years, you know, they're going to have to get retrained and retooled. And I think that's a huge opportunity for people who have that skillset, and I think that they will find that their compensation will be directly correlated to their ability to have those types of skills, but it also represents a massive opportunity for people who can catch this wave and find themselves in a place where they're going to have a significantly better career and more options available to them.
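One way to picture the "Data as Code" thread, operations people adopting Git and CI/CD, is a data-shaping rule that lives in version control and ships with a test, just like application code. This is a generic, hypothetical illustration of the practice, not a feature of any particular product.

```python
# Hypothetical "Data as Code" sketch: a pipeline rule kept in Git and
# exercised by a CI test before it is allowed to ship.
def redact_card_numbers(event: dict) -> dict:
    """Drop a sensitive field before the event leaves the pipeline."""
    cleaned = dict(event)
    cleaned.pop("card_number", None)
    return cleaned

def test_redact_card_numbers():
    event = {"user": "dave", "card_number": "4111-1111-1111-1111"}
    cleaned = redact_card_numbers(event)
    assert "card_number" not in cleaned
    assert cleaned["user"] == "dave"

if __name__ == "__main__":
    test_redact_card_numbers()
    print("pipeline rule test passed")
```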
>> Yeah, and I've been thinking about what you just said about your customer environment having all these different things like Datadog and other agents. Those people that rolled those out can still work there, they don't have to rip and replace and then get new training on the new multiyear enterprise service agreement that some other vendor will sell them. You come in and it sounds like you're saying, hey, stay as you are, use Cribl, we'll have some data engineering capabilities for you, is that right? Is that? >> Yup, you got it. And I think one of the things that's a little bit different about our product and our market, John, from kind of general-purpose data processing is, for our users, they're often responsible for many tools and data engineering is not their full-time job, it's actually something they just need to do now, and so we've really built a tool that's designed for your average security professional, your average IT professional. Yes, we can utilize the same kind of DataOps techniques that you've been talking about, CI/CD pipelines, GitOps, that sort of stuff, but you don't have to, and if you're really just already familiar with administering a Datadog or a Splunk, you can get started with our product really easily, and it is designed to be able to be approachable to anybody with that type of skillset. >> It's interesting, when you're talking you've reminded me of the big wave that was coming, it's still here, shift left meant security from the beginning. What do you do with data, shift up, right, down? Like what do you, what does that mean? Because what you're getting at here is that if you're a developer, you have to deal with data but you don't have to be a data engineer, but you can be, right? So we're getting in this new world. Security had that same problem. Had to wait for that group to do things, creating tension on the CI/CD pipelining, so the developers who are building apps had to wait. Now you got shift left, what is data, what's the equivalent of the data version of shift left? >> Yeah, so we're actually doing this right now. We just announced a new product a week ago called Cribl Edge. And this is enabling us to move processing of this data, rather than doing it centrally in the stream, to actually push this processing out to the edge, and to utilize a lot of unused capacity that you're already paying AWS, or paying Azure for, or maybe in your own data center, and utilize that capacity to do the processing rather than having to centralize and aggregate all of this data. So I think we're going to see a really interesting shift, and left from our side is towards the origination point rather than anything else, and that allows us to really unlock a lot of unused capacity and continue to drive the kind of cost down to make more data addressable, back to the original thing we talked about, the tension between data growth. If we want to offer more capacity to people, if we want to be able to answer more questions, we need to be able to cost-effectively query a lot more data. >> You guys had great success in the enterprise with what you got going on.
Obviously the funding is just the scoreboard for that. You got good growth, what are the use cases, or what's the customer look like that's working for you where you're winning, or maybe said differently, what pain points are out there the customer might be feeling right now that Cribl could fit in and solve? How would you describe that ideal persona, or environment, or problem, that the customer may have that they say, man, Cribl's a perfect fit? >> Yeah, this is a person who's working on tooling. So they administer a Splunk, or an Elastic, or a Datadog, they may be in a network operations center, a security operations center, they are struggling to get data into their tools, they're always at capacity, their tools always at the redline, they really wish they could do more for the business. They're kind of tired of being this department of no, where everybody comes to them and says, "hey, can I get this data in?" And they're like, "I wish, but you know, we're all out of capacity, and you know, we wish we could help you but we frankly can't right now." We help them by routing that data to multiple locations, we help them control costs by eliminating noise and waste, and we've been very successful at that in, you know, logos, like, you know, like a Shutterfly, or a, blanking on names, but we've been very successful in the enterprise, that's not great, and we continue to be successful with major logos inside of government, inside of banking, telco, et cetera. >> So basically it used to be the old hyperscalers, the ones with the data-full problem, now everyone's got the, they're full of data and they got to really expand capacity and have more agility and more engineering around contributions of the business, sounds like that's what you guys are solving. >> Yup, and hopefully we help them do a little bit more with less. And I think that's a key problem for our enterprises, is that there's always a limit on the number of human resources that they have available at their disposal, which is why we try to make the software as easy to use as possible, and make it as widely applicable to those IT and security professionals who are, you know, kind of your run-of-the-mill tools administrator; our product is very approachable for them. >> Clint, great to see you on theCUBE here, thanks for coming on. Quick plug for the company, you guys looking for hiring, what's going on? Give a quick update, take 30 seconds to give a plug. >> Yeah, absolutely. We are absolutely hiring, cribl.io/jobs, we need people in every function from sales, to marketing, to engineering, to back office, G&A, HR, et cetera. So please check out our job site. If you are interested in learning more you can go to cribl.io. We've got some great online sandboxes there which will help you educate yourself on the product, our documentation is freely available, you can sign up for up to a terabyte a day on our cloud, go to cribl.cloud and sign up free today. The product's easily accessible, and if you'd like to speak with us we'd love to have you in our community, and you can join the community from cribl.io as well. >> All right, Clint Sharp, co-founder and CEO of Cribl, thanks for coming to theCUBE. Great to see you, I'm John Furrier, your host, thanks for watching. (upbeat music)
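To make the open-formats and replay ideas from this conversation concrete: telemetry written as plain newline-delimited JSON to object storage can be read back later by any tool, with no vendor lock-in. In the sketch below a local directory stands in for an object store such as S3; paths and fields are illustrative.

```python
# Illustrative replay sketch: archive events in an open format (NDJSON) and
# rehydrate them later into whatever tool you choose. A local directory
# stands in for an object store like S3.
import json
from pathlib import Path

LAKE = Path("observability-lake")
LAKE.mkdir(exist_ok=True)

def archive(events: list, partition: str) -> Path:
    path = LAKE / f"{partition}.ndjson"
    with path.open("w") as f:
        for event in events:
            f.write(json.dumps(event) + "\n")
    return path

def replay(partition: str):
    with (LAKE / f"{partition}.ndjson").open() as f:
        for line in f:
            yield json.loads(line)  # hand each event to the tool of your choice

archive([{"level": "error", "msg": "login failed", "user": "dave"}], "2022-03-31")
print(list(replay("2022-03-31")))
```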

Published Date: Mar 31, 2022

