

Steven Czerwinski & Jeff Lo, Scalyr | Scalyr Innovation Day 2019


 

>> From San Mateo, it's theCUBE, covering Scalyr Innovation Day. Brought to you by Scalyr. >> Welcome to this special on-the-ground Innovation Day. I'm John Furrier, host of theCUBE. We're here at Scalyr's headquarters in San Mateo, California, in the heart of Silicon Valley, with co-founder Steve Czerwinski and Jeff Lo, product marketing director. Thanks for joining us. >> Thanks for having us. >> Thank you. >> It's been a great day so far — talked to the other co-founders and the team here. Great product opportunity. You guys have been around for a couple of years, got a lot of customers, just newly minted with a Series A, and in standard startup terms that seems early, but you're far along, and you have a unique architecture. What's so unique about the architecture? >> Well, there are really three elements of the architecture's design that I would highlight that differentiate us from our competitors, three things that really set us apart. I think the biggest one is our use of a columnar database. This is what allows us to provide a really superior search experience even though we're not using keyword indexing. It's purpose-built for this problem domain and gives us great performance at scale. The second thing I would highlight is that we are essentially a cloud-native solution. We've been architected in such a way that we can leverage the great advantages of cloud — the scalability that cloud gives you, the elasticity that cloud gives you — and our architecture was built from the ground up to take advantage of that. And finally I would point out the way we handle our data: we don't silo data by data type. Essentially any type of observability data, whether it's logs or tracing or metrics, all of that comes into one platform, and that provides really superior query performance. >> And we talked earlier about discoverability. I want to quickly ask you about the keyword indexing and the cloud-native piece, because to me those seem to be the two big pieces. A lot of the older, still-current standards — people who were state of the art a few years ago, ten years ago — relied heavily on keyword indexing, and cloud native was still emerging except for those folks that were born in the cloud. So this is a dynamic. How important is that? >> Oh, it's just critical. I mean, when we go to the whiteboard I love to talk about this in a little more detail. In particular, let's talk about keyword indexing, right? Because you're right, this is a lot of the technology that people leverage right now; it's what all of our competitors do. With keyword indexing, let's look at this from the point of view of a log ingestion pipeline. In your first stage you have your input, right? You've got your raw logs coming in. The first thing you do after that, typically, is parse: you're going to parse out whatever fields you want from your logs. Now, all of our competitors, after they do that, do an indexing step. Okay, this has a lot of expense to it — in fact, I'm going to dig into that. After the log content is indexed, it's finally available for search, where it will be returned as a search result. Okay, this one little box, this little index box, actually has a lot of cost associated with it. It contributes to the bloat of storage, it contributes to the cost of the overall product. In fact, that's why all of our competitors charge you based on how much you're indexing, or even how much you're ingesting.
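To make the parse stage Steve sketches above concrete, here is a minimal illustration of pulling fields out of a raw access-log line before anything downstream (indexing or otherwise) happens. The log format, regex, and field names are assumptions for the example, not Scalyr's actual parsing rules.

```python
import re
from typing import Optional

# Assumed Apache/Nginx "combined" access log format; real pipelines vary.
LINE_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<uri>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

def parse_line(line: str) -> Optional[dict]:
    """Parse one raw log line into named fields; return None if it doesn't match."""
    match = LINE_PATTERN.match(line)
    return match.groupdict() if match else None

sample = ('203.0.113.7 - - [30/May/2019:10:12:01 +0000] '
          '"GET /checkout HTTP/1.1" 502 173 "-" "Mozilla/5.0"')
print(parse_line(sample))  # {'ip': '203.0.113.7', ..., 'status': '502', ...}
```

In a keyword-indexing pipeline, an indexing step would follow this parse; the point Steve makes is that skipping that step removes the cost loop he describes next.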
When you look at the cost of indexing, I think you can break it down into a few different categories. First of all, building the index: there's a certain cost to just taking this data, building the index, and storing it — compute, storage, memory, everything. Okay, but you build the index in order to get superior query performance, right? So that tells you that you're going to have another cost: an optimization cost, where the indexes you build depend on the queries your users want to run, because you're trying to make sure you get as good query performance as possible. So you have to look at the queries your users are performing and the types of logs that are coming in, and decide what indexing you want to do. And that cost is shouldered by the customer. Okay, but nothing is static in this world. At some point your logs are going to change, the type of logs you're ingesting is going to change, maybe your queries are going to change. And so you have another category of cost, which is maintenance: you're going to have to react to changes in your infrastructure, its usage, the types of logs you're ingesting. Basically, this creates a whole big loop where you have to keep an eye on your performance and be constantly optimizing and maintaining, just going around in that circle, right? And for us, we just thought that was ridiculous, because all this cost is being borne by the customer. So when we designed the system, we wanted to get rid of that. >> That's the classic shark fin — you see a fin on anything, a great white's going to eat you up; or the iceberg — you see the tip, you don't see what's underneath. This seems to be the key problem, because the trend is more data: new data, microservices are going to throw off new data types, so the number of types is going up as well. Is that consistent with what you see? >> That's consistent. What we hear from our customers is that they want flexibility, right? These are customers building service-oriented, highly scalable applications on top of new infrastructure. They're reacting to changes everywhere, so they don't want to have to optimize their queries, and they're not going to want to maintain things. They just want a search product that works — that works over everything they're ingesting. >> So, good plan: you eliminate that flywheel of cost from the index. But you guys use a proprietary columnar store — that's the key? >> That's the key, and it gives us flexibility on data types. And here, let me draw a little something to highlight that, because of course it begs the question: okay, if we're not doing keyword indexing, what do we do? What we do, actually, is leverage decades of research in distributed systems and columnar databases, and I'll use an example or two.
>> People who know the data world know that's super fast — it's like a Ferrari. >> Yes, it's a Ferrari, because you're able to do much more targeted analysis on the data you want to be searching over, right? And one way to look at this is — let's take a look at a web access log. When we think about this as a table, each line in the table represents a particular entry from the access log, and your columns represent the fields you've extracted. So, for example, one of the fields you might extract is the HTTP status code — you know, was it a success or not? Or you might have the URI, or the user agent of the incoming web request. Now, if you're not using a columnar database approach, to execute a query where you're trying to count the number of non-200s your web server has responded with, you'd have to load in all the data for the table, right? And that's just overkill. In a columnar database, essentially what you do is organize your data so that each column is saved as a separate file. So if I'm doing a search where I just want to count the number of non-200s, I only have to read in those bytes. And when your main bottleneck is sloshing bytes in and out of main RAM, that gives you orders of magnitude better performance. We've built this optimized engine that does essentially this at its core, and does it really well, really fast, leveraging columnar database technology. >> So it lowers the overhead — you don't have to load the whole table in, which takes time, and querying the whole table takes time. That seems to be the upgrade. >> That's exactly right. >> Awesome. All right, Jeff, you're the director of product marketing, and you've got a genius pool of co-founders here at Scalyr — been there, done that, all with successful track records as tech entrepreneurs; not their first rodeo. Making it all work and getting it packaged for customers is the challenge, and you guys have been successful at it. What does it all mean? >> Yeah, it essentially means helping them explore and discover their data a lot more effectively than they have before. With applications and infrastructure becoming much more complex and much more distributed, our engineering customers are finding it increasingly difficult to find answers. All of this technology we've built is specifically designed to help them do that with much greater speed, much greater ease, much more affordably, and at scale. We always like to say we're fast, easy, and affordable, at scale. >> You know, I noticed in getting to know you guys and interviewing people around the company that the tagline "built by engineers, for engineers" is interesting. You're all super nerdy and geeky, so you geek out and take pride in the tech and the code. But your buyers are also engineers, because they're dealing with cloud-native, a whole other DevOps level of scale, where they love infrastructure as code — that's the ethos of that market — and speed and scale are what they live for; that's their competitive advantage in most cases. How do you hit that point? What's the alignment with customers on scale and speed? >> Yeah, it comes back to a couple of the things Steven mentioned: the columnar database, and cloud native — we like to refer to that as massively parallel, or true multi-tenancy in the cloud. Those two things give us two key advantages when it comes to speed. First, speed of ingest — that goes back to what Steven was saying about the columnar database: we're not having to wait to build the index, so we can ingest orders of magnitude faster than traditional solutions. Whereas a conventional solution might take minutes, even up to hours, to ingest large sets of data, we can literally do it in seconds, and the data is available immediately for search.
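A minimal sketch of the column-oriented counting Steve walks through at the whiteboard above: when each extracted field is stored as its own column (conceptually, its own file), counting non-200 responses only has to touch the status column rather than every full row. The layout and field names here are illustrative assumptions, not Scalyr's internal format.

```python
# Row-oriented layout: the query has to touch every field of every entry.
rows = [
    {"status": 200, "uri": "/home",     "user_agent": "Mozilla/5.0"},
    {"status": 502, "uri": "/checkout", "user_agent": "Mozilla/5.0"},
    {"status": 200, "uri": "/home",     "user_agent": "curl/7.64"},
    {"status": 404, "uri": "/favicon",  "user_agent": "Mozilla/5.0"},
]
non_200_row_scan = sum(1 for row in rows if row["status"] != 200)

# Column-oriented layout: each field lives in its own array (on disk, its own file),
# so the same count reads only the status column's bytes.
columns = {
    "status":     [200, 502, 200, 404],
    "uri":        ["/home", "/checkout", "/home", "/favicon"],
    "user_agent": ["Mozilla/5.0", "Mozilla/5.0", "curl/7.64", "Mozilla/5.0"],
}
non_200_column_scan = sum(1 for status in columns["status"] if status != 200)

assert non_200_row_scan == non_200_column_scan == 2
```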
One of our customers, in fact — one I'm thinking of down in Australia — actually uses our live tail, because it actually works: as they push code out to production, they can monitor what happens and see whether the changes are impacting anything positively or negatively. >> And "speed to truth" — a tagline the marketing people came up with, which is cool. I love that; it's kind of our philosophy too: get the content out there and let the people decide. But in your business, ingestion is critical. Getting the ingestion-to-value timeframe nailed down is table stakes. Engineers want to test stuff; if it doesn't work out of the box — if they ingest and don't see value — they're not going to go to the next level. That's the psychology of the customer. >> Yeah. When you're pushing code on an hourly basis, sometimes even every few minutes now, the last thing you want to do is wait for your data in order to analyze it, especially when a problem occurs. When a problem occurs and it's impacting a customer or impacting your overall business, you immediately go into firefighting mode, and you just can't wait for that data to become available — so speed to ingest becomes critical. The other aspect of the speed topic is speed to search. We talked about the types of searches the columnar database affords us; couple that with the massively parallel, true multi-tenancy approach and it means you can do very ad hoc searches extremely quickly. You don't have to build the keyword index, you don't even have to learn how to build elaborate queries, run them, wait for them — and maybe in the meantime go get a coffee or something like that. >> I mean, we grew up with Google search. Everyone who's used the web knows what search and discovery are — discovery and navigation is kind of the industry phrase. But one of the things about search that made Google great was relevance. You guys seem to have that same ethos around data: discoverability, speed, and relevance. Talk about the relevance piece, because to me that's what everyone's trying to figure out as more data comes in. You mentioned some of the advantages, Steven, around the complexity of data types — more data types are coming on — so relevance is what everyone's chasing. >> So one of the things I think we are very good at is helping people discover what is relevant. There are solutions out there — in fact, a lot of solutions — that focus on summarizing data, letting you easily monitor with a set of metrics, or even trace a single transaction from point A to point B through a set of services. Those are great for telling you that there is a problem, or that the problem exists maybe in this one service or on this one server. But where we really shine is understanding why something has happened, why a problem has occurred. And the ability to explore and discover through your data is what helps us get to that relevancy.
>> I remember meeting Larry and Sergey back in 1998, and from day one it was "find what you're looking for" — and they did their thing. So I want to quickly have you guys explain something else that has come up, and I'd love to get your take on it: multi-tenancy. You're in the cloud to get a lot of scale and elastic resources. Why is multi-tenancy an important piece, and what does it specifically mean for the customer vis-a-vis potentially competitive solutions? What do you guys bring to the table? That seems to be an important discussion point. >> Sure. It is one of the key pieces of our architecture. When we talk about being designed for the cloud, this is a central part of that, right? When you look at our competitors, for the most part a lot of them have taken existing open source, off-the-shelf technologies and shoved them into this square hole of "let's run in the cloud." So they're building these SaaS services where essentially they pretend everyone's got access to a lot of resources, but under the covers they're spinning up those open source instances for each customer, and each instance is only provisioned with enough RAM and CPU for that customer's needs. Heaven forbid you try to issue more queries than you normally do, or use more storage than you normally do, because your instance will just be capped out, right? And it's also kind of inefficient, in that when your users aren't issuing queries, those CPU and RAM resources are just sitting there idle. Instead, what we've done is build a system where we essentially have a big pool of resources — a big pool of CPU, a big pool of RAM, a big pool of disk. Everyone comes in and gets access to that. So it doesn't matter which customer you are: your queries get full access to all of the CPUs we have running. That's the core of multi-tenancy — we're not provisioning for each individual customer; we have a big pool of resources that everyone gets to leverage. >> And that's going to hit the availability question, and it also has a side effect for all those app developers who want to build AI and use data to build these microservices systems: they get the benefit, because you have that closed loop — that flywheel, if you will. >> Yeah, and if I can just add: the multi-tenancy really gives us a lot of economies of scale, both from avoiding over-provisioning and from the ability to use resources really effectively. We also have the ability to pass those savings on to our customers, so there's the affordability piece, which I think is extremely important, and it's this architecture that enables it. >> Steven, I want to ask you — because I know the DevOps world pretty well, and those people are hardcore. They build their own stuff; they don't want to have a vendor: "I can do this myself." That always comes up. But you guys seem to be doing well in that environment — again, an engineering-led solution, which I think gives you a great advantage. How do you handle the objection when you hear someone say, "Well, I could just go do it myself"? >> What I always like to point out is: yes, you can, up to a degree. We often hear from people using open source technologies like ELK. They can get that running, and they can run it up to a certain scale — tens of gigabytes of logs per day, they're fine. But with those technologies, once you go above a certain scale, it just becomes a lot more difficult to run. It's one of those classic things: getting 50% of the way there is easy, getting 80% of the way there is a lot harder, and getting to 100% is almost impossible, right?
And you, as whatever company you are, building whatever product you're building — do you really want to spend your engineering resources pushing through that curve, getting from an 80% solution to a truly great one? What we always pitch is: look, we've already solved these hard problems for this problem domain. Come leverage our technology; you don't have to spend your engineering capital on that. >> And the people who need the scale you guys provide need those engineering resources somewhere else. So I have to ask a follow-up question: how does the customer know whether they have a non-scalable or a scalable solution? Because some of these SaaS services are masquerading as scalable solutions. >> They are. We actually encourage our customers, when they're in the pre-sales stage, to benchmark against us. We have a customer right now sending us terabytes of data per day as a trial, just to show we can meet the scale they need. We encourage those same customers to go ask the other competitors to do that — and, you know, the proof is in the pudding. >> And how do the results look? >> Good. >> So bring on the ingest. >> Yes — that's the sales pitch. >> Guys, thanks so much for sharing the insight. Steven, appreciate it. Jeff, thanks for sharing. I'm John Furrier with theCUBE, here for a special Innovation Day at Scalyr's headquarters in the heart of Silicon Valley — San Mateo, California. Thanks for watching.
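Steve's pooled, multi-tenant query model — one big shared pool of compute attacking whichever query is running, rather than a small per-customer instance — can be sketched roughly as below. The chunking scheme and pool size are assumptions for illustration only, not Scalyr's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# One shared pool of workers, available in full to whichever query runs right now.
SHARED_POOL = ThreadPoolExecutor(max_workers=8)

def count_matches(chunk: list, needle: str) -> int:
    """Brute-force scan of one chunk of log lines; no index required."""
    return sum(1 for line in chunk if needle in line)

def parallel_query(log_lines: list, needle: str, chunks: int = 8) -> int:
    """Fan one query out across the whole pool and combine the partial counts."""
    size = max(1, len(log_lines) // chunks)
    pieces = [log_lines[i:i + size] for i in range(0, len(log_lines), size)]
    futures = [SHARED_POOL.submit(count_matches, piece, needle) for piece in pieces]
    return sum(f.result() for f in futures)

logs = [f"req id={i} status={'502' if i % 50 == 0 else '200'}" for i in range(10_000)]
print(parallel_query(logs, "status=502"))  # 200
```

The design point is capacity sharing: because every query can borrow the whole pool, one customer's spike is small relative to total capacity, which a per-customer single-tenant instance cannot offer.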

Published Date : May 30 2019


Casey Clark, Scalyr | Scalyr Innovation Day 2019


 

>> From San Mateo, it's theCUBE, covering Scalyr Innovation Day. Brought to you by Scalyr. >> I'm John Furrier with theCUBE. We're here for an Innovation Day at Scalyr's headquarters in San Mateo, California, profiling the hot startups, the technology leaders, and their value propositions. Our next guest is Casey Clark, chief customer officer for Scalyr. Great to see you. >> Great to see you as well. Thanks for having us. >> Thanks for coming in. So let's talk about the customer value proposition — let's get right to it. Who are your customers? Who are you targeting? Give some examples of what they're doing with you. >> We sell primarily to engineering-driven companies. The top dog is the CTO. They were born in the cloud or are moving heavily toward the cloud; they're using things like microservices and Kubernetes, maybe starting to look at serverless. Really forward-thinking, engineering-driven businesses are where we start. Some of the companies we work with: CareerBuilder, Scripps Networks, Discovery networks — a lot of modern e-commerce, media, B2B and B2C SaaS businesses as well. >> I want to drill down on that a little bit later, but basically born in the cloud — that seems to be a big part of it. Cloud native. >> Absolutely. >> All right, so you're a startup, Series A funded, which in Silicon Valley terms is right out of the gate. Talk about the status of the product, the evolution of the value proposition, the stage you're in. You are in market, selling to customers actively — what's the status of the product from a customer's standpoint? >> Sure. We've got over 300 customers, so we're fairly mature in terms of product-market status. We were very fortunate to land some very large customers that pushed us when we were seven or so employees, maybe three or four years ago, and that forced us to mature very quickly. Large enterprises — we have one customer, Zalando, in Germany; they're one of the largest commerce businesses in Europe, with a couple of thousand engineers using the product on a weekly basis, and we landed them when we were seven employees, three or four years ago. That forced maturity made it very easy for us to go to other enterprises and say: yes, we can work with you, and here are the proof points on how we've helped this business mature — how we've improved their speed to truth, their time to answer, whenever they have issues. >> So to back up the playbook: early on, when you had seven folks and were growing, was it beta, or was it commercially available? When was the tipping point to commercially available? >> That probably tipped when I joined, a little under four years ago. I had to convince Steve that he was ready to sell this product — as you'd expect with a technical founder, he never thought the product was ready to go — but he already had maybe a dozen friends-and-family customers. So I came in, went to my network, and started figuring out who was the right fit for this, and we immediately found traction. The product just stood up, and we started pushing.
>> And you're attracting some good talent — just looking around, Valley tech leaders are joining you, which is a great sign, along with the talent coming in on the customer side. Lots has changed in four years. The edge of the network, digital transformation — it's been a punchline, kind of a cliche, but now I think it's more real as people see the power of scale, of cloud and on-premises; hybrid and multi-cloud are being validated. What is the current customer profile when you look at pure cloud versus on-premise — are you seeing different traction points? Can you share a little color on that? >> Yeah. I talked a little bit about our ideal customer profile being those four categories — e-commerce, media, B2B SaaS, B2C SaaS. Most of these companies are running some production workloads in the cloud, and probably the majority are in the cloud. When we started this thing and it was only eight of us, Azure was never talked about; now we're seeing significant traction with Azure, and in specific regions — Southeast Asia — GCP is very hot, with high demand there. And then, with the proliferation of microservices, Kubernetes has absolutely taken off. I'll raise my hand and say I wasn't sure whether it was going to be Kubernetes or Mesos two years ago — I would have bet on Mesos. Thank God we didn't bet the company on that; we went with Kubernetes. So we're seeing a lot more of these distributed workloads and distributed team development. >> Yeah, that's got a lot of headroom now — KubeCon was just last week, so it's interesting to see the growth of that whole ecosystem, and service meshes are right around the corner. Microservices are going to mean more data. >> Yeah, for sure, and that's one of the big problems we run into with logs: people say they're too voluminous — it's either too hard to search through them or too expensive, and they don't know what to do with them. So they try to find other ways to get observability, and you see the growth of the metrics companies like Datadog — infrastructure monitoring, a phenomenal company — and lots of tracing companies coming out. Really, they're coming out because there are so many logs that it's either too expensive, too hard, or too slow to search through all that data — which is where your answers live — so they're extracting and summarizing to try to minimize the amount of search you have to do. >> Talk about the competition, because you mentioned a few of them — Splunk is out there as well; they went public years ago and it's a different price point, we get that. Why can't they scale to the level you have, and how do you compare to them? I know that space is getting larger, but what's different about you guys versus the competition? >> Absolutely. This is one of the reasons I joined the company. What excites me most is that I get to go talk to engineers and just talk shop; I don't really talk about the business value quite as much — we get there at some point, obviously — but we made some very key decisions early in the company's history, really before the company started, around two main back-end architectural decisions. One: we don't use Elasticsearch; we're not using any sort of keyword indexing, which is what almost every logging tool uses on the back end. Keyword indexes and Elasticsearch are great for human-legible words and relatively stable lists, where you're not looking through infinite amounts of high-cardinality machine data.
So we made an optimized decision to use a NoSQL approach — a proprietary columnar database. That's one aspect: how we process and store the data is highly efficient. The other piece is that we're a SaaS business, but we're true SaaS — true multi-tenant. When you put a query into Scalyr, every CPU core on every server is executing just that query, very similar to the way Google Search works. So not only do we get better performance, we get better cost and better scalability across all of our customers. >> And you sell to an engineering-led buyer, and you mentioned that a lot of SaaS companies trying to sell into that market bump into people who want to build their own: "I don't need your help." I think "I might get fired" or "it might make me look good" — that seems to be a go-to-market dynamic, and a consumption piece. What's your response to that? How has that fared for you? >> Engineers want to engineer, whether it's the right thing or not, right? So that is always hard, and I can't come in and tell you your baby's ugly, because your baby is beautiful in your eyes — that's a hard conversation to have. But that's why I go back to what I was saying: if we just talk shop, we talk about the engineering decisions — is that the right database? Is this the right architecture? — and they start nodding and nodding. Then we say the values are going to be X, Y, and Z: cost, performance, scalability. And when you get them to understand that — Elasticsearch is great for a lot of things, product search, web search, phenomenal, but log management and high-cardinality machine data are not what it's designed for — okay, okay, okay, they start to come around. Not only can you reallocate — we talked about how hard getting talent is — well, put those people back on mission-critical business objectives. And this is all we do. Are you really going to have a couple of people part-time managing a log service? This is all we do, and so you get things like tracing, which we're rolling out this quarter, better cost optimization, better scalability — things you would never get with an open source solution. >> So the initial reaction might be to go in and sell on "hey, cheaper solution" to an economic buyer — but not really for these kinds of products, because you're dealing with engineers. They want to talk shop first. That seems to be the playbook. >> The hardest piece is getting that first meeting, and the first one is hard because they're busy — everybody's busy — they wave you off, they ignore the emails and the calls, and we get that. But once we get in, we have this consultative conversation around why we made these technology decisions, and they get it. >> Let's do a first meeting right now, for the people watching this video. What are the architectural advantages? Let's talk shop — why you guys? >> Yeah, absolutely. So, two technical differentiators, and then three sorts of benefits that come from those two technical choices. One is what I mentioned: a proprietary columnar NoSQL database specifically designed for high-cardinality machine data.
There are no indexes that need to be backed up or tuned; it's a massively parallel grep at its simplest form. So one piece is that database. The other piece is the architecture, where we get the performance benefit of throwing every CPU core on every server at just that query — very similar to the way Google Search works. If I ask "how do I make a pizza" on Google, it doesn't go to the "Casey server" in a data center in Alaska and run for a while; they throw a ton of compute power at every query. So there's the performance piece. There's the scalability piece: we have one huge pool of shared compute resources, so your log volume can spike, but relative to the capacity we have, it means nothing. Whereas all these other services are single-tenant hosted services — there's a capacity limit for you as a single customer, and if your volume doubles, well, it wasn't designed to handle that log volume doubling. And then the last piece is cost: there are huge economies of scale in shared services, and we run the system at a significantly lower cost than anybody else can. So you get the cost benefits, the performance benefits, and the scalability. >> And the life of the engineer, the buyer here — what are some of the day-in-the-life use cases, the pain points, the challenges? The people who do DevOps are pretty hardcore; they love it, and they tend to love the engineering side of it. But what are the hassles for them? >> So, going back to what we're all about — we're all about speed to truth. In a modern environment where you're deploying multiple times per day, a lot of the time there's no QA; you're pushing directly to production, and your butt is on the line when that code goes live. You need to get to the truth as quickly as possible: you need to identify when and where a problem occurred immediately, and you need to come up with a resolution. There are two things we always talk about: mean time to restore and mean time to resolution. Maybe the SREs are responsible for mean time to restore, so they're in Scalyr: they get an alert, they're immediately diving through the logs to triage — okay, it's this service; either we restart it, or we put a Band-Aid on top so customers don't see it. Then it gets kicked over to the developer who wrote the code, and now it's mean time to resolution: how long until we figure out what went wrong and fix it so it doesn't happen again? That's where we help. >> It's interesting, Casey — you mentioned the resolution piece. A lot of engineers have become operationalized; people just being called DevOps means they actually have to do this on an SLA basis when they run a lot of APIs. And it only gets more complicated with service meshes and these microservices frameworks, because now you have services being stood up and torn down literally without human intervention. So this notion of having a path of validation, of working with other services, can be a real pain. >> Yeah, it's very difficult.
With some of the large organizations we've worked with, they've tried to build their own service meshes, and they got into a massive conference room and tried to map out all of the services that are out there — and the reality is they can't figure it out. There's no good way for them to map out who talks to what and when. Each little service knows its own downstream effects and what's adjacent to it — it knows its adjacencies — but it doesn't really know much further than that. And the nice thing about logs, and all of that voluminous data which makes them so hard to manage, is that the answers are in there. So we provide a lot of value by giving you one place to look through all of it. >> At KubeCon this has been a big topic, because a lot of times — to be more hardcore about it — there can be downtime on services they don't even know about. >> Yeah, exactly. >> So discovering, visualizing, and surfacing that is huge. Okay, what's the one thing people who haven't talked to you should know about Scalyr? >> I think the reality is that everybody is trying to move as quickly as possible, and there is a better way. Observability, telemetry, monitoring — whatever you call your team — is core to the business. It's core to moving faster, core to providing a better user experience. We have spent a significant amount of time building the technology to support your business's growth. You can look at the benefits I've talked about — cost, performance, scalability — and they align with whatever you're measuring, whether it's SLAs or service uptime. That's exactly what we provide: a tool to help you give a better experience to your own customers. >> Casey, thanks for spending the time and sharing that insight. We love speed to truth — it's our model too: go to the events and get the data out there. We're here at the Innovation Day at Scalyr's headquarters. I'm John Furrier. Thanks for watching.
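Casey's triage workflow above — an alert fires, and the on-call engineer digs through the logs to find which service is actually failing — boils down to a group-by over parsed log events. A rough sketch, with hypothetical event fields rather than an actual Scalyr query:

```python
from collections import Counter

# Parsed log events; in practice these would be the fields extracted at ingest.
events = [
    {"service": "checkout", "level": "ERROR"},
    {"service": "search",   "level": "INFO"},
    {"service": "checkout", "level": "ERROR"},
    {"service": "payments", "level": "WARN"},
    {"service": "checkout", "level": "INFO"},
    {"service": "search",   "level": "ERROR"},
]

# Mean-time-to-restore work starts with a question like: which service is erroring most?
errors_by_service = Counter(e["service"] for e in events if e["level"] == "ERROR")

for service, count in errors_by_service.most_common():
    print(f"{service}: {count} error events")
# checkout: 2 error events
# search: 1 error events
```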

Published Date : May 30 2019


Claudia Carpenter, Scalyr & Dave McAllister, Scalyr | Scalyr Innovation Day 2019


 

>> from San Matteo. It's the Cube covering scaler. Innovation Day Brought to You by scaler. >> Welcome to this Special Cube Innovation Day. Here in San Mateo, California Scale is headquarters for a coast of the Cube. We're here with two great guests. Claudia Carpenter co founder Andy McAlister, Who's Dev evangelist? Uh, great to have you guys here a chat before we came on. Thanks for having us >> Great to be >> so scaler. It's all about the logs. The answer is in the logs. That's the title of the segment Them. I'll see the log files with a lot of exhaust in their data value extracting that, but it's got more operational impact. What's what's the Why is the answer in the locks? >> Because that's where the real information is. It's one thing to be able to tell that something is going around when your systems, but what is going wrong as engineers, what we tend to do is the old print. If it's like here's everything I can think of in this moment and leave it as breadcrumbs for myself to find later, then I need to go and look at those bread crumbs >> in a challenge. Of course, with this is that logs themselves are proliferating. There's lots of data. There's lots of services inside this logs, so you've gotta be able to find your answers as fast as possible. You can't afford Teo. Wait for something else. T lead you to them. You need to deep dive >> the way you guys have this saying it's the place to start. What does that mean? Why? Why is that the new approach? >> What We're trying to differentiate because there's this trend right now in the Dev Ops world towards metrics because they're much smaller to store it, pre digesting what's going on in your systems. And then you just play a lot of graphs and things like that. We agree with that. You do need to be able to see what's going on. You need to be able to set alerts. Metrics are good, but they only get you so far. A lot of people will go through. Look at metrics, dig through and then they stop, switch over and go to their logs. We like to start with the logs, build our metrics from them, and then we go direct to >> the source. I think a minute explain what you mean by metrics, because that has multiple meanings. Because the current way around metrics and you kind of talked about a new approach. Could you just take a minute? Explain what you meant by metrics and how logs are setting up the measures. The difference there. >> So to me, metrics is just counting things right? So at log files of these long textual representations of what's going on in my system and it's impossible to visually parce that I mean literally 10,000 lines. So you count. I've got five of this one in six of this one, and it's much smaller to store. I've got five of this one and six of this one, but that's also not very much information, so that's really the difference. >> But, you know, we have customers who use their metrics to help them indicate something might be wrong inside of here. The problem is, is that modern environments where we have instant gratification, needs and people you know, we'd be wait five seconds. Basically, it's a law sale online here. You need to know what's went wrong, not just where we went wrong or that something went wrong. So building for the logs to the metrics allows you to also have a perfect time back to that specific entrance ancient entrance that lets you be you out. What was wrong? >> He mention Claudia Death ops. 
And this is really kind of think of fun market because Dev Ops is now going mainstream and see the enterprise now started to adopt. It's still Jean Kim from Enterprise. Debs estimates only 3% of enterprise really there yet. So the action's on the cloud Native Public Cloud side where it's, you know, full blown, you know, cloud native more services. They're coming to see Cooper Netease things of that nature out there. And these services are being stood up and torn down while the rhythmically like. So with who the hell stores that data? That's the logs. The nature of log files and data is changing radically with Dev ops. I'm certainly this is going to be more complications but developers and figuring out what's what. How do you see that? What's your reaction to that trend? >> Yeah, so Dev Ops is a very exciting thing. At were Google. It was sort of like the new thing is the developers had to do their own operations, and that's where this comes from. Unfortunately, a lot of enterprise will just rename their ops people devil apps, and that's not the same thing. It's literally developers doing operations, Um, and right now that it's never been so exciting as as it is right now in the text axe, because you could get so much that's open source. Pre built glue this all these things together. But since you haven't written the code yourself, you've no way deal which going on. So it's kind of like Braille. You've got to go back and look and feel your way through it to figure out what's going on. And that's where logs come into play. >> The logs essentially, you know, lift up, get people eyesight into visibility of things that they care about. Absolute. So what's this red thing? Somebody read what is written? Rennes. >> One of the approaches. You'll hear things like golden signals. You'll hear youse, and you'll hear a red Corvette stands for rates, a rose and duration. And ready is a concept that says, How do you actually work with some of these complex technologies working with you're talking about and actually determined where your problems are. So if you think about it, rate is kind of how much traffic's going through a signal for this as a metric, it's accumulative number. So to back to Claudia's point, it's just number here. But if you're trapping goes up, you want to know what's going wrong here is self explanatory. Something broke, fix it, and then duration is how long things took. You talked about communities, Communities works hands in hands with this concept of micro services. Micro services are everywhere, and there were Khun B places that have thousands of little services, all serving the bigger need here. If one of them goes slow, you need to know what went slow as fast as possible. So rate duration and air is actually combined to give you the overall health of your system. While at the same point logs elect, you figure out what was causing >> the problem we'll take. I'm intrigued by what Claudia said. They're on this. You know, Braille concept is essentially a lot of people are flying blind date with what's going on, but you mentioned micro services. That's one area that's coming. Got state full data. Stateless data. They were given a P I economy. Certainly a state becomes important for these applications. You know, the developers don't may or may not know what's happening, so they need to have some intelligence. Also, security we've seen in the cloud. 
When you have a lot of people standing up instances whether it's on Amazon or other clouds, they don't actually have security on some of their things. So they got it. Figure out the trails of what the data looks like they need the log files to have understanding of. Did something happened? What happened? Why? What is the bottom line here? Claudia? What what people do to kind of get visibility So they're not flying blind as developers and organizations. >> Well, you gotta log everything you can within reason. They always have to take into account privacy and security. But logs much as you can and pull logs from every one of the components in your systems. The micro services that day was just talking about are so cool. And as engineers, we can't resist them way. Love, complexity >> and cool things. >> Things especially cool things and new things. >> New >> green things. Sorry, easily distracted. But there they are, harder to support. They can be a really difficult environment. So again it's back to bread crumbs, leaving that that trail and being able to go back and reconstruct what happened. >> Okay, what's the coolest thing about scaler since we thought about cool and relevant? You guys certainly in the relevant side thing. Check the box there. What's cool? What's cool about scaler telling us? >> That's great. Answer What isn't. But you know, honestly, when I came to work here, I no idea I was familiar with Log Management was really with long search and so forth. And the first time I actually saw the product, my jaw dropped. Okay, I now go to a trade show, for instance, and I'm showing people to use this. And I hit my return button to get my results. And you showed band with can be really bad and it stalls for 1/10 of a second, and I complain about it now. No, there is nothing quite as thrilling as getting your results as fast as you can think about them. Almost your thought processes the slow part of determining what's going on, and that is mind boggling. >> So the speed is the killer. >> The speed is like what killed me. But honestly, something that Chloe's been heavily involved in It takes you two minutes to get started. I mean, there's no long learning curve there. You get the product and you are there. You're ready to go >> close about ease of use and simplicity, because developers are fickle, but they're also loyal. Do you have a good product? They loved to get in that love the freebie. You know, the 30 day trial, They'll they'll kick the tires on anything. But the product isn't working. You hear about it when it does work. This mass traffic to people you know pound at the doorstep of the product. What's the compelling value proposition for the developer out there? Because they >> don't want to >> waste time. That's like the killer death to any product for development. Waste their time. They don't want to deal with it. >> So we live in the TL D our world right now. Frankly, if I have to read something, I usually move on on DH. That's the approach we take with scaler as well. Yes, we have some documentation, but I always feel like I have failed with the user interface design. If I require you to go read the documentation. So I try to take that into account with everything that we that we put out there making it really easy and fast it just jumping in, try stuff. >> How do you get to solve the complex complexity problem through attraction software? What's the secret sauce for the simplicity of this system? >> For me, it's a complete lack of patients. 
It's just like I wouldn't put up with that. I'm not gonna ask you to. Frankly, I view this sounds a little bit trite, but I've you Software's a relationship, and I view whoever is looking at it as a peer of mine, and I would be embarrassed if they couldn't figure it out if it wasn't obvious. But it is. We do have this sort of slope here of people who really know what's going on and people wanna optimize. This is your 80 20 split on people that don't know what just want to come in. I want both of them to be happy, so we need to blend those >> to talk about the value proposition of what you guys have because we've been covering you know log file mentioned Lock Management's Splunk events. We've gone, too. There's been no solution that I think may be going on 10 years old, that were once cutting edge. But the world changes so fast with Amazon Web services with Google Cloud with azure. Then get the international clouds out there as well. It's it's here. I mean, the scale is there, you got compute. You got the edge of the network right around the corner in the data problem's not going away. Log files going be needed. You have all this data exhausted, these value. >> If anything, there's always going to be more data that's out there. You're going to have more sources of that data coming in here. You're talking a little bit about you have the hybrid cloud. Where's part on prom? Part in the cloud. You could have multi clouds where across his boundaries. You're gonna have the wonderful coyote world where you have no idea when or where you're going to get an upload from too. This too and EJ environment. And you've got to worry about those and at the same time time, the logging, everything, the breadcrumbs. You have ephemeral events. They're not always there, and those are the ones that kill you. So the model is really simple and applaud Claudia for conning concept wise. But you're playing with concept of kiss, right? We'll hear its keep it simple and sophisticated at the same time. So I can teach you to do this demo in two minutes flat, and from there you can teach yourself everything else that this product's capable of doing it. That simple >> talk about who? The person out there that you want to use his product and why should they give scale or look what's in it for them. >> So for me, I think the perfect is to have Dev ops use it. It's developers. We really have designed a product less for ops and more for engineers. So one of the things that is different about scaler is you have somebody come in and set it up, parsed logs that ingestion of logs, which is different than splunk and sumo on DH. Then it's ready to use right out of the box. So for me, I think that our sweet spot, his engineers, because a lot of our formulations of things you do are more technical you're thinking about about you know what air the patterns here. I'm not going to say it's calculus, because then that wouldn't be simple. But it's along. Those >> engineers might be can also cloud Native is a really key party. People who were cloud native. We're actually looking at four in the cloud or cloud migration, >> right way C a lot. For instance, in the Croup. In any space from the Cloud Native Compute Foundation, we're seeing a tremendous instrument interest in Prometheus. We're seeing a lot of interest in usto with service mesh. The nice thing is that they are already all admitting logs themselves. And so, from our viewpoint, we bring them in. We put them together. 
So now you can look at each piece as it relates to every other piece. >> Claudia, share with the folks who are watching this some anecdotal use cases, what you guys have used internally or with customers, to give them a feel for how awesome Scalyr is and what they could expect. >> Well, put me on the spot here. Um... >> I'll kick off. So we have a customer in Germany, a big e-commerce shop; they have 1,000 engineers. When we started, the product we replaced was on a charge basis that was basically per user. They came back and they said, oh my God, you don't understand, our queries are taking 15 minutes to come back. By the time the query comes back, the engineer has forgotten why he asked the question. And so they loaded up, and they rapidly discovered something unique: they can discover things, because anyone can use it. They now have 500 engineers that touch the log files every day. I will attest, having written code myself, nobody reads log files for fun. But Scalyr makes it easy to discover new things and new connections, and they actually look at what's happening. >> So discovery is of real value? >> Discovery is a massive value proposition, where you figure out things that you don't know about. Back to that events point that Claudia started with: you can only measure the events that you've already considered. You can't measure the things you didn't think to look for. >> Claudia, a quick thought on the culture, and Dave can chime in. What's the culture like here at Scalyr? >> It is a unique culture, and I know everyone probably says that about their startup, but we keep work-life balance as a very important component. We're such nerds, unabashedly nerds, about what we do. It's a joyful atmosphere to work in. Our founder, Steve Newman, is there in his flannel shirt and his socks, cruising around. And we are very much into the quality of our code. We have a lot of the principles of Google sort of combined into a startup. I mean to say it's a very honest environment. >> Solving hard problems makes it a good environment. >> Yeah, and the values, providing real value, are critical for me, and having fun at the same point in time. The people here work hard, but they share what they're working on. They share information. They're not afraid to answer the "what are you working on?" question. But we always manage to have fun. We are a pretty tight group that way. >> Well, thanks for sharing that insight. We have a lot of fun here at Innovation Day with theCUBE. I'm John Furrier. Thanks for watching.

Published Date : May 30 2019


Shia Liu, Scalyr | Scalyr Innovation Day 2019


 

>> From San Mateo, it's theCUBE, covering Scalyr Innovation Day. Brought to you by Scalyr. >> I'm John Furrier with theCUBE. We are here in San Mateo, California, for a special Innovation Day with Scalyr at their headquarters, their new headquarters here. I'm here with Shia Liu, who's on the software engineering team. Good to see you. Thanks for joining. >> Thank you. >> So tell us, what do you do here? What kind of programming, what kind of engineering? >> Sure. So I'm a backend software engineer at Scalyr. What I work on on a day-to-day basis is building our highly scalable distributed systems and serving our customers fast queries. >> What's the future that you're building? >> Yeah. So one of the projects that I'm working on right now will help our infrastructure move towards a more stateless infrastructure. The project itself is a metadata storage component and a series of APIs that tell our backend servers where to find a log file. That might sound really simple, but at a massive scale like ours it is actually a significant challenge to do it fast and reliably. >> And getting data is a big challenge. Everyone knows that data is the new oil, data is the gold, whatever people are saying; the data is super important. You guys have a unique architecture around data ingest. What's so unique about it? Do you mind sharing? >> Of course. So we have a lot of things that we do, or don't do, uniquely. I would like to start with the ingestion front and what we don't do on that front. We don't do keyword indexing, which most other existing solutions do. By not doing that, not keeping index files up to date with every single log message that's incoming, we save a lot of time and resources. Actually, from the moment that our customers' applications generate a log line to that log line becoming available for search in the Scalyr UI, it takes just a couple of seconds, and on other existing solutions that can take hours. >> So that's the ingest side. What about the query side? Because you've got ingest, now query. What's that all about? >> Yeah, of course. Actually, do you mind if we go to the whiteboard a little bit? >> Let's take a look. >> Okay, let me grab a chart real quick. So we have a lot of servers around here. We have queue servers, let's see, these are queue servers, and a lot of backend servers. Just to reiterate on the ingest side a little bit: when logs come in, they will hit one of these queue servers, any one of them, and the queue server will batch the log messages together, then pick one of the backend servers at random and send the batch of logs to it. Any queue server can reach any backend server, and that's how we're able to handle gigs of logs, however much log data you give us; we ingest dozens of terabytes of data on a daily basis. And then it's this same farm of backend servers that's helping us on the query front. Our goal is, when a query comes in, we summon all of these backend servers at once. We get all of their computation power, all of their CPU cores, to serve this one query. That is just a massively scalable, multi-tenant model, and in my mind it's really economies of scale at its best.
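As an aside, here is a minimal sketch, in Python, of the ingestion fan-out Shia describes at the whiteboard: queue servers batch incoming log lines and hand each batch to a randomly chosen backend server, so any queue server can feed any backend. The class names, batch thresholds, and in-memory storage are illustrative assumptions, not Scalyr's actual implementation.

```python
import random
import time
from dataclasses import dataclass, field


@dataclass
class Backend:
    """Stands in for one backend server; a real backend would persist batches to disk."""
    name: str
    stored: list = field(default_factory=list)

    def accept_batch(self, batch):
        # A real backend would write the batch into its storage layer here.
        self.stored.extend(batch)


@dataclass
class QueueServer:
    """Accepts raw log lines, batches them, and forwards each batch to a random backend."""
    backends: list
    max_batch: int = 1000
    max_delay_s: float = 1.0
    _buffer: list = field(default_factory=list)
    _last_flush: float = field(default_factory=time.monotonic)

    def ingest(self, log_line: str):
        self._buffer.append(log_line)
        # Flush when the batch is full or has waited long enough, so the delay from
        # "log generated" to "log searchable" stays on the order of seconds.
        if (len(self._buffer) >= self.max_batch
                or time.monotonic() - self._last_flush >= self.max_delay_s):
            self.flush()

    def flush(self):
        if not self._buffer:
            return
        # Any queue server can reach any backend; picking one at random spreads
        # load evenly without any coordination between queue servers.
        random.choice(self.backends).accept_batch(self._buffer)
        self._buffer = []
        self._last_flush = time.monotonic()


if __name__ == "__main__":
    backends = [Backend(f"backend-{i}") for i in range(4)]
    q = QueueServer(backends, max_batch=50)
    for i in range(200):
        q.ingest(f"2019-05-30 12:00:{i % 60:02d} INFO request served")
    q.flush()
    print({b.name: len(b.stored) for b in backends})
```

The point of the sketch is only the decoupling: queue servers need no coordination with each other, and backends can be added without changing the ingest path.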
>> So scale is huge here. You've got the decoupled backend and queue system, but they're talking to each other. So what's the impact for the customer? What's the order of magnitude of scale we're talking about here? >> Absolutely. So on the log side, we talked about seconds of response time from logs being generated to customers seeing those logs show up. And on the query side, the median response time of our queries is under 100 milliseconds, and we define that response time from the moment the customer hits the return button on their laptop to the moment they see results show up; more than 90% of our queries return results in under one second. >> So what's the deployment model for customers? Say I'm a customer: that sounds great, and latency is a huge issue; the lag on data is really the legacy problem. Do I buy it as a service? Am I deploying boxes? What does this look like? >> Nope, absolutely not. We are 100% cloud native. All of this is actually in our cloud infrastructure, and as a customer you just start using us as software as a service. When you submit a query, all of our backend servers are at your service. And what's best about this model is that as Scalyr's business grows, we will add more backend servers and more computation power, and you as a customer still get all of that, and you don't need to pay us any extra for the increased queries. >> What's the customer use case for this? Give an example of who would benefit. >> Absolutely. So imagine you're an e-commerce platform and you're having this huge Black Friday sale. Seconds of time might mean millions in revenue to you, and you don't want to waste any time on the logging front when you debug your system and look at your monitoring to see where the problem is, if you ever have a problem. So we give you a query response time on the magnitude of seconds, versus other existing solutions where maybe you need to wait for minutes, anxiously, in front of your computer. >> Shia, what's the unique thing here? This looks like a really good architecture, decoupling things in a way that makes sense. But what's the secret sauce? What's the big magic here? >> Yeah, absolutely. So anyone can do a huge server farm, brute-force query approach. But the first 80% of a brute-force algorithm is easy; it's really the last 20% that is more difficult and challenging, and that really differentiates us from the rest of the solutions. To start with, we make every effort we can to identify and skip the work that we don't have to do. Maybe we can come back to our seats. >> Cut. >> Okay, so it's exciting. >> Yeah. So there are a couple of things we do here to skip the work that we don't have to do. As we always say, the fastest queries are those we don't even have to run, which is very true. We have this columnar database that we built in-house, highly performant for our use case, that lets us scan only the columns that the customer cares about and skip all the rest. And we also build a data structure called Bloom filters, and if a query term does not occur in those Bloom filters, we can just skip the whole data set that they represent. >> So that's where the speed helps on performance. >> Absolutely. Absolutely. If we don't even have to look at that data set.
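A toy illustration, assuming a simplified chunk layout and tokenization, of the "skip the work you don't have to do" idea described above: each chunk of log events is stored column by column and carries a small Bloom filter of the terms it contains, so a query scans only the requested column and skips any chunk whose filter rules the term out. This is not the in-house columnar format, just a sketch of the technique.

```python
import hashlib


class BloomFilter:
    """Tiny Bloom filter: can report false positives, but never false negatives."""

    def __init__(self, size_bits: int = 8192, num_hashes: int = 4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, term: str):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{term}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, term: str):
        for pos in self._positions(term):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, term: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(term))


class Chunk:
    """Column-oriented chunk of log events: one list per field, plus a Bloom filter of terms."""

    def __init__(self, rows):
        self.columns = {}
        self.bloom = BloomFilter()
        for row in rows:
            for field_name, value in row.items():
                self.columns.setdefault(field_name, []).append(value)
                for token in str(value).split():
                    self.bloom.add(token)

    def search(self, field_name: str, term: str):
        # If the term can't be in this chunk, skip it without touching any data.
        if not self.bloom.might_contain(term):
            return []
        # Otherwise scan only the one column the query asks about.
        return [v for v in self.columns.get(field_name, []) if term in v.split()]


if __name__ == "__main__":
    chunk = Chunk([
        {"level": "INFO", "message": "request served in 12ms"},
        {"level": "ERROR", "message": "upstream timeout"},
    ])
    print(chunk.search("message", "timeout"))   # scans only the message column
    print(chunk.search("message", "segfault"))  # almost certainly skipped by the Bloom filter
```

Because a Bloom filter can return false positives but never false negatives, skipping a chunk when the filter says the term is absent is always safe; at worst a chunk is scanned unnecessarily.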
>> You know, I love talking to software engineers, people on the cutting edge, because you guys are a startup. Attracting talent is a big thing, and people love to work on hard problems. What's the hard problem that you guys are solving here? >> Yeah, absolutely. So we have this huge server farm at our disposal. However, as we always say, the key to brute-force algorithms is really to recruit as much force as possible, as fast as we can. If you have hundreds of thousands of cores lying around but you don't have an effective way to summon them when you need them, then there's no help in having them around. One of the most interesting things that my team does is we developed this customized scatter-gather algorithm to assign the work in a way that faster backend servers will dynamically compensate for slower servers, without any prior knowledge. And I just love that. >> How fast is it going to get? >> Well, I have no doubt that we'll one day reach light speed. >> Spoken like a specialist. Physics is a good thing, but it's also a bottleneck. So what's your story? How did you get into this? >> Yeah, so I joined Scalyr about eight months ago, as an API engineer actually. During my API days I used Scalyr, the product, very heavily, and I became increasingly fascinated by the speed at which our queries run. I was like, I really want to get behind the scenes and see what's going on in the backend that gives us such fast queries. So here I am; two months ago I switched to the backend team. >> Well, congratulations, and thanks for sharing that insight. >> Thank you, John. >> John Furrier here with Cube Insights at Innovation Day in San Mateo. Thanks for watching.
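As a rough sketch of the scatter-gather idea, assuming the query is split into many small per-chunk tasks pulled from a shared queue: faster workers naturally grab more tasks, so slower servers are compensated for without the coordinator knowing anything about server speeds in advance. The threading model and simulated delays are made up for illustration.

```python
import queue
import random
import threading
import time


def scatter_gather(data_chunks, predicate, num_workers=8):
    """Fan a query out over many chunks: workers pull tasks from a shared queue,
    so a fast worker that finishes early simply grabs the next chunk."""
    tasks = queue.Queue()
    for chunk in data_chunks:
        tasks.put(chunk)

    results = []
    results_lock = threading.Lock()

    def worker():
        while True:
            try:
                chunk = tasks.get_nowait()
            except queue.Empty:
                return  # no work left for this worker
            # Simulate uneven "server" speed; slower workers simply take fewer chunks.
            time.sleep(random.uniform(0.0, 0.005))
            matches = [row for row in chunk if predicate(row)]
            with results_lock:
                results.extend(matches)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results


if __name__ == "__main__":
    chunks = [[f"line {i * 100 + j}" for j in range(100)] for i in range(50)]
    hits = scatter_gather(chunks, lambda row: row.endswith("7"))
    print(len(hits))  # 500 of the 5,000 lines end in "7"
```

The design choice worth noting is that work is pulled rather than pushed: no per-server assignment is computed up front, which is what lets faster machines dynamically absorb more of the load.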

Published Date : May 30 2019
