
Steve Newman, Scalyr | Scalyr Innovation Day 2019


 

>> From San Mateo, it's theCUBE, covering Scalyr Innovation Day, brought to you by Scalyr. >> Welcome to the special Innovation Day with theCUBE here in San Mateo, California, heart of Silicon Valley. I'm John Furrier with theCUBE. Our next guest is Steve Newman, the co-founder of Scalyr. Congratulations. >> Thanks for having us. >> You guys have got a great company here. >> Thanks. >> Glad to have you here. So tell the story, what's the backstory? You guys have an interesting pedigree of founders, all tech entrepreneurs, tech savvy, tech athletes as we say. Tell the backstory: how did it all start, and how did it all come together? >> So I'll trace the story back to the fact that I was part of the team that built the original Google Docs, and a lot of the early people here at Scalyr either were part of that Google Docs team or are people we met while we were at Google. Really, Scalyr is an outgrowth of, a solution to, problems we were having trying to run that system at Google. Google Docs, of course, became part of a whole ecosystem with Google Drive and Google Sheets, all these applications working together. It's a very complicated system, and keeping that humming behind the scenes became a very complicated problem. >> Well, congratulations, Google Docs is used by a lot of people, so it's been a great success. Scalyr is different, though; you guys are taking a different approach than the competition. What's unique about it? Can you share the history of where it came from and where it's going? >> Yeah, so maybe it would be helpful just to set the context a little bit at the blackboard, to put a little flesh on what I was saying about there being a very complicated system that we were trying to run in the whole Google Drive ecosystem. There are all these trends in the industry nowadays: the move to the cloud, and microservices, and Kubernetes, and serverless, and continuous deployment. These are all great innovations. People are building more complex applications and they're evolving faster, but it's making things a lot more complicated. To make that concrete, imagine that you're running an e-commerce site back in the dot-com, web 1.0 era. You're going to have a web server, maybe Apache. You've got a MySQL database behind that with your inventory and your shopping carts, maybe an email gateway and some kind of payment gateway, and that's about it. That's your system. Each one of those pieces involved going to Fry's, buying a computer, driving it over to the data center, slotting it into a rack. A lot of sweat went into every one of those boxes, but there are only about four boxes. That's your whole system. >> If you wanted to go faster, you threw more hardware at it. More RAM. >> Exactly, and not literally threw, you literally brought in more hardware. It took a lot of work just to run that simple system. Fast forward a couple of decades: if you're running an e-commerce site today, well, you're certainly not seeing the inside of a data center. Stripe will run the payments for you, somebody will run the database server for you. This is much simpler; one guy can get it going in an afternoon, literally. But nobody is running just this today. This is not a competitive operation today. If you're in e-commerce today, you also have personalization and
advertising based on the user's browsing history or purchase history, and there's a separate flow for gifts, and then printing, and interfacing to your delivery service, and you've got 150 blocks on this diagram. Maybe your engineering team doesn't have to be so much larger, because each one of those boxes is so much easier to run, but it's still a complicated system, and trying to actually understand what's working, what's not working, why isn't it working, and tracking that down and fixing it, that's the challenge today, and this is where we come in. >> And that's the main focus for today: you can figure it out, but the complexity of the moving parts is the problem. >> Exactly. So you see, oh, 10% of the time that somebody comes in to open their shopping cart, it fails. Well, the problem pops out here, but the root cause turns out to be a problem with your database system back here, and figuring that out, that's the challenge. >> Okay, so with cloud technology, economics have changed. How is cloud changing the game? >> So it's interesting: it changes the game for our customers and it changes the game for us. For a customer, and we touched on this a little bit, things are a lot easier. People run stuff for you. You're not running your own hardware; often you're not even running your own software, you're just consuming a service. It's a lot easier to scale up and down, so you can do much more ambitious things and you can move a lot faster, but you have these complexity problems. For us, it presents an economy-of-scale opportunity. We step in to help you on the telemetry side: what's happening in my system, why is it happening, when did it start happening, what's causing it to happen? That all takes a lot of data, log data and other kinds of data. Every one of those components is generating data, and by the way, for our customers, now that they're running a hundred and fifty services instead of four, they're generating a lot more data. Traditionally, if you're trying to manage that yourself, running your own log management cluster or whatever solution, it's a real challenge as you scale up and your system gets more complex, because you've got so much data to manage. We've taken an approach where we're able to service all of our customers out of a single centralized cluster, meaning we get an economy of scale. Each one of our customers gets to work with a log management engine that's sized to our scale rather than to the individual customer's scale. >> So the older versions of log management had the same kind of complexity challenges you just drew for e-commerce. As the data types increase, so does the complexity, is that it? >> So the complexity increases, but you also get into just a data-scale problem. Suddenly you're generating terabytes of data, but you only want to devote a certain budget to the computing resources that are going to process that data. Because we can share our processing across all of our customers, we fundamentally change the economics. It's a little bit like when you go and run a search on Google: literally thousands of servers are involved in that tenth of a second that Google is processing the query, maybe 3,000 servers on the Google side. Those aren't your 3,000 servers; you're sharing them with 50 million other people in your data center region. But for a
millisecond there, those 3,000 servers are all for you, and that's a big part of how Google is able to give such amazing results so quickly but still economically. >> Yeah, economically for them. >> And that, basically, on a smaller scale, is what we're doing: taking the same hardware and making all of it available to all of the customers. >> People talk about metrics as the solution to scaling problems. Is that correct? >> So this is a really interesting question. Metrics are great. If you look up the definition of a metric, it's basically just a measurement, a number, and it's a great way to boil things down. I've had 83 million people visit my website today and they did 163 million things; you can't make sense of that, but you can boil it down: this is the amount of traffic on the site, this was the error rate, this was the average response time. These are a great summarization to give you an overall flavor of what's going on. The challenge with metrics is that they tend to be a great way to measure your problems, your symptoms: the site's up, it's down, it's fast, it's slow. When you want to get to the cause of that problem, exactly why is the site down, I know something's wrong with the database but what's the error message, what's the exact detail here, a metric isn't going to give that to you. In particular, when people talk about metrics, they tend to have in mind a specific approach where this flood of events and data is distilled down very early: let's count the number of requests, measure the average time, and then throw away the data and keep the metric. That's efficient; throwing away data means you don't have to pay to manage the data, and it gives you this summary. But then as soon as you want to drill down, you don't have any more data. If you want to look at a different metric, one that you didn't set up in advance, you can't do it, and if you need to go into the details, you can't.
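As a rough illustration of the trade-off described above, here is a minimal sketch in Python. The event fields and numbers are made up; the point is only that a pre-aggregated counter cannot answer a breakdown question it was not set up for, while the raw events still can.

```python
# Hypothetical raw log events; in a real system these would be parsed log lines.
events = [
    {"path": "/cart", "status": 500, "server": "web-3", "latency_ms": 912},
    {"path": "/cart", "status": 200, "server": "web-1", "latency_ms": 48},
    {"path": "/cart", "status": 500, "server": "web-3", "latency_ms": 870},
    {"path": "/checkout", "status": 200, "server": "web-2", "latency_ms": 95},
]

# The pre-aggregated approach: distill early, keep only the counters you planned for,
# then throw the events away. Only these two numbers survive.
request_count = len(events)
error_count = sum(1 for e in events if e["status"] >= 500)

# Later someone asks: "which server is producing the errors?"
# The counters alone cannot answer that; the breakdown was never kept.

# Keeping the raw events means the new question is just another query.
from collections import Counter
errors_by_server = Counter(e["server"] for e in events if e["status"] >= 500)
print(request_count, error_count, errors_by_server)  # 4 2 Counter({'web-3': 2})
```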
>> An interesting story about that. When you were at Google, you mentioned the problem statements came from Google, but one of the things I love about Google is that they really nailed the SRE model, and they clearly decoupled roles: developers, and site reliability engineers who have essentially a one-to-many relationship with all the massive hardware. That's a nice operating model, and it's had a lot of efficiencies tied to it. But you guys are saying, in a way, that as developers use the cloud they become their own SREs, because the cloud can give them that kind of Google-like scale, in smaller ways, not Google-sized, but a similar dynamic where there's a lot of compute and a lot of things happening on behalf of the application or the engineers. As developers become the operator through their role, what challenges do they have, and what do you see happening? Because that's an interesting trend: as applications become larger and the cloud can service them at scale, developers then become their own SREs. How do you see that rolling out? >> Yes, and this is something we see happening at more and more of our customers, and one of the implications is that you have all these developers who are now responsible for operations, but they're not special, they're not that specialist SRE team. They're specialists in developing code, not in operations. They minor in operations, and they don't think of it as their real job; it's a distraction. Something goes wrong, they're called upon to help fix it, and they want to get it done as quickly as possible so they can get back to their real job. So they're not going to make the same mental investment in becoming an expert at operations and an expert at the operations tools and the telemetry tools. They're not going to be a log management expert or a metrics expert. So they need tools that have a gentle learning curve and are going to make it easy for them to get in, not really knowing what they're doing on this side of things, find an answer, solve the problem, and get back out. >> And that's kind of a concept you guys have, of speed to truth. >> Exactly, and we mean a couple of things by that. Most literally, our tool is a high-performance solution: you hand us your terabytes of log data, you ask some question, what's the trend on this error in this service over the last day, and we scan through that big data and give you a quick answer. But really, that's just part of the overall chain of events, which goes from the developer with a problem until they have a solution. They have to figure out even how to approach the problem, what question to ask us; they have to pose the query in our interface. So we've done a lot of work to simplify that learning curve, where instead of a complicated query language you can click a button, get a graph, and then start breaking that down visually: okay, here's the error rate, but how does that break down by server, or user, or whatever dimension, and be able to drill down and explore in a very straightforward way. >> How would you describe the culture at Scalyr? You guys have been around for a while, you're still a fast-growing startup, you haven't done the B round yet, you self-funded it, you got customers early and they pushed you, and now you have 300-plus customers. What's the culture like here? >> So this has been a fun company to build, in part because the heart of this company is the engineering team, and our customers are engineers, so we're kind of the same group, and that keeps the inside and the outside very close together. I think that's been part of the culture we've built: we all know why we're building this and what it's for. We use Scalyr extensively internally, but even if we weren't, it's the kind of thing we've used in the past and we're going to use in the future. So I think people are really excited here, because we understand why, and we have an opinion about the future and how it should roll out. >> What's the big problem statement you guys are solving as a company? How would you boil that down if asked by a customer or an engineer out there: what real problem are you solving, that core, big problem that's going to be helping me? >> At the end of the day, it's giving people the confidence to keep building these kinds of complicated systems and move quickly. Because this is the business pressure everyone is under: whatever business you're in, it has a digital element, and your competitors are
doing the same thing, and they are building these sophisticated systems, and they're adding functionality, and they're moving quickly. You need to be able to do the same thing, but it's easy then to get tangled up in this complexity. So at the end of the day, we're giving people the ability to understand those systems. >> And the functionality in the software is getting stronger and stronger, more complicated, with service meshes and microservices. As applications start to have the ability to stand up and tear down services on the fly, that's so annoying, and they'll wield even more data. >> Exactly, you get more data, and it gets more complicated. Actually, if you don't mind, there's a little story I'd like to tell. Hold on just while I clear this out. This is going back to Google, and it's part of the inspiration for how we came to build Scalyr. >> Sounds like a story of frustration, which probably got you to the operation and the motivation. >> Yep. So we were working on this project, building a file system that could tie together Google Docs, Google Sheets, Google Drive, and Google Photos, and the block diagram looks kind of like the thing I just erased. But there was one particular problem we had that took us literally months and months and months to track down. You'd like to solve a problem in a few minutes or a few hours, but this one took months, and it had to do with the indexing system. You have all these files in Google Drive and you want to be able to search, so we had modeled out how we were going to build this search engine. You'd think Google search is a solved problem, but actually, Google web search is for things the whole world can see. There's also Gmail search, which is for things that only one person can see, so it's lots of separate little indexes. Those are both solved problems at Google. Google Drive is for things a few people can see: you share it with your coworker or whoever, and it's actually a very different problem. But we looked at the statistics and we found that the average document, our average file, was shared with about 1.1 people. In other words, things were mostly private, or maybe you share with one or two people. So we said, if something's shared to three people, we're just going to make three copies of it, and now we have just the Gmail problem: each copy is for one person. And we did the math on how much work it was going to be to build these indexes. In round numbers, we were looking at something like, at the time, and this would be so much larger now, but at the time we had maybe one billion documents and files in the system, each one was shared with about 1.1 people, maybe it was a thousand words long on average, and maybe it would change, be edited, once per day on average. So we had about a trillion word updates per day if you multiply all that together. So we allocated, we put in a request and purchased machines to handle that much traffic, and we started bringing up the system, and it immediately collapsed. It was completely overloaded. We checked our numbers and we checked them again: yeah, 1.1, about a billion, whatever. But the workload in the system was just way beyond that, and we looked at our metrics, measuring the number of documents, measuring each of these things, and all the metrics looked right. To make a months-long story short, those metrics and averages were hiding some funny business. It turned out there was this type of use case of occasional documents that were shared to thousands of people, and there was a specific example: the signup sheet for the Google company picnic. This is a spreadsheet, and it was shared with about 5,000 people, so not the whole company, but a big chunk of Mountain View, which meant it was, I don't know, let's say 20,000 words long, because it had the name and a couple of other things for each person. This is one document, but shared to 5,000 people, and during the period people were signing up, maybe it was changing a couple of thousand times per day. So you multiply out just this document and you get 200 billion word updates for that one document in a day, where we were estimating a trillion for the whole earth. And there were something like a hundred documents in this category. >> Google was hamstringing your own thing. >> We were hamstrung by our own thing. There were about a hundred examples like this, so now we're up to 20 trillion, and that was the whole problem: these hundred files. And we would never have found that until we got way down into the details of the logs, which in this case took months, because we didn't have the tools, because we didn't have Scalyr.
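The arithmetic in the story, written out as a back-of-the-envelope check. The figures are the approximate ones quoted above, not exact measurements.

```python
# Capacity estimate from the quoted averages.
documents = 1_000_000_000        # roughly one billion documents and files
avg_shares = 1.1                 # average document shared with about 1.1 people
avg_words = 1_000                # about a thousand words per document
avg_edits_per_day = 1            # edited about once per day

estimate = documents * avg_shares * avg_words * avg_edits_per_day
print(f"estimated word updates/day: {estimate:.1e}")    # ~1.1e12, the "about a trillion"

# One outlier: the company-picnic signup spreadsheet.
picnic_words = 20_000            # say 20,000 words in the sheet
picnic_copies = 5_000            # shared (and therefore copied) to about 5,000 people
picnic_edits_per_day = 2_000     # a couple of thousand changes per day during signups

picnic = picnic_words * picnic_copies * picnic_edits_per_day
print(f"picnic sheet alone: {picnic:.1e}")              # ~2e11, 200 billion for one file

# About a hundred files like this pushed the real load to roughly 20 trillion,
# twenty times what the averages suggested.
print(f"hundred outliers: {100 * picnic:.1e}")
```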
>> Yeah, and I think this is the kind of anomaly you might see with web services evolving with microservices, where someone has an API interface with some other SaaS. As apps start to rely on each other, this is a new dynamic we're seeing, as SLAs are also tied together. So the question is, whose fault is it? >> Exactly, you have to work out whose fault it is, and also things get so much more varied now. Again, with web 1.0 e-commerce, you buy a thing, you buy a thing, it's all the same. Now you're building a social media site or whatever: you've got 8 followers, you've got 8 million followers; this person has three movies rented on Netflix, this person has three thousand movies. Everything's different, and so then you get these funny things hiding. >> Yeah, you're flying blind if you don't get all the data exposed. It's like a blind person trying to read Braille, as we heard earlier. Steve, thanks so much for sharing the insight, great story. I'm John Furrier, you're here for theCUBE Innovation Day at Scalyr's headquarters. Thanks for watching.

Published Date : May 30 2019


Jeff Mathis, Scalyr & Steve Newman, Scalyr | Scalyr Innovation Day 2019


 

>> From San Mateo, it's theCUBE, covering Scalyr Innovation Day, brought to you by Scalyr. >> I'm John Furrier with theCUBE. We are here in San Mateo, California, for a special Innovation Day at Scalyr's headquarters, with Steve Newman, the founder of Scalyr, and Jeff Mathis, a software engineer. Guys, thanks for joining me today. >> Thanks for having us. >> Great to have you here. So you guys introduced Power Queries. What is this all about? >> Yes, so the vision for Scalyr is to become the platform users trust when they want to observe their systems, and Power Queries is a really important step along that journey. Power Queries provide new insights into data, with a powerful and expressive query language that's still easy to use. >> So why is this important? >> So at Scalyr we like to think that we're all about speed, and a lot of what we're known for is the raw performance of the query engine we've built that's sitting underneath this product, which is one measure of speed. But really, we like to think of speed as the time from a question in someone's head to an answer on their screen, and so the whole user journey is part of that. Traditionally in our product we've provided a set of basic capabilities for searching and counting and graphing that are very easy for people to access, so you can get in quickly, pose your question, and get an answer without even having to learn a query language, and that's been great. But sometimes the need goes a little bit beyond that: the question someone wants to ask is a little bit more complicated, or the data needs a little bit of massaging, and it goes beyond the boundaries of what you can do with that basic set of predefined abilities. So that's where we wanted to take a step forward and create this more advanced language for those more advanced cases. >> I love the name Power Queries; people want power, and it's got to be fast and good. Queries have been around, people know search engines, search technology, discovery, finding stuff. But as AI comes around and things scale, there seems to be a lot more focus on inference, on intuiting what's happening. This has been a big trend, and it's become a big opportunity using data; we've seen companies go public, we know who they are, and there's more data coming. It's not stopping anytime soon. So what's the innovation that's going to take Power Queries to the next level? >> Yes, so one of the features that I'm really excited about in the future of Power Queries is our autocomplete feature. We've taken a lot of inspiration from what your nav bar does in the browser. The idea is to have a context-sensitive, predictive autocomplete feature that takes into account a number of factors: the syntactic context of where you are in the query, what fields you have available to you, what fields you've searched recently, those kinds of things. >> Steve, what's your take? Before we get to the customer impact, what's different, and where is Power Queries going to shine, today and tomorrow? >> It was both an interesting and fun challenge for us to design and build this, because by definition this is for the more advanced use cases, for when you need something more powerful,
and so a big part of the design question for us was how do we let people do more sophisticated things with their logs when they have that use case, while still preserving the speed and ease of use that we like to think we're known for. In particular, if step one had been "go read this 300-page reference manual and learn this complicated query language," then we would have failed before we started. We had the benefit of a lot of hindsight: there's a long history of people manipulating data and working with different kinds of systems. We have users coming to us who are used to working with other log management tools, we have users who are more comfortable in SQL, and we have users whose focus is really just more conventional programming languages, especially because one of the constituencies we serve, and it's a trend nowadays, is development engineers who are also responsible for keeping their code working well in production. They're not experts in this stuff; they're not log management experts, they're not telemetry experts, and we want them to be able to come casually to this tool and get something done. So we had all that context to draw on, this history of languages that people are used to, and we came up with about a dozen use cases that we thought covered the spectrum of what would bring people into a scenario like this, and we actually gamed those out: how would you solve this particular question if you were using an SQL-like approach, or an approach based on this tool, or one based on that tool? We did this big exploration, and we were able to boil everything down to about ten fairly simple commands that pretty much cover the gamut. By comparison, there are other solutions that have over a hundred commands, which is obviously just a lot to learn. At the other end of the spectrum, SQL really does all of this with one command, SELECT, and it's incredibly powerful, but you also sometimes have to be a wizard to shoehorn what you want into it. >> Yeah, even though SQL is out there and people know it, people want it easier; ultimately machines are going to be taking over. You've got the ten commands; you almost couldn't get to that efficiency level without simplifying the use cases. What does the customer scenario look like? Why is design important? What's in it for the customer? >> Yeah, absolutely. So the user experience was a really important focus for us when designing Power Queries. We knew from the start that if the tool takes you ten minutes to relearn every time you want to use it, then the query takes ten minutes to execute, not seconds. So one of the ways we approached this problem was to make sure we're constantly giving the user feedback. That starts as soon as you load the page: you've immediately got access to some of the documentation you need to use the feature, and if you type in incorrect syntax, you'll get feedback from the system about how to fix that problem. So really focusing on the user experience was a big part of it. >> Yeah, people are going to factor in the time it takes to actually do the query, write it up, if you have to
code it up and figure it out; that's time lag right there. You want it to be as fast as possible. Interesting design point, radical, right? >> Absolutely. >> So Steve, how does it go fast? Jeff, how does it go fast? What are you guys looking at here? What's the magic? >> So let me step over to the whiteboard, chalkboard, here; with chalk in one hand and mic in the other, we'll evaluate my juggling skills. I want to start by showing an example of what one of these queries looks like. I talked about how we boiled everything down to about ten commands, so let's talk through a simple scenario. Let's say I'm running a tax site: people come to our website, they're putting their taxes together and downloading forms, and tax laws are different in every state, so I have different code running for people in California versus people in Michigan or wherever. It's easy to do things like graph the overall performance and error rate for my site, but I might have a problem with the code for one specific state, and it might not show up very clearly in those overall statistics. So I want to get a sense of how I am performing for each of the 50 states. I'm going to simplify this a little bit, but I might have an access log for this system where we'll see entries like: we're loading the tax form, it's for the state of California, and the status code was 200, which means it was successful. Then we load the tax form, the state is Texas, and again it was a success. Then we load the tax form for Michigan, and the status was a 502, which is a server error. And there are millions of these, mixing in with other kinds of logs from other parts of my system. So I want to pull up a report: what percentage of requests are succeeding or failing, by state? Let me sketch first what the query would look like for that, and then I'll talk about how we execute it at speed. First of all, I have to say which logs I care about; I've drawn just the relevant logs, but this is going to be mixed in with all the other logs from my system. Maybe it's as simple as calling out that they all have this page name, "tax form," in them. So that's the first step of my query: I'm searching for "tax form." Now I want to count how many of these there are, how many succeeded or failed, and I want to cluster that by state. Clustering is done with the group command. So I'm going to say I want to count the total number of requests, which is just the count, so "count" is part of the language and "total" is what I'm choosing to name it. And I want to count the errors, which is also the count command, but now I'm going to give it a condition: I only want to count where the status is at least 500 (I'm not sure you can see it, but behind the plant there's a 500), and I'm going to group that by state. So we're counting up how many of those values were 500 or above, and we're grouping it by this field. What's going to come out of that is a table that will say, for each state, the total number of requests and the number of errors. Oh, and sorry, I actually left out a couple of steps, but let's draw what this would give us so far. It's going to show me that for California maybe I had nine thousand one hundred and fifty-two requests and thirteen of them were errors, for Texas I had so many, and so on. But I'm still not really
there, because that might show me that California had thirteen errors and Rhode Island had twelve errors, but there were only twelve requests for Rhode Island. Rhode Island is broken, I've broken my code for Rhode Island, but it's only twelve errors because it's a smaller population. So this analysis is still not quite going to get me where I need to go. I can now add another command. I've done this group, and now I'm going to say "let," which triggers a calculation: let error rate equal errors divided by total. That's going to give me the fraction, and for California that might be 0.01 or whatever, but for Rhode Island it's going to be one; 100% of the requests are failing. Then I can add another command to sort by the error rate, and now my problem states are going to pop to the top. >> So it's a real easy-to-use language. It's great for the data scientists digging in, for practitioners. You don't need to be a hard-core coder to get into this. >> Exactly, that's the idea. Group, sort: very simple commands that directly match the English description of what you're trying to do.
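To make the whiteboard steps concrete, here is a small sketch of the same report in plain Python: filter, group with a total count and a conditional count, derive a rate, then sort. The commented query at the top is only an approximation of the kind of command pipeline being described, not exact product syntax.

```python
# Approximate shape of the pipeline sketched on the whiteboard (illustrative, not exact syntax):
#   "tax form"
#   | group total = count(), errors = count(status >= 500) by state
#   | let error_rate = errors / total
#   | sort -error_rate

from collections import defaultdict

def error_report(log_lines):
    """Per-state totals, error counts, and error rates for tax-form requests."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for line in log_lines:
        if "tax form" not in line["message"]:   # step 1: keep only the relevant logs
            continue
        state = line["state"]
        totals[state] += 1                      # total = count()
        if line["status"] >= 500:               # errors = count(status >= 500)
            errors[state] += 1
    report = [
        {"state": s, "total": totals[s], "errors": errors[s],
         "error_rate": errors[s] / totals[s]}   # let error_rate = errors / total
        for s in totals
    ]
    return sorted(report, key=lambda r: r["error_rate"], reverse=True)  # sort by error rate

logs = [
    {"message": "loading tax form", "state": "CA", "status": 200},
    {"message": "loading tax form", "state": "RI", "status": 502},
    {"message": "loading tax form", "state": "CA", "status": 500},
]
print(error_report(logs))   # Rhode Island's 100% error rate sorts to the top
```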
>> So then, you asked a great question, which is how do we take this whole thing and execute it quickly? So I'm going to erase here. >> You're getting into speed now, right? >> Yeah, a bit like that. >> How do you get the speed? Speed is good. So simplicity of use, I get that; now speed becomes the next challenge. >> Exactly, and the speed feeds into the simplicity also, because step one for any tool like this is learning the tool, and that involves a lot of trial and error. If the trial and error involves waiting, and at the end of the wait for a query to run you learn that, oh, you did the query wrong, that's very discouraging to people. So we actually think of speed as really becoming ease of use. All right, so how do we actually do this? You've got your whole mass of log data: tax forms, other forms, internal services, database logs, maybe terabytes of log data, and somewhere in there is the really important stuff, the tax form errors, as well as all the other tax form logs, mixed in with a bigger pile of everything else. So step one is to filter from that huge pile of all your logs down to just the tax form logs, and for that we were able to leverage our existing query engine. There are two things that make that engine as fast as it is. It's massively parallel: we segment the data across hundreds of servers, our servers, so all this data is already distributed across all these servers. >> And the databases, you guys built your own in-house? Okay, got it. >> Exactly, so this is on our system. We've already collected, we're collecting, the logs in real time, so by the time the user comes and types in that query, we already have the data and it's already spread out across all these servers. The first step of that query was just a search for "tax form," and that's our existing query engine, not the new thing we built for Power Queries. So that existing, very highly optimized engine: this server scans through these logs, that server scans through those logs, each server does its share, and they collectively produce a smaller set of data, which is just the tax form logs. And that's still distributed, by the way; each server has done this independently and is going to continue locally doing the next step. So we're harnessing the horsepower of all these servers, and each one only has to work with a small fraction of the data. Then the next step was that group command, where we were counting the requests, counting the errors, and rolling that up by state. That's the new engine we've built, but again, each server can do just its little share. This server takes whichever tax form logs it found and produces a little table of counts by state; that server does the same thing. So each of them produces a little grouping table with just its share of the logs, and then all of that funnels down to one central server, where we do the later steps: we do the division, dividing the number of errors by the total count, and then sort. But by now, where we might have started with trillions of log messages, down to millions or billions of messages relevant to your query, here we have 50 records, just one for each state. Suddenly the amount of data is very small, so the later steps may be interesting from a processing perspective, but they're easy from a speed perspective.
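A toy sketch of the scatter-gather idea just described: each node reduces its own slice of the logs to a small per-state table, and only those small tables travel to a central coordinator for the final division and sort. The shards and records are invented for illustration; a real deployment would run this across machines rather than in a single Python process.

```python
from collections import defaultdict

def local_aggregate(shard):
    """Runs on each storage node: reduce its slice of logs to tiny per-state counts."""
    counts = defaultdict(lambda: {"total": 0, "errors": 0})
    for rec in shard:
        if "tax form" in rec["message"]:
            c = counts[rec["state"]]
            c["total"] += 1
            if rec["status"] >= 500:
                c["errors"] += 1
    return dict(counts)

def merge(partials):
    """Runs on one coordinator: merge the small tables, then derive rates and sort."""
    merged = defaultdict(lambda: {"total": 0, "errors": 0})
    for table in partials:
        for state, c in table.items():
            merged[state]["total"] += c["total"]
            merged[state]["errors"] += c["errors"]
    rows = [{"state": s, **c, "error_rate": c["errors"] / c["total"]}
            for s, c in merged.items()]
    return sorted(rows, key=lambda r: r["error_rate"], reverse=True)

# Simulate three nodes, each holding a shard of the log data.
shards = [
    [{"message": "loading tax form", "state": "CA", "status": 200}],
    [{"message": "loading tax form", "state": "RI", "status": 502},
     {"message": "billing service heartbeat", "state": "-", "status": 200}],
    [{"message": "loading tax form", "state": "CA", "status": 500}],
]
print(merge(local_aggregate(s) for s in shards))  # Rhode Island sorts to the top
```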
>> So you solve a lot of database challenges by understanding how things flow once you've got everything in the columnar database. Just to give a perspective: what would the alternative be? If I just threw this into a database and ran SQL over trillions of log files, it's not trivial; it's a database problem, and then it's a user problem combined. What's the order-of-magnitude difference if I were going to do it the old way? >> So the truth is, there are a hundred old ways. >> And you know how much pain. >> Yes. If you try to just throw this all into one SQL server, MySQL or PostgreSQL, with terabytes of data, and by the way, we're glossing over the fact that the data doesn't just have to exist, it also has to get into the system. When you're checking whether you're letting everyone in Rhode Island down on the night before the 15th, you need up-to-the-moment information, but your database, even if it could hold the data, isn't necessarily designed to be pulling it in in real time. So a simple approach like "let me spin up MySQL and throw all the data in" is just not even going to happen. Now you're sharding the data, or you're looking at some other database solution, and it's a heavy lift either way, a lot of extra effort, taxing on the developers. >> Yeah, you guys do the heavy lifting. Okay, what's next? Where do the Scalyr features come in? Where do you see this evolving for the customers? >> So Jeff talked about autocomplete, which we're really excited about, because again, a lot of this is for the casual user. They're a power user of JavaScript or Java or something, they're building the code, and then they've got to come in and solve the problem and get back to what they think of as their real job. With autocomplete, the way we're doing it, we're really leveraging the context of what you're typing, as well as the history of what you and your team have done and queried in the past, as well as the content of your data. Think of it a little bit like the browser location bar, which somehow, when you type about two letters, knows exactly which page you're looking for, because it's relying on all those different kinds of cues. >> Yeah, it seems like this is the foundational heavy lift: you minimize all that pain, then you get the autocomplete, and you start to get into much more AI; machine learning kicks in, more intelligent reasoning, and you start to get a feel for the data, it seems like. >> Yeah. >> Steve, thanks for sharing that; there it is on the whiteboard. >> I've been trying for a year. >> Thanks for watching this Cube conversation.

Published Date : May 30 2019


John Hart, Scalyr | Scalyr Innovation Day 2019


 

(upbeat music) >> From San Mateo, it's theCUBE, covering Scalyr Innovation Day, brought to you by Scalyr. >> Hello and welcome to the special Cube Innovation Day here in Silicon Valley in San Mateo, California at Scalyr's Headquarters. I'm John Furrier, host of theCUBE. John Hart's the Tech Lead Back End Engineering here at Scalyr. Thanks for having us. >> Thanks for having me John. >> So what's the secret sauce at Scalyr? You guys have unique differentiate as we have covered with some of your peers and the founders are all talking about it. But, you guys have a unique secret sauce. Take a minute to explain that. >> I think, yeah, it's a few different things. First of all, you've got just the design level, which is we don't use keyword indexes. So that's a big one right there off the top. On top of that, you've got a couple of different implementation paths. We've got our own custom written data store. So we're able to really control all the way down to the bytes on disk, how we lay things out, optimize for speed. We have a novel kind of scatter-gather approach for fanning out a query, to make sure we can get all of our nodes involved as quickly as possible. Then, finally, and this is just kind of being smart, which is we have a time series database for repetitive queries and that's on demand. You don't have to do anything, but we're going to speed up your queries in the background if we know it's a good idea. >> Talk about the time series. I think that's interesting because that comes to play. We hear about real time a lot. We talk a lot about in cyber security that time series has been beneficial. Where does time series fit for you guys in here? >> That's a good question. I think one of the big differences with Scalyr versus other uses of time series database is with Scalyr you're outputting your logs, there's all kinds of information in your logs. Some of that might be a good thing to put in a time series database, but I think with a lot of other products, you would have to decide that ahead of time. Like, hey, let's get this metric into the database. With Scalyr, the moment you have anything in your logs that you might want to put into a time series you just start querying it. You put in a dashboard (snaps) you've got a time series. So we're going to back propagate that for everything you've already given us. So all of those queries are fast from there on out. >> So it's built in from the beginning. >> Exactly, and you don't have to do anything. It's just on demand. >> So keywords been what other people have been used for years. That's been standard for these log management software packages and indexes. Indexes can slow things down. We've got a tutorial on that. Why is those two areas, haven't been innovated in awhile? When people just haven't figured it out, you guys have first? What's the differentiation for you guys? Why'd you guys get there? >> I think the main reason is that log data is just fundamentally different than most other things that you might use a database for. There's a couple of different reasons for that. So with log data, you're not in control of it. You can't design it. You know, an index is great if you're making a relational database. You've got control of your columns. You know what you're going to join on. You know what you want to index. Nobody designs their logs like they design their database tables. It's just a bunch of stuff. It's from systems you don't control. It's changing all the time. 
So just the number of distinct fields that you would have to index is really, really high. So if your system depends on indexing for good performance, you're going to have to make a lot of indexes. And indexes, of course, they're write-amplifying. If you've got one gigabyte of raw data, then you've got to put five or six hundred indexes on top of it. You're going to have five or ten gigabytes of raw plus index data. That means you've got to do a lot more IO, and at the end of the day, how much you have to read from disk determines how fast your query's going to be. >> So, in essence, indexes create a lot of overhead you shouldn't even need, because of the nature of log files. >> Because of the nature of log data, it's overhead that doesn't serve log data very well, yeah. >> And what about the log data that's changing? 'Cause one of the things we're seeing, Internet of Things, more connected devices, imagine the Teslas that are going to be connecting in, with all their data. >> Right. >> All this stuff, cameras. You've got a huge amount of new kinds of data. Up, down, status. This is going to be a tsunami of new types of log data. >> Yeah, and none of it are you going to have a ton of control over. Right, it's going to be changing a ton. Maybe you've got 20 different versions of devices out there that are all sending you different versions of logs. You've got to be able to handle all of it. So you want a system that is adaptive to your needs as they come up, as opposed to something you have to plan out with indexes ahead of time. >> So if someone asks you, say you guys say you're faster. Why? Is that true? Is the statement you're faster than others, and if so why? >> It is true. (laughs) And that really comes down to the secret sauce. The key to brute force, and I think we've talked about this a little bit today, is you've got to bring a lot of force, as quickly as you possibly can. And we do that. We've got a lot of custom code. We're not using off-the-shelf components. We're trying to get that time as quick as we can. So I think our median performance is still better than 100 milliseconds. That might be for a query that's talking to two or three hundred machines, or maybe even more. All of which, to get that, maybe it's going to scan a terabyte of data. All of that is going to come back within 100 milliseconds. It's extremely fast. >> Talk about why log data is different from other data types, for folks that are in these cloud native environments. Their time is precious. They are looking at a lot of different data. How is log data different? >> I think the fact that it's dynamic in terms of what's coming out is something new. It changes so rapidly. The other really big thing too is the way you query it changes from day to day. Most of the time you're going to your logs, you're trying to troubleshoot a problem. Today's problems are different than yesterday's problems. So every time you go in, you're using it in a different way. So it has to be very fast. It has to be exploratory. And that's one of the big things about Scalyr's speed: it enables this really exploratory approach. You can kind of move through the data quickly, as opposed to making a query, getting a cup of coffee, waiting for the query, and then deciding what you're going to do next. I'm kind of dating myself here, but it's like the first time you ever used Google. You're like, "Whoa, how did that happen?" That's what it's like the first time you use Scalyr. >> And you guys have a unique architecture, we talked about that. 
You guys have certain speeds. But it's not just the query speed. It's the time it takes to do the query. So you factor in a much bigger perspective than if someone has to build a query and then takes 15 minutes. >> Right. >> Game's over. >> Yeah, and instead you're just clicking on things. We're trying to make it very easy for you to move from oh here's an alert. Well here are the log files that caused that alert. Oh, what's the thread stack for that particular lock. Oh, I can go and look at everything else that happened in that thread. That's five or 10 seconds of Scalyr tops. >> You guys have unique engineering culture, that targets engineers, products built by engineers, for engineers. >> Yep. >> Great story. And it's real, and you guys building it everyday. What is the engineer threshold of pain when it comes to locked data? Have you seen any anecdotal, I mean, 'cause engineers that are in this space, they need access to it. There's SLAs now tied to it. People are sharing data. There's all kind of new ways, reasons why you need to have the Scalyr solution. But what's the pain point for most people to tolerate an inferior solution? >> Well for me, I actually have an answer for this. Right, because before I was Scalyr employee, I was a Scalyr customer and before I was a Scalyr customer, I was a Splunk customer. I used Splunk for about five years before I think Scalyr even necessarily existed and I was really happy with it because I needed it. Right? I had my own company. We were generating tons of logs. My support guys needed to use those logs. And, prior to using something like a Splunk, I was SSHing it to servers to check the log files, which is of course, not scalable. So I was really happy with the product as an idea existed, but it just kept gnawing at us. You know, every time we would query, sometimes it would be fast, sometimes it would be really slow. Sometimes the results would be down because an indexing server was down. It was just. >> You mean the Splunk solution? >> Yeah, the Splunk solution. Yeah, it was just extremely painful. So I read, actually, one of the blog posts written by Steve Newman and thought, that's a great idea. That is how you should attack this problem. No indexes. Brute forces. All the flexibility you get from that. I loved it and then I forgot about it for like six months. (laughs) Because I was busy, right. But then six months later I was really frustrated again with Splunk again being really, really slow, and I thought, what was the name of that company again? I looked them up. I installed it. And within, certainly within a day, I was blown away by the performance. Within a week, I had uninstalled Scalyr, excuse me, Splunk, from every single one of my servers and switched to Scalyr instead. >> And you're happy with that? Does it work for you? Came to join the company? >> Yeah, exactly. In kind of conversations with the support team here, I was one of their early customers to use Windows, so I had a lot of questions, they had questions for me, how did I get it working, it wasn't a supported platform. And all of my emails were responded to by two guys named Steve. So I figured that was probably the support team. Pretty funny they've got a support team of two people, both named Steve. And then at one point, in one email, Steve Newman said to me, "You may have realized there's only two of us here." And that's when I kind of went, "Oh wait, so there's two people total." And two guys I assumed in a basement. 
They weren't in a basement, but I assumed they were in a basement. They had software that was way better for my needs than Splunk, which at the time was worth probably eight, ten billion dollars. It's a public company. Thousands of engineers. So that's when I thought, "Huh. When I get a chance, "Maybe I should go work with these guys." >> You know it's interesting. Maybe create a new category, brute force as a service. >> Yeah. >> This is what they're doing. They're bringing in the right tool at the right time. >> Yep. >> For the right problem, for speed, and to solve the problem, no? >> Yeah. >> They care how it gets done. >> Get as much data as you can and get that answer back as quickly as you can. >> So this is the big challenge. Final question for you is obviously, you know, a lot of people we talked to in the DevOps world they're really fickle. On one hand, they'll try anything. If they like it, they'll stay with it. But if they don't, you'll know about it. Where's the value point for people to start thinking about Scalyr. Is it ingest to value, ingesting is one part, that's kind of a trial. Where's the value immediately come in? Where do you see, what's the first sign of light value, once the ingestion happens. >> So part of it is this, it's a very short period of time from the ingestion to the time you're querying on it is very, very short. So you got a real time view of what's happening on your servers not a five minutes ago view. That by itself can pay for it right there. If you're a DevOps person and you've got some alarm pinging. If that alarm is from 10 minutes ago, that means your customers are already annoyed. If you're going to have to wait another 10 minutes just to even see what's happening, you've got a really big problem, right. So being able to have the alarm, and you know that's triggering on something that happened a second or two ago, and then immediately being able to dive in with no interruption to your work flow, no reason not to dive in, that's a pretty big one right there. >> So pretty immediate impact. >> Yeah. >> So okay, for people that don't know Scalyr, what should they know about Scalyr as a company from a value proposition as a former customer now, key employee in the back end, and engineering. What is the key things they should know about? >> So speed, we keep talking about it, right? We have a really really good cost basis. Because we're not making those indexes, we don't have to store as much data. It's just generally cheaper for it to run. Right, so we actually have a really good cost point. And we get you from the alerts. You don't have to decide stuff ahead of time. You can do it all on the fly, ad hoc, we get you from the alerts, to your answers as quickly as you possibly can. That's pretty good. >> Every culture has its own unique kind of feature. What's Scalyr's culture here? I mean Intel was Moore's law, Cadence was Moore's law. What's the culture here, at Scalyr like? >> That's a good question. I guess I would say I'm just tremendously proud to be working with these engineers. Right? We're all here because we want to get better and we want to work on really, really hard problems writing our own code, not just running and kind of patching together open source systems that already exist. We want to be doing something cutting edge. So that's I would say the biggest one. >> And big problem's behind that, you've got AI right around the corner. Applying AI is going to be a natural extension. >> Yeah, 'cause we got the data. 
And can deal with the data. >> Ciao, thanks for the insight. Appreciate it. >> Thank you. Good talking to you. >> John Furrier here. Innovation Day with theCUBE here in Silicon Valley in San Mateo, at Scalyr's headquarters. I'm John Furrier. Thanks for watching. (upbeat music)

Published Date : May 30 2019

SUMMARY :

brought to you by Scalyr. John Hart's the Tech Lead Back End Engineering But, you guys have a unique secret sauce. You don't have to do anything, but we're going to speed up I think that's interesting because that comes to play. Some of that might be a good thing to put Exactly, and you don't have to do anything. What's the differentiation for you guys? So just the number of distinct fields You shouldn't even need to do because of the nature Because the nature of log data, it's overhead imagine the Teslas that are going to be connecting in, This is going to be a tsunami of new types of log data. as opposed to something you have to plan out Is the statement you're faster than others, All of that is going to come back within 100 milliseconds. They are looking at a lot of different data. Most of the time you're going to your logs, It's the time it takes to do the query. We're trying to make it very easy for you to move You guys have unique engineering culture, There's all kind of new ways, reasons why you need So I was really happy with the product as an idea existed, All the flexibility you get from that. So I figured that was probably the support team. You know it's interesting. They're bringing in the right tool at the right time. and get that answer back as quickly as you can. Is it ingest to value, ingesting is one part, So being able to have the alarm, What is the key things they should know about? we get you from the alerts, to your answers What's the culture here, at Scalyr like? to be working with these engineers. Applying AI is going to be a natural extension. And can deal with the data. Ciao, thanks for the insight. Good talking to you. Innovation Day with theCUBE here in Silicon Valley


Claudia Carpenter, Scalyr & Dave McAllister, Scalyr | Scalyr Innovation Day 2019


 

>> From San Mateo, it's theCUBE, covering Scalyr Innovation Day, brought to you by Scalyr. >> Welcome to this special Cube Innovation Day, here in San Mateo, California, at Scalyr's headquarters, with theCUBE. We're here with two great guests: Claudia Carpenter, co-founder, and Dave McAllister, who's a dev evangelist. Great to have you guys here; we had a chat before we came on. Thanks for having us. >> Great to be here. >> So, Scalyr. It's all about the logs. The answer is in the logs, that's the title of this segment. I see log files as a lot of exhaust, with data value in extracting that, but it's got more operational impact. Why is the answer in the logs? >> Because that's where the real information is. It's one thing to be able to tell that something is going wrong in your systems, but what is going wrong? As engineers, what we tend to do is the old printf: here's everything I can think of in this moment, left as breadcrumbs for myself to find later. Then I need to go and look at those breadcrumbs. >> And the challenge, of course, with this is that logs themselves are proliferating. There's lots of data. There are lots of services inside those logs, so you've got to be able to find your answers as fast as possible. You can't afford to wait for something else to lead you to them. You need to deep dive. >> You guys have this saying, it's the place to start. What does that mean? Why is that the new approach? >> That's what we're trying to differentiate, because there's this trend right now in the DevOps world towards metrics, because they're much smaller to store, pre-digesting what's going on in your systems, and then you display a lot of graphs and things like that. We agree with that. You do need to be able to see what's going on. You need to be able to set alerts. Metrics are good, but they only get you so far. A lot of people will go through, look at metrics, dig through, and then they stop, switch over, and go to their logs. We like to start with the logs, build our metrics from them, and then we go direct to the source. >> Take a minute to explain what you mean by metrics, because that has multiple meanings, because of the current way around metrics, and you kind of talked about a new approach. Could you just take a minute and explain what you meant by metrics and how logs are setting up the measures? The difference there. >> So to me, metrics is just counting things, right? Log files are these long textual representations of what's going on in my system, and it's impossible to visually parse that, I mean literally 10,000 lines. So you count: I've got five of this one and six of that one, and it's much smaller to store. But that's also not very much information, so that's really the difference. >> But, you know, we have customers who use their metrics to help them indicate something might be wrong inside of here. The problem is that in modern environments, where we have instant gratification needs, people won't wait five seconds. Basically, it's a lost sale online here. You need to know what went wrong, not just where it went wrong or that something went wrong. So building from the logs to the metrics also gives you a pointer back to that specific event instance that lets you figure out what was wrong. >> You mentioned dev ops, Claudia.
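As a quick aside on the "metrics is just counting things" point above, before the conversation continues: the sketch below derives counter metrics from raw log lines while keeping the lines themselves for the deep dive. The log format and pattern names are invented for the example; this is an illustration of the idea, not any product's parser.

```python
import re
from collections import Counter

# Invented raw log lines; the format is only for illustration.
raw_logs = [
    "2019-05-30T10:00:01Z INFO  checkout completed order=1234",
    "2019-05-30T10:00:02Z ERROR payment gateway timeout order=1235",
    "2019-05-30T10:00:02Z INFO  checkout completed order=1236",
    "2019-05-30T10:00:03Z ERROR payment gateway timeout order=1237",
]

# A metric, in this reduced sense, is just a count of lines matching a pattern.
patterns = {
    "checkout_completed": re.compile(r"checkout completed"),
    "payment_timeout": re.compile(r"payment gateway timeout"),
}

metrics = Counter()
for line in raw_logs:
    for name, pattern in patterns.items():
        if pattern.search(line):
            metrics[name] += 1

print(dict(metrics))  # small and cheap to store, but heavily pre-digested

# The raw lines are still there when the counts alone are not enough:
print([line for line in raw_logs if "timeout" in line])
```

The counts are compact, which is the appeal of metrics; the raw lines are what let you answer the follow-up question of what actually went wrong.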
And this is really kind of a fun market to think about, because dev ops is now going mainstream, and the enterprise has now started to adopt it. Gene Kim, on the enterprise dev ops side, estimates only 3% of enterprises are really there yet. So the action's on the cloud-native, public cloud side, where it's, you know, full blown: cloud native, more services, you're seeing Kubernetes, things of that nature out there. And these services are being stood up and torn down, like, algorithmically. So who the hell stores that data? That's the logs. The nature of log files and data is changing radically with dev ops. Certainly this is going to mean more complications for developers in figuring out what's what. How do you see that? What's your reaction to that trend? >> Yeah, so dev ops is a very exciting thing. When we were at Google, it was sort of like the new thing: developers had to do their own operations, and that's where this comes from. Unfortunately, a lot of enterprises will just rename their ops people "dev ops," and that's not the same thing. It's literally developers doing operations. And it's never been so exciting as it is right now in the tech stack, because you can get so much that's open source, pre-built, and you glue all these things together. But since you haven't written the code yourself, you've got no idea what's going on. So it's kind of like Braille: you've got to go back and look and feel your way through it to figure out what's going on. And that's where logs come into play. >> The logs essentially, you know, lift up, give people eyesight into, visibility of, the things that they care about. Absolutely. So what's this RED thing? >> One of the approaches: you'll hear things like golden signals, you'll hear USE, and you'll hear RED. RED stands for rate, errors, and duration, and it's a concept that says, how do you actually work with some of these complex technologies we're talking about and actually determine where your problems are? So if you think about it, rate is kind of how much traffic's going through a signal; as a metric, it's a cumulative number. So, back to Claudia's point, it's just a number here, but if your traffic goes up, you want to know what's going on. Errors is self-explanatory: something broke, fix it. And then duration is how long things took. You talked about Kubernetes; Kubernetes works hand in hand with this concept of microservices. Microservices are everywhere, and there can be places that have thousands of little services, all serving the bigger need here. If one of them goes slow, you need to know what went slow as fast as possible. So rate, errors, and duration actually combine to give you the overall health of your system, while at the same point the logs let you figure out what was causing the problem. >> I'm intrigued by what Claudia said there, on this Braille concept: essentially a lot of people are flying blind with what's going on. But you mentioned microservices. That's one area that's coming: you've got stateful data, stateless data, the API economy. Certainly state becomes important for these applications. You know, the developers may or may not know what's happening, so they need to have some intelligence. Also, security: we've seen it in the cloud. 
When you have a lot of people standing up instances, whether it's on Amazon or other clouds, they don't actually have security on some of their things. So they've got to figure out the trails of what the data looks like; they need the log files to have an understanding of: did something happen? What happened? Why? What is the bottom line here, Claudia? What can people do to kind of get visibility, so they're not flying blind as developers and organizations? >> Well, you've got to log everything you can, within reason. You always have to take into account privacy and security. But log as much as you can, and pull logs from every one of the components in your systems. The microservices that Dave was just talking about are so cool, and as engineers we can't resist them. We love complexity >> and cool things. >> Things, especially cool things and new things. >> New >> green things. Sorry, easily distracted. But there they are, harder to support. They can be a really difficult environment. So again, it's back to breadcrumbs, leaving that trail and being able to go back and reconstruct what happened. >> Okay, what's the coolest thing about Scalyr, since we're talking about cool and relevant? You guys certainly check the box on the relevant side. What's cool? What's cool about Scalyr? Tell us. >> That's great; what isn't? But you know, honestly, when I came to work here, I had no idea. I was familiar with log management, really with log search and so forth. And the first time I actually saw the product, my jaw dropped. I now go to a trade show, for instance, and I'm showing people how to use this. And I hit my return button to get my results, and, you know, show bandwidth can be really bad, and it stalls for a tenth of a second, and I complain about it now. There is nothing quite as thrilling as getting your results as fast as you can think about them. Your thought process is almost the slow part of determining what's going on, and that is mind-boggling. >> So the speed is the killer. >> The speed is what killed me. But honestly, something that Claudia's been heavily involved in: it takes you two minutes to get started. I mean, there's no long learning curve there. You get the product and you are there. You're ready to go. >> Talk about ease of use and simplicity, because developers are fickle, but they're also loyal. If you have a good product, they love to get in; they love the freebie, you know, the 30-day trial. They'll kick the tires on anything. But if the product isn't working, you hear about it. When it does work, there's massive traffic; people, you know, pound at the doorstep of the product. What's the compelling value proposition for the developer out there? Because they >> don't want to >> waste time. That's like the killer death to any product for developers: wasting their time. They don't want to deal with it. >> So we live in the TL;DR world right now. Frankly, if I have to read something, I usually move on, and that's the approach we take with Scalyr as well. Yes, we have some documentation, but I always feel like I have failed with the user interface design if I require you to go read the documentation. So I try to take that into account with everything that we put out there, making it really easy and fast to just jump in and try stuff. >> How do you solve the complexity problem through abstraction in software? What's the secret sauce for the simplicity of this system? >> For me, it's a complete lack of patience. 
It's just like, I wouldn't put up with that, and I'm not going to ask you to. Frankly, and I know this sounds a little bit trite, I view software as a relationship, and I view whoever is looking at it as a peer of mine, and I would be embarrassed if they couldn't figure it out, if it wasn't obvious. But there is this sort of slope of people who really know what's going on and want to optimize, and people who don't know and just want to come in; it's your 80/20 split. I want both of them to be happy, so we need to blend those. >> Talk about the value proposition of what you guys have, because we've been covering, you know, log management, Splunk, the events we've gone to. There are solutions that are, I think, maybe going on 10 years old, that were once cutting edge. But the world changes so fast, with Amazon Web Services, with Google Cloud, with Azure, and then you've got the international clouds out there as well. It's here. I mean, the scale is there, you've got compute, you've got the edge of the network right around the corner, and the data problem's not going away. Log files are going to be needed. You have all this data exhaust, and there's value in it. >> If anything, there's always going to be more data out there. You're going to have more sources of that data coming in. You talked a little bit about hybrid cloud, where it's part on-prem, part in the cloud. You could have multi-cloud, where you're across those boundaries. You're going to have the wonderful IoT world, where you have no idea when or where you're going to get an upload from, too; this, too, in an edge environment. And you've got to worry about those, and at the same time be logging everything, the breadcrumbs. You have ephemeral events; they're not always there, and those are the ones that kill you. So the model is really simple, and I applaud Claudia for the concept: we're playing with the concept of KISS, right? Here it's keep it simple and sophisticated, at the same time. So I can teach you to do this demo in two minutes flat, and from there you can teach yourself everything else that this product's capable of doing. It's that simple. >> Talk about who the person is out there that you want using this product, and why they should give Scalyr a look. What's in it for them? >> So for me, I think the perfect fit is to have dev ops use it. It's developers. We really have designed a product less for ops and more for engineers. So one of the things that is different about Scalyr is you have somebody come in and set it up, and parsing happens at ingestion of the logs, which is different than Splunk and Sumo, and then it's ready to use right out of the box. So for me, I think that our sweet spot is engineers, because a lot of our formulations of the things you do are more technical; you're thinking about, you know, what are the patterns here. I'm not going to say it's calculus, because then that wouldn't be simple. But it's along those lines. >> Engineers, maybe, and also cloud native is a really key part: people who are cloud native, who are actually born in the cloud, or doing cloud migration. >> Right, we see a lot of that. For instance, in the Kubernetes space, from the Cloud Native Computing Foundation, we're seeing a tremendous interest in Prometheus. We're seeing a lot of interest in Istio, with service mesh. The nice thing is that they are already all emitting logs themselves. And so, from our viewpoint, we bring them in. We put them together. 
So now you can look at each piece as it relates to every other piece. >> Claudia, share with the folks who are watching this some anecdotal use cases, whether it's what you guys have used internally or customers; give them a feel for how awesome Scalyr is and what they could expect. >> Well, put me on the spot here. Um... >> I'll kick off. So we have a customer in Germany, an e-commerce shop; they have 1,000 engineers there. The product we replaced when we started was on a charge basis that was basically per user. They came back and they said, "Oh my God, you don't understand, our queries are taking 15 minutes to come back. By the time the query comes back, the engineer's forgotten why he asked the question." And so they loaded up Scalyr, and they rapidly discovered something unique: they can discover things, because anyone can use it. We now have 500 engineers there that touch the log files every day. I will attest, having written code myself, nobody reads log files for fun. But Scalyr makes it easy to discover new things and new connections, and they actually go look at what's happening. >> So discovery is a real value prop. >> Discovery is a massive value proposition, where you figure out things that you don't know about. Back to that events point that Claudia started with: you can only measure the events that you've already considered. You can't measure things that didn't happen. >> To close it out quickly, a thought on the culture, and Dave can chime in: what's the culture like here at Scalyr? >> It is a unique culture, and I know everyone probably says that about their startup, but we keep work-life balance as a very important component. We're such nerds, and unabashedly nerds; we love what we do. It's a joyful atmosphere to work in. Our founder, Steve Newman, is there in his flannel shirt and his socks, cruising around. And we are very much into our quality bar for code. We have a lot of the principles of Google sort of combined into a startup. I mean to say it's a very honest environment. >> Solving hard problems makes it a good environment. >> Yeah, and providing real value; that's critical. >> For me it's also having fun at the same point in time. The people here work hard, but they share what they're working on. They share information. They're not afraid to answer the "what are you working on?" question. But we always manage to have fun. We are a pretty tight group that way. >> Well, thanks for sharing that insight. We have a lot of fun here at Innovation Day with theCUBE. I'm John Furrier. Thanks for watching.
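The RED idea Dave lays out earlier in this segment (rate, errors, duration) reduces to simple arithmetic over request events parsed from logs. The sketch below assumes hypothetical fields (status, duration_ms) and a fixed window; it is an illustration of the concept, not any product's implementation.

```python
# Hypothetical request events parsed out of logs for one 60-second window.
# The field names (status, duration_ms) are assumptions for the sketch.
events = [
    {"status": 200, "duration_ms": 42},
    {"status": 200, "duration_ms": 55},
    {"status": 500, "duration_ms": 310},
    {"status": 200, "duration_ms": 47},
]
WINDOW_SECONDS = 60

rate = len(events) / WINDOW_SECONDS                    # R: requests per second
errors = sum(1 for e in events if e["status"] >= 500)  # E: how many requests failed
durations = sorted(e["duration_ms"] for e in events)   # D: how long they took

def percentile(sorted_values, fraction):
    """Nearest-rank percentile, good enough to illustrate the D in RED."""
    index = round(fraction * (len(sorted_values) - 1))
    return sorted_values[index]

print(f"rate={rate:.2f} req/s  errors={errors}  "
      f"p50={percentile(durations, 0.50)}ms  p95={percentile(durations, 0.95)}ms")
```

Rate and errors are the counting Claudia describes; duration is where percentiles over the underlying events matter, and it is also where having the raw log events to drill into pays off when one of those thousands of little services goes slow.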

Published Date : May 30 2019



Christine Heckart, Scalyr | CUBEConversation, February 2019


 

(music) >> Everyone, welcome to a special CUBE Conversation. We're here in Palo Alto, at theCUBE Studios. I'm John Furrier, the host of theCUBE, and we're here with a very special guest, the new CEO of a hot startup: Christine Heckart, CEO of Scalyr. Welcome to theCUBE, great to see you. >> Thank you. >> Thanks for coming on. So, you're the new CEO of Scalyr; the CEO transitioned. >> Super great founder, great engineering team. >> Yes, yes. >> Hot startup, a lot of funding and a lot of customers. Tell us about Scalyr. >> So, Scalyr was founded by a guy named Steve Newman. He is a serial entrepreneur; Scalyr is his 7th company. His 6th company was called Writely, and it got bought by Google and is what we all know and love as Google Docs today. So, when he was inside Google, building out Google Docs, he had the same problem that a lot of engineers do right now, especially if they're on a modern stack: it's really hard to troubleshoot. It's hard to figure out what's running well, and if there's a problem, where it's at, and how to fix it quickly. And so he left in 2011 and he founded Scalyr.
So, it's part of the workflow. It helps them do their jobs better. >> So, it's a utility. And the founder, you said, worked at Google; obviously he saw the scale there. They have a site reliability engineer concept, and they obviously run a huge infrastructure. Is that kind of the market you're going after? Dev Ops, SRE types? >> Yep, so we're an observability tool. There are kind of two camps of observability. We've started in the logging space, so what we're really known for is being the fast logging tool. And the reason why we're known for being fast is that unlike all the other architectures that were optimized for the more traditional stack, we've been written and optimized for the new stack, and we're the only architecture that doesn't use a keyword index in order to do that search. And that's what makes us fast. But it's also what makes us more affordable. And the architecture contributes to the simplicity of how you can use the tool and how the tool is written. >> So, the core tech under the hood would be what? What's the core tech in that? Because speed obviously means you've got some technology there. What's the core technology that makes that speed work? >> So, we're a true multi-tenancy product. We run on Amazon ourselves; it's a multi-tenancy system, and it uses massive parallel processing. And basically we can ingest any data; in fact, we're designed for machine data, for logs, for things that aren't full documents, not like a video or something on the World Wide Web. These are little tiny events that come in, and there's lots and lots and lots of them. Scalyr is the name of the company: we scale up and we scale out. And what we do is, when you go to run a query, we throw every processor in our system at every query that comes in. And the reason why that becomes important in this multi-tenancy architecture is that the more customers we have and the more data we ingest, the more servers we have to throw at every query for every customer. So as we grow, the service gets better: it gets faster, it gets more affordable for all customers. >> That's the best thing about the cloud, you can bring that compute to bear, so you have a little flywheel of acceleration. Talk about the role of data, because this is interesting. One of the core problems we hear a lot about in the cloud native world is that there are so many sets of services being deployed now; Kubernetes is becoming the de facto scheme for orchestration around microservices, and containers are obviously a standard as well. Which means there's more instrumentation, right? So, I could almost see how the founder saw this future, because he lived it. >> Exactly. >> He lived the future, and now the real world's going, "Hey, we have that Google-like problem, we have tons of services flying around." But it's not just logging and getting a query back in minutes. These services are talking to applications, through each other. This is like mission critical. >> Very mission critical. >> Is this what you guys are doing? >> Right. If you are running in a traditional environment and you're running sort of traditional applications, there are really good logging solutions out there for that. That's what Splunk was founded on; they're amazing at doing that. But nobody had built an optimized logging system and an observability system for the new stack, and that's what we're designed to do. And you said "in minutes," and minutes is what it takes for most log queries in a traditional environment. 
96% of all of our queries happen in less than a second. We're fast. >> So, this is really what the Agile teams need, the Dev Ops teams need. >> Yes. When code is money, when it's the company, every second of downtime, or even a service that's impaired (it might not be hard down, but it's not running the way that it should), impacts the customer experience, impacts how many customers you can get if you're a real-time business, and impacts revenue. It's important to get that service up and running quickly. >> So, you guys are re-imagining logging, which is more mission critical, rather than, okay, where the breach is, what's going on in the basic logs, like Splunk used to do. So, talk about the product. Who's the target persona? How is it consumed? You mentioned the cloud; is it SaaS? How does someone get involved? Do they just download it, do they get a consult? Talk about the product and the target audiences. >> So, it is SaaS, it's delivered as SaaS. We don't have an on-prem service or offering today. And typically it's the site-reliability engineers, the architects, the developers themselves, Dev Ops for sure, Cloud Ops; they're the ones that are using the tool day-to-day. And it's a beautiful dashboard; a lot of it is just point and click. You can go in and use an English-language query; you don't have to learn a special query language to use this. That's why people say it's so fast and easy to learn to use, and I think that's why we get the kind of daily usage we have. You don't have to be an expert in the tool, it's very intuitive: you get a dashboard, you can just keep clicking down off of a chart and get all the way to the code. In fact, we can link you from where the problem is straight into the code that underlies it, so you can then go and solve the problem. >> So, it's really easy to get into. >> Very. >> So I don't need to do any kind of elaborate configuration? >> No. You don't need to do elaborate configuration and, as importantly, you don't need to learn a new specialized query language. Again, in the more traditional systems you find that there are only a few people who really know how to use the product, because you have to learn the query language. It's kind of like a CLI or something in networking. And so there are a few specialists and they're very good, but if you're an engineer and there's a problem and you want to use the tool, you don't have time to become an expert. You've got to just use it. And so, even though it's designed to search machine data, you can use English. It's pretty easy to figure out how to write that query, and it comes back so quickly that if you didn't get it quite right you can just refine and do the search again and narrow down. >> I can see why the V.C.'s, the venture capitalists, like this: the market's good, big wave, cloud native, a lot of growth there. Certainly hyper-scalers, and enterprises are coming next, so I can imagine that's more headroom. The product is consumable, SaaS, in the cloud, and the technology is fast, compelling. >> You're good, you can be on the pitch team. >> Final check box is customers. >> Yes. >> So, how many customers do you have? >> We have 300 paying customers. That doubled in the last year, and we have some big names and a lot of small companies. So, some of the fun ones: Giphy, my kids love that, my husband, right? Using them every day. NBC Universal, kind of on the other side of that. Companies for whom the application is the business. 
And it can be a traditional company that's trying to launch new digital transformation initiatives, or it can be companies that were born in the cloud. >> And that's only going to get better; again, the market: there are more companies going to the cloud. Talk about multi-cloud, because, you know, we had conversations in the past, before you came to Scalyr, around multi-cloud. That's only going to increase the sets of microservices and the role of data. Not just code, because code is data and data is code. There's going to be a whole data ops movement coming soon; we see that tsunami coming. How does multi-cloud fit into all of this in your mind? Is it too early, is that coming later, or is it available now? Could your customers have multi-cloud now? >> For our customers, if they are in a multi-cloud environment today, we're an ideal tool for them, 'cause we can run on any of their clouds. Most customers are not yet in multi-cloud, but they're trying to get there. Just like most customers are not yet fully containerized, but you want to pick a tool today that will grow with you and get you to tomorrow. And that's where Scalyr comes in, because we are designed and optimized for that environment. And there's kind of no scale too big for us. The company was named very deliberately: we can scale up, we can scale out, and we can continue to be simple and fast as your business scales. >> Christine, you've had a track record, you've had a great career, you've seen a lot of waves of innovation. You've been working for big companies and a dozen start-ups, and now you're back at a start-up. So, I've got to ask you a personal question: how does it feel? What's it like being back in the trenches? And you've got a hot start-up here. One month on the job, what's going on there? >> I love it. I really love it. You know, there are 50 people in the company, every one of them is high-energy, and they're so committed to the cause. You know, when the world runs on code and you help that code run better, you're making an impact on the world every single day. These people know it, they feel it. They're very committed. And, unlike some of the much bigger companies I've been at, you can innovate so quickly. So, I just finished my first 30 days onboarding. I have talked to our big customers, a couple dozen of our really big customers. And they all say a couple of things over and over again; there are just some consistent themes. Fast always comes up, it's usually the first word. Simple comes up. Affordable, which is nice. People pay a lot of money for these tools and they don't always feel good about all that money. We can come in and be much more affordable, and they appreciate that. But the thing that kept coming up over and over again was the customer service and the customer support. And I come from worlds where nobody ever raves about customer service and customer support. So, it was odd, and I dug a little bit, and there were two pieces to that. One, because we're 50 people, when somebody has a problem, we're all in. It gets solved quickly. A lot of times we can sort of flag that problem for the customer, because we're keeping track. But the other thing that was brought up is, when they need something that maybe we don't deliver today, they ask for it. And a lot of times we can give it to them pretty quickly. There's not some big, huge, long roadmap process. We're a small company, we can't always do it quickly, but a lot of times we can turn stuff around, and it's great. 
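The architecture Christine describes a little earlier (no keyword index, every processor thrown at every query) can be pictured as a toy scatter-gather scan. The sketch below is illustrative only, not Scalyr's engine; the shard layout, field names, and thread pool are assumptions for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy shards of already-ingested events; in a real system each shard would
# live on a different server. The field names are invented for the example.
shards = [
    [{"severity": "INFO",  "message": "checkout ok"},
     {"severity": "ERROR", "message": "db timeout"}],
    [{"severity": "ERROR", "message": "cache miss storm"},
     {"severity": "INFO",  "message": "healthcheck ok"}],
    [{"severity": "ERROR", "message": "db timeout"}],
]

def scan_shard(shard, predicate):
    """Brute-force scan: no keyword index, just read every event and test it."""
    return [event for event in shard if predicate(event)]

def query(predicate):
    """Scatter the same query to every shard at once, then gather the results."""
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        partials = pool.map(scan_shard, shards, [predicate] * len(shards))
        return [event for partial in partials for event in partial]

matches = query(lambda e: e["severity"] == "ERROR" and "timeout" in e["message"])
print(matches)  # both 'db timeout' errors, found without any pre-built index
```

In a real multi-tenant system the shards would sit on many servers and the fan-out would cross machines, which is why more customers and more ingested data can translate into more compute applied to each individual query.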
>> Well, you're hitting the ground running, you've got your running shoes on, and it sounds like a great opportunity. You've got a lot of work to do! What are some of the priorities? I'm sure hiring is big. Take a minute to give the plug for any hiring you have. >> So, we're just moving to brand new facilities in downtown San Mateo, a couple of blocks from Caltrain. And that is because we doubled the company size last year, and we need to double it again this year. So, we are hiring; if you know of any great people, please send them to us. We announced some new things at Amazon re:Invent late last year, one of which is new distributed tracing. We're on the very leading edge of this trend, and it's an important one. It's probably a conversation for Steve himself; he's very knowledgeable, and it's a fascinating area, because the APM systems, again, kind of the traditional ones, if you can say that for APM, have all been built for the front end, for the websites. But once you move into these container environments, you need that same kind of capability for the back end, and so you need something called distributed tracing. It turns out that if you're born in the logs, like we are, doing that distributed tracing, which links them together, gives you a systemic picture of what's happening and of how you link the events for a fuller picture. We're kind of uniquely good at that. So, we've got that coming out later this quarter. >> That'll attract some engineers, 'cause that's a hard problem. >> It's hard; a lot of the problems we solve are hard, interesting problems, and they're problems for the new stack, and they're problems at scale. And smart engineers like to work on that. >> You know, state's a big one: stateless applications, state is a huge problem I'm sure you guys are on, and this is where the tracing plays in. >> Yes, exactly. >> Final question for you before we end is competition. Certainly people who are in the new world, going cloud native, they get it, they get the complexity, and they get the opportunity as well. So, there's a lot of investment there. But the folks that are looking at Scalyr are like, "Ooh, what's the competitive lens?" How do you answer that? What's your response to differentiate, being different from the competition? >> So, there are lots and lots of observability tools, and even logging tools, in the market. And from that standpoint you could say there's tons of competition. They're all built on keyword indexing, so they're all optimized for looking back, for yesterday's world. We're the only ones built on this very new architecture, designed for the future stack, designed for the new stack. And we're the only ones that don't use keyword indexing. What we have is this amazing, multi-tenancy, columnar-based approach that gives you these advantages of fast, simple, and affordable. >> So you're staking your ground in the marketplace on speed, sub-second response to queries, for runtime applications that are mission critical to businesses. Is that right? >> Said very well, thank you. >> Well, that's what we do here at theCUBE: we figure it out, we get the data. Christine, thanks for coming on. Congratulations on the new role. We'll be following you guys. Love the name, Scalyr. Scaling is table stakes now in the cloud. If you don't compete at scale, operate at scale, or develop at scale, you're probably going to be in trouble. So theCUBE's covering it, as always. Thanks for watching, I'm John Furrier.
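The distributed tracing capability described above is, at its core, about linking events from different services that share a trace identifier into one picture of a request. The sketch below invents events and field names (trace_id, service, span, duration_ms) to show the grouping step; it is not Scalyr's tracing implementation.

```python
from collections import defaultdict

# Invented log events emitted by different services, all carrying a trace_id.
events = [
    {"trace_id": "t1", "service": "frontend",  "span": "GET /checkout", "duration_ms": 180},
    {"trace_id": "t1", "service": "payments",  "span": "charge_card",   "duration_ms": 140},
    {"trace_id": "t1", "service": "inventory", "span": "reserve_items", "duration_ms": 25},
    {"trace_id": "t2", "service": "frontend",  "span": "GET /search",   "duration_ms": 35},
]

# Link events into traces: one entry per request's path across the back end.
traces = defaultdict(list)
for event in events:
    traces[event["trace_id"]].append(event)

# For each trace, show where the time went, slowest span first.
for trace_id, spans in traces.items():
    spans.sort(key=lambda s: s["duration_ms"], reverse=True)
    root_ms = max(s["duration_ms"] for s in spans)  # rough proxy for end-to-end time
    print(f"trace {trace_id}: ~{root_ms}ms end to end")
    for s in spans:
        print(f"  {s['service']:<10} {s['span']:<15} {s['duration_ms']}ms")
```

Grouping by a shared identifier is what turns per-service log lines into the front-to-back view of a single request that container environments otherwise make hard to see.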

Published Date : Feb 8 2019

