Ian Smith, Chronosphere | KubeCon + CloudNativeCon NA 2022
(upbeat music) >> Good Friday morning everyone from Motor City, Lisa Martin here with John Furrier. This is our third day, theCUBE's third day of coverage of KubeCon + CloudNativeCon '22 North America. John, we've had some amazing conversations the last three days. We've had some good conversations about observability. We're going to take that one step further and look beyond its three pillars. >> Yeah, this is going to be a great segment. Looking forward to this. This is an in-depth conversation on observability. The guest is technical and is on the front lines with customers. Looking forward to this segment. Should be great. >> Yeah. Ian Smith is here, the field CTO at Chronosphere. Ian, welcome to theCUBE. Great to have you. >> Thank you so much. It's great to be here. >> All right. Talk about the traditional three pillars approach in observability. What are some of the challenges with that, and how does Chronosphere solve those? >> Sure. So hopefully everyone knows, people think of the three pillars as logs, metrics and traces. What do you do with that? There's no action there. It's just data, right? You collect this data, you go put it somewhere, but it's not actually talking about any sort of outcomes. And I think that's really the heart of the issue, is you're not achieving anything. You're just collecting a whole bunch of data. Where do you put it? What are you... What can you do with it? Those are the fundamental questions. And so one of the things that we're focused on at Chronosphere is, well, what are those outcomes? What is the real value of that? And for example, thinking about phases of observability. When you have an incident or you're trying to investigate something through observability, you probably want to know what's going on. You want to triage any problems you detect. And then finally, you want to understand the cause of those and be able to take longer-term steps to address them. >> What do customers do when they start thinking about it? Because observability has that promise. Hey, you know, get the data, we'll throw AI at it. >> Ian: Yeah. >> And that'll solve the problem. When they get over their skis, when do they realize that they're really not tackling it properly, versus the ones that are taking the right approach? What's the revelation? What's your take on that? You're on the front lines. What's going on with the customer? The good and the bad. What's the scene look like? >> Yeah, so I think the bad is, you know, you end up buying a lot of things, or implementing even in open source, or self-building, and it's very disconnected. You're not... You don't have a workflow, you don't have a path to success. If you ask different teams, like how do you address these particular problems? They're going to give you a bunch of different answers. And then if you ask about what their success rate is, it's probably very uneven. Another key indicator of problems is that, well, do you always need particular senior engineers in your incidents, or to help answer particular performance problems? And it's a massive anti-pattern, right? You have your senior engineers who probably need to be focused on innovation and competitive differentiation, but then they become the bottleneck. And you have this massive sort of wedge of maybe less experienced engineers, but no less valuable from the overall company perspective, who aren't effective at being able to address these problems because the tooling isn't right, the workflows are incorrect.
>> So the senior engineers are getting pulled in to kind of fix and troubleshoot or observe what the observability data did or didn't deliver. >> Correct. Yeah. And you know, the promise of observability, a lot of people talk about unknown unknowns, and there's a lot of, you know, crafting complex queries and all these other things. It's a very romantic sort of deep-dive approach. But realistically, you need to make it very accessible. If you're relying on complex query languages and the required knowledge about the architecture and everything every other team is doing, that knowledge is going to be super concentrated in just a couple of heads. And those heads shouldn't be woken up every time at 3:00 AM. They shouldn't be on every incident call. But oftentimes they are the sort of linchpin to addressing, oh, as a business we need to be up 99.99% of the time. So how do we accomplish that? Well, we're going to end up burning those people. >> Lisa: Yeah. >> But also it leads to a great dissatisfaction in the bulk of the engineers who are, you know, just trying to build and operate the services. >> So talk... You mentioned that some of the problems with the traditional three pillars are, it's not outcome-based, it leads to siloed approaches. What is Chronosphere's definition, and can you walk us through those three phases and how that really gives you that competitive edge in the market? >> Yeah, so the three phases being know, triage and understand. So just knowing about a problem, and you can relate this very specifically to capabilities, but it's not capabilities first, not feature-function first. So know: I need to be able to alert on things. So I do need to collect data that gives me those signals. But particularly as, you know, the industry starts moving towards SLOs, you start getting more business-relevant data. Everyone knows about alert storms. And as you mentioned, you know, there's this great white hope of AI and machine learning, but AI and machine learning is putting trust in sort of a black box, or, the more likely reality, it's really a statistical model, and you have to go and spend a very significant amount of time programming it for sort of not-great outcomes. So know: okay, I want to know that I have a problem, I want to maybe understand the symptoms of that particular problem. And then triage: okay, maybe I have a lot of things going wrong at the same time, but I need to be very precise about my resources. I need to be able to understand the scope and importance. Maybe I have five major SLOs being violated right now. Which one has the greatest business impact? Which symptoms are impacting my most valuable customers? And then from there, not getting into the situation, which is very common, where, okay, well we have every... Your customer-facing engineering teams, they have to be on the call. So we have 15 customer-facing web services, they all have to be on that call. Triage is that really important aspect of really mitigating the cost to the organization, because everyone goes, oh, well, I achieved my MTTR. And my experience from a variety of vendors is that most organizations, unless you're essentially failing as a business, you achieve your SLA, you know, three nines, four nines, whatever it is. But the cost of doing that becomes incredibly extreme.
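As a rough illustration of the SLO-driven alerting Smith describes, the Python sketch below computes an error-budget burn rate over a short and a long window and only pages when both are burning fast, a commonly used multi-window pattern. The names, window sizes, and thresholds are assumptions made for the example; this is not Chronosphere's API, just the shape of the idea.

```python
# Minimal sketch of SLO error-budget burn-rate alerting (hypothetical names).
# Given request and error counts over a short and a long window, decide whether
# the service is burning its error budget fast enough to page someone.

from dataclasses import dataclass

@dataclass
class WindowCounts:
    total: int   # requests observed in the window
    errors: int  # failed requests observed in the window

def burn_rate(window: WindowCounts, slo_target: float) -> float:
    """Observed error rate divided by the error rate the SLO allows."""
    if window.total == 0:
        return 0.0
    allowed_error_rate = 1.0 - slo_target          # e.g. 0.001 for a 99.9% SLO
    observed_error_rate = window.errors / window.total
    return observed_error_rate / allowed_error_rate

def should_page(short: WindowCounts, long: WindowCounts,
                slo_target: float = 0.999, threshold: float = 14.4) -> bool:
    """Page only when both windows burn fast, which filters out short blips."""
    return (burn_rate(short, slo_target) >= threshold and
            burn_rate(long, slo_target) >= threshold)

if __name__ == "__main__":
    last_5_min = WindowCounts(total=12_000, errors=240)
    last_hour = WindowCounts(total=140_000, errors=2_300)
    print("page on-call:", should_page(last_5_min, last_hour))
```

The alert here is expressed against the business promise, the SLO, rather than against raw per-host symptoms, which is what lets the triage step start from impact instead of from an alert storm.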
>> This is a huge point. I want to dig into that if you don't mind, 'cause you know, we've all been seeing the cost of ownership models in IT and all, the cost of doing business, the cost of the shark fin, the iceberg, what's under the water, all those metaphors. >> Ian: Yeah. >> When you look at what you're talking about here, there are actually real hardcore costs that might be under the water, so to speak, like labor, senior engineering time, 'cause Cloud Native engineers are coding in the pipelines. A lot of impact. Can you quantify and just share an example or illustrate where the costs are? 'Cause this is something that's kind of not obvious. >> Ian: Yeah. >> On the hard costs. It's not like a dollar amount, but time, resource breach, wrong triage, gaps in the data. What are some of the costs? >> Yeah, and I think they're actually far more important than the hard costs of infrastructure and licensing. And of course there are many organizations out there using open source observability components together. And they go, oh, it's free. No licensing costs. But you think again about those outcomes. Okay, I have these 15 teams, and okay, I have X number of incidents a month, and I pull a representative from every single one of those teams onto the call. And it turns out that, you know, as we get down into further phases, we need to be able to understand and remediate the issue, but actually only two teams were required for that. There's 13 individuals who do not need to be on the call. Okay, yes, I met my SLA and MTTR, but from a competitive standpoint, I'm comparing myself to a very similar organization that only needed to impact those two engineers versus the 15 that I had over here. Who is going to be the most competitive? Who's going to be most differentiated? And it's not just in terms of number of lines of code, but leading to burnout of your engineers and the churn of them. For VPs of engineering, particularly in today's economy, the hardest thing to do is acquire engineers and retain them. So why do you want to burn them unnecessarily when you can say, okay, well, I can achieve the same or better result if I think more clearly about my observability, but reduce the number of people involved, reduce the number of, you know, senior engineers involved, and ultimately have those resources more focused on innovation. >> You know, one thing I want to at least get in there, but one thing that's come up a lot this year, more than I've ever seen before, we've heard about the skill gaps, obviously, but burnout is huge. >> Ian: Yes. >> That's coming up more and more. This is a real... This actually doesn't help the skills gap either. >> Ian: Correct. >> Because you got skills gap, that's a cost potentially. >> Ian: Yeah. >> And then you got burnout. >> Ian: Yeah. >> People just kind of sitting on their hands or just walking away. >> Yeah. So one of the things that we're doing with Chronosphere is, you know, while we do deal with, you know, the pillar data, we're thinking about it more as, what can you achieve with that? Right? So, aligning with the know, triage and understand. And so you think about things like alerts, you know, dashboards, being able to start triaging your symptoms. But really importantly, how do we bring in the capabilities of things like distributed tracing where they can actually impact this? And it's not just in the context of, well, what can we do in this one incident? So there may be scenarios where you absolutely do need those power users or those really sophisticated engineers.
But from a product challenge perspective, what I'm personally really excited about is, how do you capture that insight and those capabilities and then feed that back in from a product perspective so it's accessible? So, you know, everyone talks about unknown unknowns in observability, and then everyone sort of is a little dismissive of monitoring, but monitoring is that thing that democratizes access and the decision-making capacity. So, for example, I once worked at an organization where there were three engineers in the whole company who could generate the list of customers who were impacted by a particular incident. And I was in post-sales at the time, so anytime there was a major incident, we needed to go generate that list. Those three engineers were on every single incident, until one of them got frustrated and built a tool. But he built it entirely on his own. But think from an observability perspective: can you build a thing that makes all those kinds of capabilities accessible at the first point, where you take that alert and know which customers are affected, or whatever other context was useful last time but took an hour, two hours to achieve? And so that's what really makes a dramatic difference over time, is it's not about the day one experience, but how does the product evolve with the requirements and the workflow- >> And Cloud Native engineers, they're coding so they can actually be reactive. That's interesting, a platform and a tool. >> Ian: Yes. >> And platform engineering is the hottest topic at this event. And this year, I would say with Cloud Native we're hearing it a lot more. I mean, I think that comes from the fact that SREs... it's not really SRE, I think it's more a platform engineer. >> Ian: Yes. >> Not everyone's an... Not every company has an SRE or SRE environment. But platform engineering is becoming that new layer that enables the developers. >> Ian: Correct. >> This is what you're talking about. >> Yeah. And there's lots of different labels for it, but I think organizations that really think about it well, they're thinking about things like those teams' developer efficiency, developer productivity. Because again, it's about the outcomes. It's not, oh, we just need to keep the site reliable. Yes, you can do that, but as we talked about, there are many different ways that you can burn unnecessary resources. But if you focus on developer efficiency and productivity, there's retention, there's that competitive differentiation. >> Let's uplevel those business outcomes. Obviously you talked about it in three phases: know, triage and understand. You've got great alignment with the Cloud Native engineers, the end users. I imagine that you're facilitating companies' ability to reduce churn, attract more talent, retain talent. But what are some of the business outcomes? Like to the customer experience, to the brand? >> Ian: Sure. >> Talk about it in some of those contexts. >> Yeah. One of the things that not a lot of organizations think about is, what is the reliability of my observability solution? It's like, well, that's not what I'm focused on. I'm focused on the reliability of my own website. Okay, let's take the common open source pattern. I'm going to deploy my observability solution next to my core site infrastructure. Okay, I now have a platform problem, because DNS stopped working in the cloud provider of my choice. It's also affecting my observability solution. So at the moment that I need- >> And the tool chain and everything else. >> Yeah.
At the moment that I need it the most, to understand what's going on and to be able to know, triage and understand, that fails me at the same time. It's like, so reliability has this very big impact. So being able to make sure that my solution's reliable, so that when I need it the most I can affect the reliability of my own solution, my own SLA, that's a really key aspect of it. One of the things, though, that we look at is, it's not just about the outcomes and the value, it's ROI, right? It's, what are you investing to put into that? So we've talked a little bit about the engineering cost, there's the infrastructure cost, but there's also a massive data explosion, particularly with Cloud Native. >> Yes. Give us... Alright, put that into real-world examples. A customer that you think really articulates the value of what Chronosphere is delivering and why you're different in the market. >> Yeah, so DoorDash is a great customer example. They're here at KubeCon talking about their experience with Chronosphere and, you know, the Cloud Native technologies, Prometheus and those other components that align with Chronosphere. But being able to undergo, you know, a transformation, they're a Cloud Native organization, but going through a transformation from StatsD to very heavy microservices, very heavy Kubernetes and orchestration, and doing that with that massive explosion, particularly during the last couple of years, obviously that's had a very positive impact on their business. But being able to do that in a cost-effective way, right? One of the dirty little secrets about observability in particular is, your business growth might be, let's say, 50%, 60%, and your infrastructure spend in the cloud providers is maybe going to be another 10, 15% on top of that. But then you have the intersection of, well, my engineers need more data to diagnose things, the business needs more data to understand what's going on, plus we've had this massive explosion of containers and everything like that. So oftentimes your observability data growth, in SaaS solutions and even your on-premises solutions, is going to be more than double your business growth. What's the main cost driver? It's the volume of data that you're processing and storing. And so at Chronosphere, one of the key things that we do, because we're focused on organizational pain for larger-scale organizations, is, well, how do we extract the maximum value from the data you're generating without having to store all of that data, and then present it, not just from a cost perspective, but also from a performance perspective. >> Yes. >> John: Yeah. >> And so, feeding all into developer productivity, and also lowering that investment so that your return can stand out more clearly and more valuably when you are assessing that TCO.
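To make the data-volume point concrete, here is a minimal Python sketch of one common approach, assuming Prometheus-style labeled samples: roll up raw, high-cardinality series (per-pod labels, for instance) into a much smaller set before storage. The labels, values, and rollup policy are invented for illustration, and this is not Chronosphere's implementation; it only shows why pre-aggregation shrinks the volume of data processed and stored.

```python
# Hypothetical sketch: pre-aggregate metric samples by dropping a
# high-cardinality label before storage, shrinking the number of series kept.

from collections import defaultdict
from typing import Dict, List, Tuple

Sample = Tuple[Dict[str, str], float]  # (labels, value)

def aggregate(samples: List[Sample], drop_labels: List[str]) -> Dict[tuple, float]:
    """Sum samples into coarser series keyed only by the labels we keep."""
    rollup: Dict[tuple, float] = defaultdict(float)
    for labels, value in samples:
        kept = tuple(sorted((k, v) for k, v in labels.items()
                            if k not in drop_labels))
        rollup[kept] += value
    return dict(rollup)

if __name__ == "__main__":
    raw = [
        ({"service": "checkout", "region": "us-east", "pod": "checkout-7f9c"}, 41.0),
        ({"service": "checkout", "region": "us-east", "pod": "checkout-2b1d"}, 38.0),
        ({"service": "checkout", "region": "eu-west", "pod": "checkout-9a44"}, 17.0),
    ]
    # Keep service and region; drop the per-pod label that explodes cardinality.
    stored = aggregate(raw, drop_labels=["pod"])
    print(f"{len(raw)} raw series -> {len(stored)} stored series")
    for key, value in stored.items():
        print(dict(key), value)
```

Dropping one per-container label collapses three series into two in this toy case; at the scale of a Kubernetes-heavy environment, the same idea is the difference between observability data growth tracking the business and far outrunning it.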
>> Better insights and outcomes drive developer productivity for sure. That's also a top theme here at KubeCon this year. It always is, but this is more than ever 'cause of the velocity. My question for you, given that you're the field chief technology officer for Chronosphere and you have a unique position, you've got great experience in the industry, been involved in some really big companies and cutting-edge work. What's the competitive landscape? 'Cause the customers sometimes are confused by all the pitches they're getting from other vendors. Some are bolting on observability. Some have created, I would say, a shim layer, or a horizontally scalable platform, or a platform engineering approach. It's a data problem. Okay. This is a data architecture challenge. You mentioned that many times. What's the difference between a pretender and a player in this space? What's the winning architecture look like? What are, I won't say phony or fake solutions, but ones that customers should be aware of? Because in my opinion, if you have a gap in the data or you configure it wrong, like a bolt-on, and say DNS crashes, you're dead in the water. >> Ian: Yeah. >> What's the right approach from a customer standpoint? How do they squint through all the noise to figure out what's the right approach? >> Yeah, so I mean, I think one of the ways, and I've worked with customers in a pre-sales capacity for a very long time, so I know all the tricks of guiding you through. I think it needs to be very clear that customers should not be guided by the vendor. You don't talk to one vendor and let them decide, oh, I'm going to evaluate based off this. We need to particularly get away from feature-based evaluations. Features are very important, but they all have to be aligned around outcomes. And then you have to clearly understand, where am I today? What do I do today? And what is going to be the transformation that I have to go through to take advantage of these features? It can get very entrancing to say, oh, there's a list of 25 features that this solution has that no one else has, but how am I going to get value out of that? >> I mean, distributed tracing is a distributed word. Distributed is the key word. This is a system architecture. The holistic big picture comes in. How do they figure that out? Knowing what they're transforming into? How does it fit in? >> Ian: Yeah. >> What's the right approach? >> Too often, I'd say, distributed tracing particularly gets, you know, bought, because again, look at the shiny features, look at the premise and the MTTR expectations, all these other things. And then it's off to the side. We go through the traditional usage of metrics very often, very log-heavy approaches, maybe even some legacy APM. And then it's sort of a last resort. And out of all the tools, I think distributed tracing is the worst in the problem we talked about earlier, where the most sophisticated engineers, the ones who have been longest tenured, are the only ones who end up using it. So adoption is really, really poor. So again, what do we do today? Well, we alert, we probably want to understand our symptoms, but then what is the key problem? Oh, we spend a lot of time digging into where the problem exists in my architecture. We talked about, you know, getting every engineer in at the same time, but how do we reduce the number of engineers involved? How do we make it so that, well, this looks like a great day one experience, but what is my day 30 experience like? Day 90? How does the product get more valuable? How do I get my most senior engineers out of this, not just on day one, but as we progress through it? >> You got to operationalize it. That's the key. >> Yeah, correct. >> Summarize this as we wrap here. When you're in customer conversations, what is the key factor behind Chronosphere's success? If you can boil it down to that key nugget, what is it? >> I think the key nugget is that we're not just fixated on sort of like technical features and functions and, frankly, gimmicks of, like, oh, what could you possibly do with these three pillars of data? It's more about, what can we do to solve organizational pain at the high level? You know, things like, what is the cost of these solutions?
But then also on the individual level, it's like, what exactly is an engineer trying to do? And how is their quality of life affected by this kind of tooling? And it's something I'm very passionate about. >> Sounds like it. Well, the quality of life's important, right? For everybody, for the business, and it ultimately ends up affecting the overall customer experience. So great job, Ian, thank you so much for joining John and me talking about what you guys are doing beyond the three pillars of observability at Chronosphere. We appreciate your insights. >> Thank you so much. >> John: All right. >> All right. For John Furrier and our guest, I'm Lisa Martin. You're watching theCUBE live Friday morning from KubeCon + CloudNativeCon '22 from Detroit. Our next guest joins theCUBE momentarily, so stick around. (upbeat music)
Omer Trajman, Rocana - #BigDataNYC 2016 - #theCUBE
>> Announcer: From New York, it's the Cube. Covering Big Data New York City 2016. Brought to you by Headline Sponsors, Cisco, IBM, NVIDIA, and our ecosystem sponsors. Now, here are your hosts, Dave Vellante and George Gilbert. >> Welcome back to New York City everybody, this is the Cube, the worldwide leader in live tech coverage, and we've been going wall to wall since Monday here at Strata plus Hadoop World, Big Data NYC is our show within the show. Omer Trajman is here, he's the CEO of Rocana, Cube alum, good to see you again. >> Yeah you too, it's good to be here again. >> What's the deal with the shirt, it says, 'your boss is useless', what are you talking about? >> So, if I wasn't mic'd up, I'd get up and show you, but you can see in the fine print that it's not talking about how your boss is useless, right, it's talking about how you make better use of data and what your boss's expectations are. The point we're trying to get across is that context matters. If you're looking at a small fraction of the information then you're not going to get the full picture, you're not going to understand what's actually going on. You have to look at everything, you have no choice today. >> So Rocana has some ambitious plans to enter this market, generally referred to as IT operations, if I can call it that, why does the world need another play on IT operations? >> In IT operations? If you look at the current state of IT operations in general, and specifically what people think of this largely as, monitoring, it's: I've got a bunch of systems, I can't keep track of everything, so I'm going to pick and choose what I pay attention to. I'm going to look at data selectively, I'm only going to keep it for as long as I can afford to keep it, and I'm not going to pay attention to the stuff that's outside that, that hasn't caused problems, yet. The problem is the yet, right? You all have seen the Delta outages, the Southwest issues, the Neiman Marcus website, right? There's plenty of examples of where someone just wasn't looking at information, no one was paying attention to it or collecting it, and they got blindsided. And in today's pace of business where everything is digital, everyone's interacting with the machines directly, everything's got to be up all the time. Or at least you have to know that something's gone askew and fix it quickly. And so our take is what we call total operational visibility. You got to pay attention to everything all the time, and that's easier said than done. >> Well, because that requires you got to pay attention to all the data, although this reminds me of Abhi Mehta in 2010, who said, "Sampling is dead", alright? Do you agree he's right? >> Trajman: I agree. And so it's much more than that, of course, right, sampling is dead, you want to look at all the details all the time, you want to look at it from all sources. You want to keep enough history so that if you're the CIO of a retailer, if your CEO says, "Are we ready for Cyber Monday, can you take a look at last year's lead-up and this year's", and the CIO's going to look back at them and say, "I have seven days of data (chuckles), what are you talking about, last year?". You have to keep it for as long as you need to, to address business issues. But collecting the data, that's step one, right? I think that's where people struggle today, but they don't realize that you can't just collect it all and give someone a search box, or say, "go build your charts". Companies don't have data scientists to throw at these problems.
You actually have to have the analytics built in. Things that are purpose-built for data center and IT operations: the machine learning models, the built-in cubes, the built-in views, visualizations that just work out of the box, and show you billions of events a day the way you need to look at that information. That's prebuilt, that comes out of the box, that's also a key differentiator. >> Would it be fair to say that Hadoop historically has been this repository for all sorts of data, but it was a tool set, and that Splunk was the anti-Hadoop, sort of out of the box. It was an application that had some... It collected certain types of data and it had views out of the box for that data. Sounds like you're trying to take the best of each world, where you have the full extensibility and visibility that you can collect with all your data in Hadoop, but you've pre-built all the analytic infrastructure that you need to see your operations in context. >> I think when you look at Hadoop and Splunk, your concept of Rocana as the best of both worlds is very apt. It's a prepackaged application, it just installs. You don't have to go in under the covers and stitch everything together. It has the power of scalability that Hadoop has, it has the openness, right, 'cause you can still get at the data and do what you need with it, but you get an application that's creating value day one. >> Okay, so maybe take us... Peel back the onion one layer, if you can go back to last year's Cyber Monday and you've got out of the box functionality, tell us how you make sense out of the data for each organization, so that the context is meaningful for them. >> Yeah, absolutely. What's interesting is that it's not a one-time task, right? Every time you're trying to solve a slightly different problem, or move the business in a different direction, you want to look at data differently. So we think of this more as a toolkit that helps you navigate where to find the root cause, or isolate where a particular problem is, or where you need to invest, or grow the business. In the Cyber Monday example, right, what you want to look at is: let me take a zoomed-out view, I just want to see trends over time, the months leading up or the weeks leading up to Cyber Monday. Let's look at it this year. Let's look at it last year. Let's stack on the graph everything from the edge caching, to the application, to my proxy servers, to my host servers, through to my network, gimme the broad view of everything, and just show me the trend lines and show me how those trend lines are deviating. Where are there unexpected patterns and behavior? And then I'm going to zoom in on those. And what's causing those? Is there a new misconfiguration, did someone deploy new network infrastructure, what has caused some change? Or is it just... It's all good, people are making more money, more people are coming to the website, it's actually a capacity issue, we just need to add more servers. So you get the step back, show me everything without a query, and then drag and drop, zoom in to isolate where there are particular issues that I need to pay attention to.
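To make that zoomed-out comparison concrete, here is a small Python sketch with made-up traffic numbers: stack this year's lead-up against last year's and flag the days that drift beyond a simple threshold. It is only an illustration of the workflow being described, not Rocana's analytics.

```python
# Hypothetical sketch: compare this year's lead-up to last year's and flag
# the days whose relative change exceeds a simple threshold.

from typing import List, Tuple

def flag_deviations(last_year: List[float], this_year: List[float],
                    threshold: float = 0.25) -> List[Tuple[int, float]]:
    """Return (day_index, relative_change) pairs where the change is large."""
    flagged = []
    for day, (prev, curr) in enumerate(zip(last_year, this_year)):
        if prev == 0:
            continue
        change = (curr - prev) / prev
        if abs(change) > threshold:
            flagged.append((day, round(change, 2)))
    return flagged

if __name__ == "__main__":
    # Requests per day (millions) in the week leading up to Cyber Monday.
    last_year = [4.1, 4.3, 4.2, 4.6, 5.0, 6.2, 9.8]
    this_year = [4.4, 4.5, 4.6, 4.9, 5.3, 9.1, 10.4]
    print(flag_deviations(last_year, this_year))  # day 5 jumps roughly 47%
```

A flagged day does not say what changed, only where to zoom in next, which is exactly the drill-down step described above.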
>> Vellante: And this is infrastructure? >> Trajman: It's infrastructure all the way through application... >> Correct? It is? So you can do application performance management as well? >> We don't natively do the instrumentation, there's a whole domain which is bytecode instrumentation; we partner with companies that provide APM functionality, take that feed and incorporate it. Similarly, we partner with companies that do wire-level deep packet inspection. >> Vellante: I was going to say... >> Yeah, take that feed and incorporate it. Some stuff we do out of the box. NetFlow, things like IPFIX, StatsD, Syslog, log4j, right? There's kind of a lot of stuff that everyone needs, standard interfaces, that we do out of the box. And there's also pre-configured, content-oriented parsers and visualizations for an OpenStack or for Cloud Foundry or for a Blue Coat system. There's certain things that we see everywhere that we can just handle out of the box, and then there's things that are very specific to each customer. >> A lot of talk about machine learning, deep learning, AI, at this event, how do you leverage that? >> How do we fit in? It's interesting 'cause we talk about the power delivered in the product, but part of it is that it's transparent. Our users, who are actually on the console day to day, or trying to use Rocana to solve problems, they're not data scientists. They don't understand the difference between analytic queries and full text search. They don't understand machine learning models. >> They're IT people, is that correct? >> They're IT folks whose job it is to keep the lights on, right? And so, they expect the software to just do all of that. We employ the data scientists, we deliver the machine learning models. The software dynamically builds models continuously for everything it's looking at, and then shows it in a manner that someone can just look at it and make sense of it. >> So it might be fair to say, maybe replay this, and if it's coming out right, most people, and even the focus of IBM's big rollout this week is, people have got their data lakes populated and they're just now beginning to experiment with the advanced analytics. You've got an application where it's already got the advanced analytics baked in to such an extent that the operator doesn't really care or need to know about it. >> So here's the caveat: people have their data lakes populated with the data they know they need to look at. And that's largely line-of-business driven, which is a great area to apply big data machine learning, analytics, that's where the data scientists are employed. That's why what IBM is saying makes sense. When you get to the underlying infrastructure that runs it day to day, the data lakes are not populated. >> Interviewer: Oh, okay. >> They're data puddles. They do not have the content of information, the wealth of information, and so, instead of saying, "hey, let's populate them, and then let's try to think about how to analyze them, and then let's try to think about how to get insights from them, and then let's try to think about, and then and then", how about we just have a product that does it all for you? That just shows you what to do. >> I don't want to pollute my data lake with that information, do I? >> What you want is, you want to take the business feeds that have been analyzed and you want to overlay them, so you want to send those over to probably a much larger lake, which is all the machine data underneath it. Because what you end up with, especially as people move towards more elastic environments, or the hybrid cloud environments, in those environments, if a disk fails or a machine fails it may not matter. Unless you can see an impact on the topline revenue, maybe it's fine to just leave the dead machine there and isolate it. How IT operates in those environments requires knowledge of the business in order to become more efficient.
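A hypothetical sketch of that last idea, deciding whether a failed machine actually matters by checking the business signal that runs on top of it, might look like the following Python. The service map, metric names, and thresholds are invented for illustration; this is not Rocana's implementation, just the shape of the logic.

```python
# Hypothetical sketch: only escalate an infrastructure failure if a business
# signal (here, revenue per minute) on the affected services has degraded.

from typing import Dict, List

def needs_escalation(failed_host: str,
                     services_on_host: Dict[str, List[str]],
                     revenue_per_min: Dict[str, float],
                     baseline_per_min: Dict[str, float],
                     tolerance: float = 0.10) -> bool:
    """True if any service on the failed host dropped more than `tolerance`
    below its revenue baseline; otherwise the dead host can simply be isolated."""
    for service in services_on_host.get(failed_host, []):
        baseline = baseline_per_min.get(service, 0.0)
        current = revenue_per_min.get(service, 0.0)
        if baseline > 0 and current < baseline * (1.0 - tolerance):
            return True
    return False

if __name__ == "__main__":
    services_on_host = {"node-17": ["checkout", "catalog"]}
    baseline = {"checkout": 1200.0, "catalog": 0.0}
    current = {"checkout": 1190.0, "catalog": 0.0}
    # Redundancy absorbed the failure: revenue is flat, so just isolate node-17.
    print("escalate:", needs_escalation("node-17", services_on_host,
                                        current, baseline))
```

In the toy example, redundancy absorbed the failure and revenue held steady, so the dead node is isolated rather than paged on.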
>> You want to link the infrastructure to the value. >> Trajman: Exactly. >> You're taking feeds, essentially, from the business data and that's informing prioritization. >> That's exactly right. So take as an example Point of Sale systems. All the Point of Sale systems today, they're just PCs, they're computers, right? I have to monitor them and the infrastructure to make sure it's up and running. As a side effect, I also know the transactions. As an IT person, I not only know that a system is up, I know that it's generating the same amount of revenue, or a different amount of revenue, than it did last week, or than another system is doing. So I can both isolate a problem as an IT person, right, as an operator, but I can also go to the business and say, "Hey, nothing's wrong with the system, we're not making as much money as we were, why is that", and let's have a conversation about that. So it brings IT into a conversation with the business that they've never been able to have before, using the data they've always had, that they've always had access to. >> Omer, we were talking a little before about how many more companies are starting to move big parts of their workloads into public cloud. But the notion of hybrid cloud, having a hybrid cloud strategy, is still a bit of a squishy term. >> Trajman: Yeah. (laughs) >> Help us fill in, for perhaps those customers who are trying to figure out how to do it, where you add value and make that possible. >> Well, what's happening is the world's actually getting more complex with cloud, it's another place that I can use to cost-effectively balance my workloads. We do see more people moving towards public cloud or setting up private cloud. We don't see anyone wholesale saying "I'm shutting down everything", and "I'm going to send everything to Amazon" or "I'm going to send everything to Microsoft". Even in the public cloud, it's a multi-cloud strategy. And so what you've done is, you've expanded the number of data centers. Maybe I had a half dozen data centers, now I've got a half dozen more in each of these cloud providers. It actually exacerbates the need for being able to do multi-tier monitoring. Let me monitor at full fidelity, full scale, everything that's happening in each piece of my infrastructure, aggregate the key parts of that, forward them onto something central so I can see everything that's going on in one place, but also be able to dive into the details. And that hybrid model keeps you from clogging up the pipes, it keeps you from information overload, but now you need it more than ever. >> To what extent does that actually allow you, not just to monitor, but to remediate? >> The sooner you notice that there's an issue, the sooner you can address that issue. The sooner you see how that issue impacts other systems, the more likely you are to identify the common root cause. An example is a customer that we worked with, prior to Rocana, who had spent an entire weekend isolating an issue. It was a ticket that had gotten escalated, they found the root cause, it was a core system, and they looked at it and said, "Well, if that core system was actually the root cause, these other four systems should have also had issues". They went back into the ticketing system, and sure enough, there were tickets that just didn't get escalated. Had they seen all of those issues at the same time, had they been able to quickly spin the cube view of everything, they would have found it significantly faster.
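As a hypothetical illustration of the cross-check in that story, if the core system really were the root cause then its dependents should also be unhealthy, here is a short Python sketch that scores candidate root causes by how many of their known dependents are also raising issues. The dependency map and service names are invented; it is not Rocana's implementation.

```python
# Hypothetical sketch: rank candidate root causes by how many of their known
# dependents are also raising issues, the cross-check described in the story.

from typing import Dict, List, Set, Tuple

def rank_root_causes(dependents: Dict[str, List[str]],
                     affected: Set[str]) -> List[Tuple[str, float]]:
    """Return (candidate, fraction of its dependents affected), best first."""
    scores = []
    for candidate, downstream in dependents.items():
        if candidate not in affected or not downstream:
            continue
        hits = sum(1 for svc in downstream if svc in affected)
        scores.append((candidate, hits / len(downstream)))
    return sorted(scores, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    dependents = {
        "core-db": ["billing", "orders", "search", "profiles"],
        "billing": ["invoices"],
    }
    affected = {"core-db", "billing", "orders", "search", "profiles"}
    print(rank_root_causes(dependents, affected))  # core-db scores 1.0
```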
They would have drawn that commonality and seen the relationships much more quickly. It requires having all the data in the same place. >> Part of the actionable information is to help triage the tickets, in a sense; that's the connection to remediation. >> Trajman: Context is everything. >> Okay. >> So how's it going? Rocana's kind of a heavy lift. (Trajman laughs) You're going after some pretty entrenched businesses that have been used to doing things a certain way. How's business? How you guys doing? >> Business is, it's amazing, I mean, the need is so severe. We had a prospective customer we were talking to who's just starting to think about this digital transformation initiative and what they needed from an operational visibility perspective. We connected them with an existing customer that had rolled out a system, and the new prospect looked at the existing customer, called us up and said, "That," (laughs) "that's what we want, right there". Everyone's got to have centralized log analytics, total operational visibility; people are recognizing these are necessary to support where the business has to go, and businesses are now realizing they have to digitize everything. They have to have the same kind of experience that Amazon and Google and Facebook and everyone else has. Consumers have come to expect it. This is what is required from IT in order to support it, and so we're actually getting... You say it's a heavy lift, we're getting pulled by the market. I don't think we've had a conversation where someone hasn't said, "I need that". That's what we're going through today, that is my number one pain. >> That's good. Heavy lifts are good if you've got the stomach for it. >> Trajman: That's what I do. >> If you've got a tailwind, that's fantastic. It sounds like things are going well. Omer, congratulations on the success, we really appreciate you sharing it with our Cube audience. >> Thank you very much, thanks for having me. >> You're welcome. Keep it right there everybody. We'll be back with our next guest, this is the Cube, we're live, day four from NYC. Be right back.