Nancy Gohring, 451 Research | Sumo Logic Illuminate 2019
>> Narrator: From Burlingame, California, it's theCUBE, covering Sumo Logic Illuminate 2019! Brought to you by Sumo Logic. >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at the Sumo Logic Illuminate 2019 event. It's at the Hyatt Regency San Francisco Airport, about eight hundred, nine hundred people, our second year. It's the third year of the event. Excited to be here and watch it grow. We've seen a bunch of these things grow from little to big over a number of years and it's always fun to kind of be here for the zenith. We're excited to be joined by our next guest, she's an analyst. It's Nancy Gohring, Senior Analyst for 451 Research. Nancy, great to see you. >> Thank you, thanks for having me. >> Absolutely. So first off, just kind of impressions of the event here. >> Yeah, good stuff, you know? Definitely trying to, you know, get on top of some of the big trends, you know, the big news here was their new Kubernetes monitoring tool. So obviously kind of staying on the leading edge of the cloud-native technologies. >> It's amazing how fast it's growing, you know. Doing some research for this event, I found some of your stuff out on the internet, and just one quote, I think it's from years ago, but just for people to kind of understand the scale, I think you said Google was launching four billion containers a week, Twitter had twelve thousand services, Uber four thousand microservices, Yelp ingesting twenty-five million data points per minute, and I think this is a two or three year old presentation, I mean, the scale in which the data is moving is astronomical. >> Yeah, well if you think of Google launching four billion containers every week, they're collecting a number of different data points about a container spinning up, about the operation of that container while it's alive, about the container spinning down. So it's not even just four billion pieces of data, it's, you know, multiply that by ten or twenty or many more. So yeah, so the volume of operations data that people are faced with is just, you know, out of this world. And some of that is beginning to get abstracted away in terms of what you need to look at, so, you know, Kubernetes is an orchestration engine, so that's helping move things around. You still need to collect that data to inform automation tools, right, so even if humans aren't really looking at it, it's being used to drive automation. >> Right. >> It still has to be collected. >> Right. And there's still configurations and settings and dials, and it seems like a lot of the breaches that we hear about today are just people misconfiguring something on AWS. >> Yeah, it's human error. >> It's human error. And so how do we kind of square the circle, 'cause the data's only growing: the quantity, the sources, the complexity, the lack of structure. And that's before we add IoT, and now we have edge devices and they're all reporting in from home. >> Yeah. >> Crazy problems. >> It's really, I think, driving a lot of the investments and the focus in more sophisticated analytics, right, so that's why you're hearing a lot more about machine learning and AI in this space. It's because humans can't just look at that huge volume of data and figure out what it means. So, the development of machine learning tools, for instance, is going to pull out a piece of data that's important. Like, here's the anomaly, this is the thing you should be paying attention to.
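To make that last point concrete, here is a minimal Python sketch of the kind of baseline-and-deviation check an analytics pipeline might run over a high-volume metric stream so that only the anomalies reach a human. It is illustrative only: the rolling z-score approach, the window size, the threshold, and the latency samples are all assumptions for this example, not how Sumo Logic or any particular vendor actually implements anomaly detection.

```python
from collections import deque
from math import sqrt


class RollingAnomalyDetector:
    """Toy stand-in for the 'surface the anomaly' step: flag samples that
    deviate sharply from a rolling baseline instead of showing every point."""

    def __init__(self, window: int = 120, threshold: float = 3.0, min_samples: int = 5):
        self.window = deque(maxlen=window)  # recent samples, e.g. per-minute latencies
        self.threshold = threshold          # how many std devs counts as anomalous
        self.min_samples = min_samples      # baseline needed before judging anything

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous against the rolling baseline."""
        anomalous = False
        if len(self.window) >= self.min_samples:
            mean = sum(self.window) / len(self.window)
            std = sqrt(sum((x - mean) ** 2 for x in self.window) / len(self.window))
            if std > 0 and abs(value - mean) > self.threshold * std:
                anomalous = True
        self.window.append(value)
        return anomalous


# Hypothetical usage: feed per-request latency samples, alert only on the outlier.
detector = RollingAnomalyDetector()
for sample_ms in [101, 98, 103, 99, 97, 102, 100, 98, 450]:  # 450 ms spike
    if detector.observe(sample_ms):
        print(f"anomaly: {sample_ms} ms")  # fires on the 450 ms sample
```

Real platforms layer far more sophisticated models on top of richer data, but the shape is the same: maintain a baseline, compare each new point against it, and surface only the outliers.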
And then obviously getting increasingly sophisticated, right, in terms of correlating data from different parts of your infrastructure in order to make sense of it. >> Right. And then, oh, by the way, they're all made up of microservices that are all interconnected, and APIs to third-party providers. >> Yeah. >> I mean the complexity is ridiculous. >> Yeah, and then, you know, and I've been actually thinking and talking a lot recently about organizational issues within companies that exacerbate some of these challenges. So you mentioned microservices. So, a lot of times, you know, you've got DevOps groups and an individual DevOps group is responsible for a, or multiple, microservices, right. They're all running sort of autonomous. They're doing their own thing, right, so that they can move quickly. But is there anybody overseeing the application that's made up of maybe a thousand microservices? And in some cases the answer is "no". And so it may look like all the microservices are operating well, but the user experience actually is not good. And no one really notices until the user starts complaining. So, it's like things start, you know, you have to think about organizational things. Who's responsible for that, right? If you're on a DevOps team and your job has been to support these certain services and not the whole, like, who's responsible for the whole application? >> Right. >> And that's, it's a challenge. It's something, actually, in our surveys, we're hearing from people that they're looking for people, that skill set, someone who understands how to look at microservices as they work together to deliver a service, right, it's a pain point. >> Shouldn't the project, or the product manager for that application, hopefully have some visibility into kind of what they're trying to optimize for? >> In some cases they're not technical enough, right, a product manager doesn't necessarily have the depth to know that. Or they're not used to using tools that the DevOps team or the operations team would use to track the performance of an application. >> Right. >> So sometimes it's just a matter of having the right tooling in front of them. >> And then even the performance. It's like, what are you optimizing for? Are you optimizing for security? Are you optimizing for speed? Are you optimizing for... >> Experience... >> You can't optimize for everything. You've got to stack rank order at some point in time, so that would also then drive a different prioritization or the way that you look at those microservices' performance. >> Yeah, yeah. >> Interesting. So another big topic that comes up often is the vision of a single pane of glass. And, you know, I can't help but think, as in my work day, how often I'm tabbing between, you know, Salesforce, and email, and Slack, and Asana, and a couple of browsers are open. I mean, it's bananas, you know. It's no longer just that email is the only thing that's open on my desk all day. >> Yeah. >> And then you can only imagine the DevOps world, where we saw just crazy complexity around, again, managing all the microservices, the APIs, so what kinds of, sort of, what are you seeing in kind of the development of that? And there's so many vendors now, and so many services. >> Yeah. >> It's not just, we're just going to put in HP OpenView and that's the standard and that's what we're all on.
So if you're looking at it from the lens of monitoring or observability or performance, traditionally you had different tools that looked at, say, different layers of a service. So you had a tool that was looking at infrastructure - it was your infrastructure monitoring tool. You had an application performance monitoring tool. You might have a network performance monitoring tool. You might have point tools that are looking just at the database layer. But as things get more complicated, as applications are getting much more complex, looking at that data in a silo tool tends to obscure the bigger picture. You don't understand when you're looking at the separate tools how some piece of infrastructure might be impacting the application, for instance. And so, the idea is to bring all of that operations data about the performance of an application into one spot where you can run, again, these more sophisticated analytics so that you can understand the relationship between the different layers of the application stack, also horizontally, right, so, how microservices that are dependent on each other, how one microservice might be impacting the performance of another, so that's conceptually the idea behind having a single pane of glass. Now the execution can happen in a bunch of different ways. So you can have one vendor, there are vendors that are growing horizontally, so they're collecting data across the stack. There's other vendors that are positioning themselves as that sort of central data repository. So they may not directly collect all of that data, but they might ingest some data that another monitoring vendor has collected. So, there's, and, you know, there's always going to be good arguments for best-of-breed tools, right, so, you know, in most cases, businesses are not going to settle on just one monitoring tool that does it all. But that's conceptually the reason, right, is you want to bring all of this data together however you get it, however it's being collected, so that you can analyze it and understand that "big picture" performance of a complicated application. >> Right. But then, even then, as you said, you don't even want to, you're not really monitoring the application performance per se, you're just waiting for the, you're waiting for some of those needles to fall out of the haystack, 'cause you just, you just can't. There's so much stuff. And you know, it's where do you focus your priority. You know, what's most critical, what needs attention now. >> (Nancy) Yeah. >> And if, without a machine to help kind of, point you in the right direction, you're going to have a hard time finding that needle. >> Yeah, and there's a lot of different approaches that are beginning to develop. And one is this idea of SLOs, or Service Level Objectives. And so, for instance, a really common Service Level Objective that teams are looking at is latency. So, the latency of the service should never drop under, whatever, a hundred milliseconds, and if it does, I want to be alerted. And also, if it drops below that objective for a certain amount of time, that can actually help you as a team allocate resources. So, if you're not living up to that Service Level Objective, maybe you should shift some people's time to working on improving the application instead of developing a new feature. Right? >> (Jeff) Right. >> So it can really help you prioritize your time because you know what?
There was a time, people in operations teams, or DevOps teams, had a really hard time, and they still do, figuring out which problems are important. 'Cause you've always, people always have a lot of performance problems going on. So which do you focus your time on? And it's been pretty opaque. It's hard to see, is this performance impacting the bottom line in my business? Is this impacting, you know, my customers? Are we losing business over this? Like, that's, that's a really common question that people can't answer. >> Right. >> So, yeah, people are beginning to develop these approaches to try to figure out how to prioritize work on performance problems. >> It's interesting 'cause the other one that you've mentioned before, kind of this post incident review instead of a post mortem, and you know, you talked about culture, and "words matter". >> (Nancy) Yeah. >> And I think that's a really interesting take because it's, it implies, we're going to learn, and we're going to go forward as opposed to "it's dead". >> (Nancy) Yeah. >> And, you know, we're going to yell at each other, and someone's going to get blamed... >> (Nancy) That's exactly it... >> And we're going to move on. So, you know, how has that kind of evolved and how does that really help organizations do a better job? >> There's, I mean, there's much more of a focus on setting aside time to do that kind of analysis, right? So look at how we're performing as a team. Look at how we responded to an incident so that you can find ways that you can do better next time. And some of that is real tactical, right, it's tweaking alerts. Did we not get an alert? You know, did we not even know this problem was happening? So maybe you build new alerts or get rid of a bunch of alerts that did nothing. You know, there's a lot you can learn and again, to your point, I think part of the reason people have started calling it a post incident review instead of a post mortem is because, yeah, you don't want that to be a session where people are feeling blamed, you know, this is my fault, I screwed up, I spent way too long on this, or I hadn't set things up properly. It's meant to be productive. >> Right. >> Let's find the weak points and fill them. Right? Fill those gaps. >> It's funny you had another, there was another thing I found, you were talking about not, not necessarily the post mortem but, you know, people being much more pro-active, much more, you know, thoughtful as to how they are going to take care of these things. And it is really more of a social, cultural change than necessarily the technical piece. That culture piece is so, so important. >> It is, and especially, you know, right now there's a lot of focus on tooling and that can cause some, you know, interesting issues. So you know, especially in an organization that has really adopted DevOps practices like, the idea of a DevOps team is that it's very autonomous. They do what they need to do, right, to move fast and to get the job done and that often includes choosing your own tools. But that has created a number of problems, especially in monitoring. So if you have a hundred DevOps teams and they all have chosen their own monitoring tools, like, this is not efficient. So it's not a good idea because those tools aren't talking to each other, even though their microservices are dependent on each other. It's inefficient from a business perspective. You've got all these relationships with vendors and in some cases with a single vendor.
You might have fifty instances of the same monitoring tool that, you know, you have fifty accounts with them. Like that's just totally inefficient. And then you've got people on a DevOps, an individual, all the individual DevOps teams have a person who's supposed to be the resident expert in these tools, like, maybe you should share that knowledge across... But my point is you get into this situation where you have hundreds of monitoring tools. Sometimes forty, fifty monitoring tools. You realize that's a problem. How do you address that problem? 'Cause you're going to have to go out and tell people you can't use this tool that you love, that helps you do your job, that you chose. So again this whole cultural question comes up. Like, how do you manage that transition in a way that's going to be productive? >> The other one that you brought up that was interesting was where the support team basically tells the business team you only have X number of incidents, we're going to give you a budget. (laughs) >> Yeah. >> If you exceed the budget we're not going to help you. It's a really different way to think about prioritization... >> Yeah, I don't necessarily think that's a great approach. I mean there was somebody who did that but like... >> But I think it's kind of, it's kind of an interesting thing. And you talked about it in that, I think it was one of your presentations or speeches, where, you know, it makes you kind of re-think, you know, why do we have so many incidents? >> Yeah. >> And there shouldn't be that many incidents. And maybe some of the responsibility should be shifted to think about the why, and the how, and is it more of a systemic problem than a feature problem, or a bug, or... >> Right. >> A piece of broken code, so again I think there's so many, kind of, cultural opportunities to re-think this, in this world of continuous development, continuous publishing, continuous pushing out of new code. >> Yeah, yeah. For sure. (laughs) >> Alright Nancy, well thanks for taking a few minutes and it was really great to talk to you. >> Thanks for having me. >> Alright, she's Nancy, I'm Jeff. You're watching theCUBE, where it's Sumo Logic Illuminate 2019. Thanks for watching. We'll see you next time. (electronic music)
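As a footnote to the service level objective discussion above, here is a bare-bones Python sketch of how a team might track a latency SLO and the error budget that falls out of it. The 100 millisecond objective echoes the example from the conversation; the function name, the target ratio, and the sample latencies are invented for illustration and are not tied to any vendor's product.

```python
def slo_report(latencies_ms, objective_ms=100.0, target_ratio=0.999):
    """Summarize how a service tracked against a latency SLO over a window.

    latencies_ms  -- observed request latencies in the evaluation window
    objective_ms  -- per-request objective, e.g. "under 100 milliseconds"
    target_ratio  -- fraction of requests that must meet the objective
    """
    total = len(latencies_ms)
    good = sum(1 for latency in latencies_ms if latency <= objective_ms)
    bad = total - good
    allowed_bad = total * (1 - target_ratio)  # the error budget, in requests
    return {
        "compliance": good / total if total else 1.0,
        "error_budget_remaining": allowed_bad - bad,  # negative means the budget is blown
        "meets_slo": bad <= allowed_bad,
    }


# Hypothetical window of seven requests against a 100 ms objective, 90% target:
report = slo_report([42, 87, 95, 180, 60, 101, 77], objective_ms=100.0, target_ratio=0.9)
print(report)  # compliance ~0.71, budget overspent, meets_slo False
```

The budget is the prioritization signal Gohring describes: once it goes negative, that is the cue to shift time from new features to reliability work.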
SUMMARY :
Jeff Frick talks with Nancy Gohring, Senior Analyst at 451 Research, at Sumo Logic Illuminate 2019 at the Hyatt Regency San Francisco Airport. They discuss the explosion of operations data coming from containers and microservices, the growing role of machine learning and AI in surfacing anomalies, and the organizational gaps that open up when autonomous DevOps teams each own their own microservices and monitoring tools. Gohring explains the idea behind a single pane of glass for operations data, how service level objectives such as latency targets help teams prioritize performance work, why post incident reviews are replacing blame-oriented postmortems, and the cultural challenge of consolidating dozens or even hundreds of monitoring tools.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Nancy Gohring | PERSON | 0.99+ |
Nancy | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
twelve thousand services | QUANTITY | 0.99+ |
ten | QUANTITY | 0.99+ |
fifty accounts | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
fifty instances | QUANTITY | 0.99+ |
Burlingame, California | LOCATION | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
one quote | QUANTITY | 0.99+ |
Sumo Logic | ORGANIZATION | 0.99+ |
twenty | QUANTITY | 0.99+ |
second year | QUANTITY | 0.99+ |
Yelp | ORGANIZATION | 0.99+ |
third year | QUANTITY | 0.98+ |
four thousand microservices | QUANTITY | 0.98+ |
HP | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Sumo Logic Illuminate 2019 | EVENT | 0.98+ |
one vendor | QUANTITY | 0.98+ |
451 Research | ORGANIZATION | 0.97+ |
Sumo Logic Illuminate 2019 | TITLE | 0.97+ |
four billion containers | QUANTITY | 0.96+ |
about eight hundred | QUANTITY | 0.96+ |
first | QUANTITY | 0.93+ |
single pane | QUANTITY | 0.93+ |
four billion containers a week | QUANTITY | 0.92+ |
one spot | QUANTITY | 0.91+ |
fifty monitoring tools | QUANTITY | 0.9+ |
hundreds of monitoring tools | QUANTITY | 0.89+ |
four billion pieces of data | QUANTITY | 0.89+ |
DevOps | TITLE | 0.88+ |
DevOps | ORGANIZATION | 0.88+ |
Hyatt Regency San Francisco Airport | LOCATION | 0.87+ |
nine hundred people | QUANTITY | 0.87+ |
forty, | QUANTITY | 0.87+ |
years | DATE | 0.87+ |
single vendor | QUANTITY | 0.86+ |
twenty-five million data points per minute | QUANTITY | 0.85+ |
one monitoring tool | QUANTITY | 0.84+ |
Kubernetes | TITLE | 0.83+ |
three year old | QUANTITY | 0.82+ |
Asana | ORGANIZATION | 0.81+ |
slack | ORGANIZATION | 0.78+ |
hundred milliseconds | QUANTITY | 0.74+ |
theCUBE | ORGANIZATION | 0.73+ |
every week | QUANTITY | 0.64+ |
hundred | QUANTITY | 0.55+ |
thousand | QUANTITY | 0.54+ |
451 | OTHER | 0.53+ |
theCUBE | EVENT | 0.45+ |