Anish Dhar & Ganesh Datta, Cortex | KubeCon + CloudNativeCon Europe 2022


 

>> Narrator: TheCUBE presents KubeCon + CloudNativeCon Europe 2022. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome to Valencia, Spain at KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend and we are in a beautiful locale. The city itself is not that big, 100,000, I mean, sorry, about 800,000 people. And we got out, got to see a little bit of the sights. It is an amazing city. I'm from the US, it's hard to put in context how a city of 800,000 people can be so beautiful. I'm here with Anish Dhar and Ganesh Datta, co-founders of Cortex. Anish, you're CEO of Cortex, and Ganesh, you're CTO. We were having a conversation. One of the things that I ask my clients is, what is good? And you're claiming to answer the question about what is quality when it comes to measuring microservices. What is quality? >> Yeah, I think it really depends on the company, and I think that's really the philosophy we have. When we built Cortex, we understood that different companies have different definitions of quality, but they need to be able to be represented in really objective ways. I think what ends up happening in most engineering organizations is that quality lives in people's heads. The engineers who write the services, they're often the ones who understand all the intricacies of the service. What are the downstream dependencies? Who's on call for this service? Where does the documentation live? All of these things, I think, impact the quality of the service. And as these engineers leave the company or switch teams, they often take that tribal knowledge with them. And so I think quality really comes down to being able to objectively codify your best practices in some way and have that distributed to all engineers in the company. >> And to add to that, I think, as very concrete examples: for an organization that's already modern, their idea of quality might be uptime and incidents.
For somebody that's going through a modernization strategy, they're trying to get to the 21st century, they're trying to get to Kubernetes. For them, quality means: where are we in that journey? Are you on our latest platforms? Are you running CI? Are you doing continuous delivery? Quality can mean a lot of things, and so our perspective is, how do we give you the tools to say, as an organization, here's what quality means to us. >> So at first, my mind was going through, when you said quality, Anish, you started out the conversation about having this kind of non-codified set of measurements, historical knowledge, et cetera. I was thinking observability, measuring how much time it takes to complete a transaction. But Ganesh, you're introducing this new thing. I'm working with this project where we're migrating a monolith application to a set of microservices. And you're telling me Cortex helps me measure the quality of what I'm doing in my project? >> Ganesh: Absolutely. >> How is that? >> Yeah, it's a great question. So when you think about observability, you think about uptime and latency and transactions and throughput and all this stuff. And I think that's very high level, and that's one perspective of what quality is. But as you're going through this journey, you might say the fact that we're tracking that stuff, the fact that you're using APM, you're using distributed tracing, that is one element of service quality. Maybe service quality means you're doing CI/CD, you're running vulnerability scans, you're using Docker. What that means to us can be very different.
So observability is just one aspect of, are you doing things the right way? Good to us means you're using SLOs, you are tracking those metrics, you're reporting that somewhere. And so that's one component, for our organization, of what quality can mean. >> I'm kind of taken aback by this, because I've not seen someone give this idea before. And I think later on, this is the perfect segment to introduce theCUBE clock, in which I'm going to give you a minute to give me the elevator pitch, but we're going to have the deep conversation right now. When you go in and you... what's the first process you do when you engage a customer? Does a customer go and get this off of a repository, install it, the open source version, and then what? I mean, what's the experience? >> Yeah, absolutely. So we have both a SaaS and an on-prem version of Cortex. It's really straightforward. Basically, we have a service discovery onboarding flow where customers can connect to different sets of sources for their services. It could be Kubernetes, ECS, Git repos, APM tools, and then we'll actually automatically map all of that service data with all of the integration data in the company. So we'll take that service and map it to its on-call rotation, to the JIRA tickets that have the service tag associated with it, to the Datadog SLOs. And what that ends up producing is this service catalog that has all the information you need to understand your service. Almost like a single pane of glass to work with the service. And then once you have all of that data inside Cortex, you can start writing scorecards, which grade the quality of those services across those different verticals Ganesh was talking about. Whether it's a monolith-to-microservice transition, whether it's production readiness or security standards, you can really start tracking that. And then engineers start understanding where the areas of risk are with my service, across reliability or security or operational maturity. I think it gives us insane visibility into what's actually being built and the quality of that compared to your standards. >> So, okay, I have a standard for SLOs. That is usually something that might not even be measured.
So how do you help me understand that I'm lacking a measurable system for tracking SLOs, and what's the next step for helping me get that system? >> Yeah, I think our perspective is very much, how do we help you create a culture where developers understand what's expected of them? So if SLOs are part of what we consider observability or reliability, then Cortex's perspective is, hey, we want to help your organization adopt SLOs. And so there's that service cataloging concept: the service catalog says, hey, here's my APM integration. Then in a scorecard, the organization goes in and says, we want every service owner to define their SLOs, we want you to define your thresholds, we want you to be tracking them. Are you passing your SLOs? And so we're not being prescriptive about here's what we think your SLOs should be. Ours is more around, hey, if you care about SLOs, we're going to tell the service owners, hey, you need to have at least two SLOs for your service and you've got to be tracking them. And that data flows from the service catalog into those scorecards. And so we're helping them adopt that mindset of, hey, SLOs are important. It is a component of a holistic service reliability excellence metric that we care about. >> So what happens when I already have systems for, like, SLOs? How do I integrate that system with Cortex? >> That's one of the coolest things. So the service catalog can be pretty smart about it. So let's say you've sucked in your services from your GitHub, and so now your services are in Cortex. What we can do is we can actually discover from your APM tools. We can say, hey, for this service, we have guessed that this is the corresponding APM in Datadog. And so from Datadog, here are your SLOs, here are your monitors. And so we can start mapping all the different parts of your world into Cortex. And that's the power of the service catalog.
The service catalog says, given a service, here's everything about that service. Here's the vulnerability scans, here's the APM, the monitors, the SLOs, the JIRA tickets, like all that stuff comes into a single place. And then our scorecards product can go back out and say, hey, Datadog, tell me about the SLOs for this service. And so we're going to get that information live and then score your services against that. And so we're integrating with all of your third-party tools and integrations to create that single pane of glass. >> Yeah, and to add to that, I think one of the most interesting use cases with scorecards is, okay, which teams have actually adopted SLOs in the first place? I think a lot of companies struggle with, how do we make sure engineers define SLOs, are passing them, and actually care about them? And scorecards can be used to see, one, which teams are actually meeting these guidelines, and then two, let's get those teams adopted on SLOs. Let's track that. You can do all of that in Cortex, which is, I think, a really interesting use case that we've seen. >> So let's talk about kind of my use case in the end-to-end process for integrating Cortex into migrations. So I have this monolithic application, I want to break it into microservices, and then I want to ensure that I'm delivering... you know what, let's leave it a little bit more open ended. How do I know that I'm better at the end? I was in a monolith before; how do I measure, now that I'm in microservices and on cloud native, that I'm better? >> That's a good question. I think it comes down to, and we talk about this all the time with our customers that are going through that process: you can't define better if you don't define a baseline. Like, what does good mean to us? And so you need to start by saying, why are we moving to microservices? Is it because we want teams to move faster? Is it because we care about reliability and uptime? Like, what is the core metric that we're tracking?
And so you start by defining that as an organization. And that is kind of a hand-wavy thing: why are we doing microservices? Once you have that, then you define this scorecard, and that's like our golden path. Once we're done doing this microservice migration, can we say, yes, we have been successful, and those metrics that we care about are being tracked? And so where Cortex fits in is, from the very first step of creating a service, you can use Cortex to define templates. With one click, you go in, it spins up a microservice for you that follows all your best practices. And so from there, ideally you're meeting 80% of your standards already. And then you can use scorecards to track historical progress. So you can say, are we meeting our golden path standards? If it's uptime, you can track uptime metrics in scorecards. If it's around velocity, you can track velocity metrics. Is it just around modernization? Are you doing CI/CD and vulnerability scans, moving faster as a team? You can track that. And so you can start seeing trends at a per-team level, at a per-department level, at a per-product level, saying, hey, we are seeing consistent progress in the metrics that we care about, and this microservice journey is helping us with that. So I think that's the kind of phased progress that we see with Cortex. >> So I'm going to give you kind of a hand-wavy thing. We're told that cloud native helps me do things faster with fewer defects so that I can pursue new opportunities. Let's stretch into kind of this non-tech, this new opportunities perspective. I want to be able to move my architecture to microservices so I reduce call wait time on my customer service calls. So I can easily see how I can measure, are we iterating faster? Are we putting out more updates quicker? That's pretty easy to measure. The number of defects, easy to measure. I can imagine a scorecard. But what about this wait time?
I don't necessarily manage the call center system, but I get the data. How do I measure that the microservice migration was successful from a business process perspective? >> Yeah, that's a good question. I think it comes down to two things. One, the flexibility of scorecards means you can pipe that data into Cortex. And what we recommend to customers is: track the outcome metrics, and track the input metrics as well. And so what is the input metric to call wait time? Maybe it's the fact that if something goes wrong, we have the runbooks to quickly roll back to an older version that we know is running, that way MTTR is faster. Or when something happens, we know the owner for that service and we can go back to them and say, hey, we're going to ping you as an incident commander. Those are kind of the input metrics to: if we do these things, then we know our call wait time is going to drop, because we're able to respond faster to incidents. And so you want to track those input metrics, and then you want to track the output metrics as well. And so if you have those metrics coming in from your Prometheus or your Datadogs or whatever, you can pipe that into Cortex and say, hey, we're going to look at both of these things holistically. So we want to see, is there a correlation between those input metrics, like are we doing things the right way, versus are we seeing the value that we want to come out of that? And so I think that's the value of Cortex. It's not so much around, hey, we're going to be prescriptive about it. It's: here's this framework that will let you track all of that and say, are we doing things the right way, and is it giving us the value that we want? And being able to report that update to engineering leadership and say, hey, maybe these services are not doing well, like we're not improving call wait time. Okay, why is that? Are these services behind on the actual input metrics that we care about? And so being able to see that, I think, is super valuable.
>> Yeah, absolutely. I think, just to touch on the reporting, that's one of the most value-add things Cortex can provide. If you think about it, the service is the atomic unit of your software. It represents everything that's being built, and that bubbles up into teams, products, business units, and Cortex lets you represent that. So now I can, as a CTO, come in and say, hey, these product lines, are they actually meeting our standards? Where are the areas of risk? Where should I be investing more resources? I think Cortex is almost like the best way to get the actual health of your engineering organization. >> All right, Anish and Ganesh, we're going to go into the speed round here. >> Ganesh: It's time for the Q clock? >> Time for the Q clock. Start the Q clock. (upbeat music) Let's go on. >> Ganesh: Let's do it. >> Anish: Let's do it. >> Let's go on. You're 10 seconds in. >> Oh, we can start talking. Okay, well, I would say, Anish was just touching on this. For a CTO, their question is, how do I know if engineering quality is good? And they don't care about the microservice level. They care about, as a business, is my engineering team actually producing? >> Keith: Follow the green, not the dream. (Ganesh laughs) >> And so the question is, well, how do we codify service quality? We don't want this to be a hand-wavy thing that says, oh, my team is good, my team is bad. We want to come in and define, here's what service quality means. And we want that to be a number. You want that to be something that you can- >> A goal without a timeline is just a dream. >> And a CTO comes in and they say, here's what we care about, here's how we're tracking it, here are the teams that are doing well. We're going to reward the winners. We're going to move towards a world where every single team is doing service quality. And that's what Cortex can provide. We can give you that visibility that you never had before. >> For that, five seconds.
>> And hey, your SRE can't be the one handling all this. So let Cortex- >> Shoot the bad guy. >> Shot that, we're done. From Valencia, Spain, I'm Keith Townsend, and you're watching theCUBE, the leader in high tech coverage. (soft music)
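The scorecard mechanism Anish and Ganesh describe, codifying quality rules and grading every service against them, can be sketched in a few lines. This is a hypothetical illustration under invented names and rules; it is not Cortex's actual API or rule syntax.

```python
# Hypothetical sketch of a "scorecard": codify service-quality rules
# as checks over service metadata, then grade each service.
# All names, fields, and point values are invented for illustration;
# this is not Cortex's actual API or rule language.

from dataclasses import dataclass


@dataclass
class Service:
    name: str
    has_oncall: bool = False   # mapped from an on-call rotation tool
    slo_count: int = 0         # SLOs discovered from an APM integration
    uses_ci: bool = False      # CI/CD detected from the repo
    vuln_scans: bool = False   # vulnerability scanning enabled


# Each rule: (description, predicate, points earned when passing).
RELIABILITY_SCORECARD = [
    ("has an on-call rotation",   lambda s: s.has_oncall,     25),
    ("defines at least two SLOs", lambda s: s.slo_count >= 2, 25),
    ("runs CI/CD",                lambda s: s.uses_ci,        25),
    ("runs vulnerability scans",  lambda s: s.vuln_scans,     25),
]


def grade(service, scorecard):
    """Return (score, failed-rule descriptions) for one service."""
    earned = sum(pts for _, check, pts in scorecard if check(service))
    failed = [desc for desc, check, _ in scorecard if not check(service)]
    return earned, failed


if __name__ == "__main__":
    svc = Service("payments", has_oncall=True, slo_count=1, uses_ci=True)
    score, gaps = grade(svc, RELIABILITY_SCORECARD)
    print(f"{svc.name}: {score}/100, gaps: {gaps}")
```

Grading every service this way is what makes the "which teams have adopted SLOs" reporting possible: the failed-rule list per service rolls up into a per-team or per-product view of where the gaps are.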

Published Date: May 20, 2022


Cloud & Hybrid IT Analytics: 1 on 1 with Sudip Datta, CA Technologies


 

>> Okay welcome back everyone to our special live presentation for cloud and IT analytics for the hybrid cloud. I'm John Furrier, your host. We just had an interview with Peter Burris, keynote presenter. Our second one-on-one conversation is with our second keynote, Sudip Datta, the Vice President of Product Management for CA Technologies. Sudip, great to see you. Great keynote. >> Good to see you. Thank you. >> A lot of information in your keynote so folks can check it out online and on demand, but I wanted to ask you, you mentioned evolving infrastructure, so it's the first thing that you kind of set the table with. What do you mean by that? >> Sure. So first of all, as I mentioned in my keynote, the infrastructure today is intimately connected with business operations and the user experience, right? So how is the infrastructure evolving and catering to this ongoing demand of the app economy? Before we get there, let's define what infrastructure means to CA, right? Infrastructure is servers, storage, network. Could be running on prem, could be running on public cloud, right? So let's look at what's happening on each layer, right. In the server layer, we are seeing bi-directional, somewhat antithetical movement, right? One on the consolidation side of things and the other on expansion to multiple clouds, right? On the consolidation side of things, of course there are VMs, and now we see more and more containers getting adopted. I was looking at a survey: the container growth between 2016 and 2017 is more than 40%. We are also hearing about serverless compute, stateless compute, and so on and so forth. So that's on the server side of things, right? Storage, we are hearing about object storage. Network is getting more and more abstracted with software defined networking, right? Another survey projected that between 2014 and 2020, the SDN market is anticipated to grow at a CAGR of 53%, and that's huge. Huge.
So the infrastructure is evolving, getting more dynamic, getting more abstract, right? And therefore there are challenges to monitoring and management. >> And you're seeing growth in Kubernetes just to throw a cherry on top of that conversation because that's orchestrating the apps which require programmable infrastructure. >> Absolutely. >> I want to just make a comment, I was just talking with Peter Burris and I want to highlight one of your pieces of your keynote that you mentioned that there's four pillars of modern analytics and monitoring and Peter and I were talking about the digital business requirement for a modern infrastructure and I was kind of teasing it out, I want to see where he wanted to go with it, I kind of put him on the spot, but I was saying hey, data's been a department, analytics has kind of been a department, but now it's kind of holistic. He kind of slapped me around, said "no no, it's still going to be a department." Although technically right, I was trying to say there's a bigger picture. >> Sudip: Sure. >> This is kind of a mindset shift. People are are re-imagining their analytics as a strategic asset just like data's becoming a strategic asset. My question is, if you don't monitor it, how do you even understand it? So you need these four pillars, and they are policy based configuration that's dynamic, unified monitoring, contextual intelligence, and collaboration integration. With the trend of the true private cloud report, you're starting to see the shift in labor from non-differentiated to differentiated. And those kind of four pillars as kind of a breeding ground for innovation. Are they connecting, do you see that connecting into this new IT role? >> Absolutely. As you rightly pointed out, the non-differentiated labor is being replaced by automation, by machine learning, by scripts, whatever it is. It's whole-scale automation. So that itself lends to the fact that there is a different shade of labor which is the value-added labor. 
So how does labor create value? And that's related to the four pillars that we talked about. How to manage these dynamic environments and glean data out of these environments to provide valuable insights and intelligence. We talked about contextual intelligence. So when it comes to contextual intelligence, IT can be intimately involved with the business to provide the IT context to the business or the business context to IT, and vice versa, and add value to the business. Giving a specific example... In prior times, IT used to be reactive. When business runs a campaign, they run out of capacity and they say we need to add servers, and they roll in a server, and so on and so forth. Now, of course, the automation side of server provisioning has been taken care of. There are a lot of APIs out there, there is Amazon CloudFormation and all that, but you still need a policy that is going to proactively detect, perform a what-if analysis that if there is a 2x ramp in business, there is going to be corresponding pressure on infrastructure, and act proactively. That way, I can get to be the friend of business. It's not really acting after the fact, but acting proactively. >> I was talking with Umar Kahn, one of your colleagues yesterday. We talked about cars. I love Teslas 'cause it's a great example of innovation and you got old cars and you got Teslas. Really we're seeing kind of a move in IT where modern looks like the Tesla of IT, where things are just different but work much better. So I got to ask you a question, Tesla's a great cool car, there's a lot of hype and buzz around it, but it's still got to drive, right? It's still got to be great. So you mentioned faults, fault detection and machine learning in your presentation, but IT ops still needs to run. And you got IoT edge that Peter pointed out that needs to be figured out.
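Datta's proactive what-if example (a 2x business ramp creating corresponding pressure on infrastructure) can be sketched as a small capacity projection. The sizing formula, the 70% target utilization, and the numbers are illustrative assumptions, not CA's actual policy engine:

```python
# What-if capacity sketch: project utilization under a traffic scenario and
# size the server pool to stay under a target utilization. Values are made up.
import math

def servers_needed(current_servers, utilization, traffic_multiplier,
                   target_utilization=0.7):
    """Return the pool size needed to keep projected load under target."""
    projected_load = current_servers * utilization * traffic_multiplier
    return max(current_servers, math.ceil(projected_load / target_utilization))

# Scenario: 10 servers at 50% utilization, and the business expects a 2x ramp.
print(servers_needed(10, 0.50, 2.0))  # projected load of 10 servers' worth -> 15
```

Running a check like this before the campaign launches is what turns IT from reactive ("we ran out of capacity") into the proactive "friend of business" described above.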
So you got to figure out these new things and you got to run stuff, so you need the fault detection with the machine learning, but you still got to be cool. Like the Tesla of IT. How are you guys becoming the Tesla of IT? >> Absolutely. I will touch upon a few points. First of all, as I mentioned right at the beginning, data is important, but we focused on the three Vs of data, which is velocity, volume and variety. But there is also the veracity of data, and CA has been in the business of monitoring, capturing this data from various systems, from mobiles to mainframes, right? For the last few decades, right? So we have the true data, we are collecting the data, and now we are building a data analytics platform on which the data will be ingested and we will give insights. So that's going to be a big differentiator. The other is, we have all the tools from application management to infrastructure management tools, net ops tools, and we are connecting all of them to cover the entire digital chain. The reason is important, and I will highlight only one particular aspect of it. Network, the most neglected component in the infrastructure-- >> And the most important. Everyone complains about the network the most. >> Most important. Even when a kid plays a video game, it's an app. Most of us tend to forget that it's an app, and the most important element in that app is the network. And we are in the business of network management, so we are not only server and storage and app, we are also tying network management into this overall analytics platform. And within network management, it's tacit management, flow management. These are all important things because in today's world, if your network betrays you, then your user experience-- >> So I got to ask you, the products are in the company. And this is kind of important because most people who think about monitoring analytics would have kind of a different view based on what they're instrumenting.
You're kind of talking about network and apps. You're kind of looking at the big picture. Are you tying that together? >> Absolutely. >> Can you explain how? >> Absolutely. So CA has been a market leader in application performance management and in network management and in server and cloud management. So we are tying all this together, the whole digital chain. As I said, we are ingesting all the data into an open standards, open source-based analytics platform, and we are collating the data so you can see what are the network elements that preceded before a server got choked, or before the application became inaccessible. We can tie it all together, all the units together, and perform assisted create and root cause analysis. >> Well I wanted to put you on the spot today because we are live, so I got to ask you as the VP of Product Management, what's your favorite product? Do you have a favorite child? (laughing) >> I mean, all of them are my favorite. >> There it is! Of course you can't pick a favorite, everyone's watching. >> Yeah, so yeah. >> As a parent you can't pick a favorite child. They're all good in their own way, right? >> They're all kind of horses for courses. Really, they do fabulous things. At the same time, we don't want the proliferation of tools. We are trying to rationalize tools like the net ops, the cloud ops and application performance management, and tie them all together into our analytics platform. You can say analytics is my favorite word today because that's the new kid on the block, but as I said, all of them are very very important. >> Well I always say, whoever could be the Tesla for IT is going to win it all. So with that, serious question, as VP of Product Management, I do want to ask a serious question around that. What's your North Star? When you talk to your product teams, they're specking out products, they're talking to customers, and the engineers are building it out. What is the North Star?
What is the ethos of CA these days? 'Cause you guys are pushing the envelope while maintaining that install base of customers. What is the North Star? What is the ethos? What are the guiding principles for CA Technologies? >> Absolutely. Customers, customers and customers, right? And the reason being, and I will give you... Of course, the user experience matters, but there is also an empirical reason. We are a market leader in the MSP space, for example, the MSP and adjacent space, and not only do we care about our customers but the customers of our customers as well. MSPs like ONE-NET and Bespin Global use our monitoring tools for managing their customers. So our allegiance goes all the way to our customers and their customers. So that's a guiding principle. But at the same time, we try to innovate beyond what our customers have been asking for. That's where the intuitive integration between application performance management, infrastructure management, and network management comes in. And we want to be absolutely a leader in this end-to-end management. >> We talk with our Wikibon team all the time, and Peter and I talk with Dave Vellante all the time about how important IT operations are going to be right now, because all the market research shows, Peter mentioned it, private cloud, true private cloud, hybrid cloud, massive growth area. Lot of opportunities for ops to really deliver value because of the dev-ops momentum, because of things like containers and Kubernetes, the programmable infrastructure has to be there. So I got to ask you the question, from a customer standpoint, and folks watching. What's the most important thing that your customers need to know when they start to re-think the architecture and ultimately make that 10 to 20 year investment in this new modern IT operations with CA? >> Sure. The first thing is, and I will re-visit the four pillars, right?
The dynamic, discovery-based, policy-based management is very very important, because discovery, a lot of times we neglect discovery because it's always there. But the thing is, that's the starting point. That's the cradle where the overall monitoring takes birth. So that's the first point. The second is, bring everything into, if not a single, then minimal panes of glass. Maybe net ops has a tool and cloud ops has a tool and of course you have a tool for application performance management. So those are the building blocks of monitoring. And then, overlay it with contextual intelligence and analytics. As I said, we are ingesting all the data, not only from CA tools but, using open APIs, from other tools into our analytics framework, and provide contextual intelligence. And last but not least, collaboration and integration. We are integrating with frameworks such as Slack to provide collaboration between dev ops and IT ops, between storage admins and server admins, and so on and so forth, right? So those are the building blocks. So if you are thinking about what you are going to do in a 10-year timeframe, first of all, hybrid cloud is a reality. So for managing the entire spectrum of hybrid cloud, you need a tool that's unified, that can do dynamic policy-based management, that can provide intelligence, and that can encourage collaboration. >> Sudip, thank you so much for sharing this one-on-one conversation. For the folks watching, there's a great slide that outlines that operational intelligence. It was beautiful eye candy, it's like an architecture slide, I was geekin' out on it. Check it out in the keynote on demand. Sudip, thank you so much for sharing your insight here on the future of modern analytics and monitoring strategies. This is a special presentation. One-on-one drill-down with keynote presenter Sudip Datta, who is the Vice President of Product Management, part of the cloud and IT analytics digital business.
We'll be right back with more one-on-one interviews after this short break.

Published Date : Aug 22 2017


The Road to Autonomous Database Management: How Domo is Delivering SLAs for Less


 

hello everybody and thank you for joining us today at the virtual Vertica BBC 2020 today's breakout session is entitled the road to autonomous database management how Domo is delivering SLA for less my name is su LeClair I'm the director of marketing at Vertica and I'll be your host for this webinar joining me is Ben white senior database engineer at Domo but before we begin I want to encourage you to submit questions or comments during the virtual session you don't have to wait just type your question or comment in the question box below the slides and click Submit there will be a Q&A session at the end of the presentation we'll answer as many questions as we're able to during that time any questions that we aren't able to address or drew our best to answer them offline alternatively you can visit vertical forums to post your questions there after the session our engineering team is planning to join the forum to keep the conversation going also as a reminder you can maximize your screen by clicking the double arrow button in the lower right corner of the slide and yes this virtual session is being recorded and will be available to view on demand this week we'll send you notification as soon as it's ready now let's get started then over to you greetings everyone and welcome to our virtual Vertica Big Data conference 2020 had we been in Boston the song you would have heard playing in the intro would have been Boogie Nights by heatwaves if you've never heard of it it's a great song to fully appreciate that song the way I do you have to believe that I am a genuine database whisperer then you have to picture me at 3 a.m. on my laptop tailing a vertical log getting myself all psyched up now as cool as they may sound 3 a.m. boogie nights are not sustainable they don't scale in fact today's discussion is really all about how Domo engineers the end of 3 a.m. 
boogie nights again well I am Ben white senior database engineer at Domo and as we heard the topic today the road to autonomous database management how Domo is delivering SLA for less the title is a mouthful in retrospect I probably could have come up with something snazzy er but it is I think honest for me the most honest word in that title is Road when I hear that word it evokes for me thoughts of the journey and how important it is to just enjoy it when you truly embrace the journey often you look up and wonder how did we get here where are we and of course what's next right now I don't intend to come across this too deep so I'll submit there's nothing particularly prescient and simply noticing the elephant in the room when it comes to database economy my opinion is then merely and perhaps more accurately my observation the office context imagine a place where thousands and thousands of users submit millions of ad-hoc queries every hour now imagine someone promised all these users that we could deliver bi leverage at cloud scale in record time I know what many of you should be thinking who in the world would do such a thing of course that news was well received and after the cheers from executives and business analysts everywhere and chance of Keep Calm and query on finally started to subside someone that turns an ass that's possible we can do that right except this is no imaginary place this is a very real challenge we face the demo through imaginative engineering demo continues to redefine what's possible the beautiful minds at Domo truly embrace the database engineering paradigm that one size does not fit all that little philosophical nugget is one I would pick up while reading the white papers and books of some guy named stone breaker so to understand how I and by extension Domo came to truly value analytic database administration look no further than that philosophy and what embracing it would mean it meant really that while others were engineering 
skyscrapers we would endeavor to build Datta neighborhoods with a diverse kapala G of database configuration this is where our journey at Domo really gets under way without any purposeful intent to define our destination not necessarily thinking about database as a service or anything like that we had planned this ecosystem of clusters capable of efficiently performing varied workloads we achieve this with custom configurations for node count resource pool configuration parameters etc but it also meant concerning ourselves with the unattended consequences of our ambition the impact of increased DDL activities on the catalog system overhead in general what would be the management requirements of an ever-evolving infrastructure we would be introducing multiple points of failure what are the advantages the disadvantages those types of discussions and considerations really help to define what would be the basic characteristics of our system the database itself needed to be trivial redundant potentially ephemeral customizable and above all scalable and we'll get more into that later with this knowledge of what we were getting into automation would have to be an integral part of development one might even say automation will become the first point of interest on our journey now using popular DevOps tools like saltstack terraform ServiceNow everything would be automated I mean it discluded everything from larger multi-step tasks like database designs database cluster creation and reboots to smaller routine tasks like license updates move-out and projection refreshes all of this cool automation certainly made it easier for us to respond to problems within the ecosystem these methods alone still if our database administration reactionary and reacting to an unpredictable stream of slow query complaints is not a good way to manage a database in fact that's exactly how three a.m. 
Boogie Nights happen and again I understand there was a certain appeal to them but ultimately managing that level of instability is not sustainable earlier I mentioned an elephant in the room which brings us to the second point of interest on our road to autonomy analytics more specifically analytic database administration why our analytics so important not just in this case but generally speaking I mean we have a whole conference set up to discuss it domo itself is self-service analytics the answer is curiosity analytics is the method in which we feed the insatiable human curiosity and that really is the impetus for analytic database administration analytics is also the part of the road I like to think of as a bridge the bridge if you will from automation to autonomy and with that in mind I say to you my fellow engineers developers administrators that as conductors of the symphony of data we call analytics we have proven to be capable producers of analytic capacity you take pride in that and rightfully so the challenge now is to become more conscientious consumers in some way shape or form many of you already employ some level of analytics to inform your decisions far too often we are using data that would be categorized as nagging perhaps you're monitoring slow queries in the management console better still maybe you consult the workflows analyzing how about a logging and alerting system like sumo logic if you're lucky you do have demo where you monitor and alert on query metrics like this all examples of analytics that help inform our decisions being a Domo the incorporation of analytics into database administration is very organic in other words pretty much company mandated as a company that provides BI leverage a cloud scale it makes sense that we would want to use our own product could be better at the business of doma adoption of stretches across the entire company and everyone uses demo to deliver insights into the hands of the people that need it when they 
need it most so it should come as no surprise that we have from the very beginning use our own product to make informed decisions as it relates to the application back engine in engineering we call it our internal system demo for Domo Domo for Domo in its current iteration uses a rules-based engine with elements through machine learning to identify and eliminate conditions that cause slow query performance pulling data from a number of sources including our own we could identify all sorts of issues like global query performance actual query count success rate for instance as a function of query count and of course environment timeout errors this was a foundation right this recognition that we should be using analytics to be better conductors of curiosity these types of real-time alerts were a legitimate step in the right direction for the engineering team though we saw ourselves in an interesting position as far as demo for demo we started exploring the dynamics of using the platform to not only monitor an alert of course but to also triage and remediate just how much economy could we give the application what were the pros and cons of that Trust is a big part of that equation trust in the decision-making process trust that we can mitigate any negative impacts and Trust in the very data itself still much of the data comes from systems that interacted directly and in some cases in directly with the database by its very nature much of the data was past tense and limited you know things that had already happened without any reference or correlation to the condition the mayor to those events fortunately the vertical platform holds a tremendous amount of information about the transaction it had performed its configurations the characteristics of its objects like tables projections containers resource pools etc this treasure trove of metadata is collected in the vertical system tables and the appropriately named data collector tables as a version 9 3 there are over 190 
tables that define the system tables while the data collector is the collection of 215 components a rich collection can be found in the vertical system tables these tables provide a robust stable set of views that let you monitor information about your system resources background processes workload and performance allowing you to more efficiently profile diagnose and correlate historical data such as low streams query profiles to pool mover operations and more here you see a simple query to retrieve the names and descriptions of the system tables and an example of some of the tables you'll find the system tables are divided into two schemas the catalog schema contains information about persistent objects and the monitor schema tracks transient system States most of the tables you find there can be grouped into the following areas system information system resources background processes and workload and performance the Vertica data collector extends system table functionality by gathering and retaining aggregating information about your database collecting the data collector mixes information available in system table a moment ago I show you how you get a list of the system tables in their description but here we see how to get that information for the data collector tables with data from the data collecting tables in the system tables we now have enough data to analyze that we would describe as conditional or leading data that will allow us to be proactive in our system management this is a big deal for Domo and particularly Domo for demo because from here we took the critical next step where we analyze this data for conditions we know or suspect lead to poor performance and then we can suggest the recommended remediation really for the first time we were using conditional data to be proactive in a database management in record time we track many of the same conditions the Vertica support analyzes via scrutinize like tables with too many production or non partition 
fact tables which can negatively affect query performance and life in vertical in viral suggests if the table has a data a time step column you recommend the partitioning by the month we also can track catalog sizes percentage of total memory and alert thresholds and trigger remediations requests per hour is a very important metric in determining when a trigger are scaling solution tracking memory usage over time allows us to adjust resource pool parameters to achieve the optimal performance for the workload of course the workload analyzer is a great example of analytic database administration I mean from here one can easily see the logical next step where we were able to execute these recommendations manually or automatically be of some configuration parameter now when I started preparing for this discussion this slide made a lot of sense as far as the logical next iteration for the workload analyzing now I left it in because together with the next slide it really illustrates how firmly Vertica has its finger on the pulse of the database engineering community in 10 that OS management console tada we have the updated work lies will load analyzer we've added a column to show tuning commands the management console allows the user to select to run certain recommendations currently tuning commands that are louder and alive statistics but you can see where this is going for us using Domo with our vertical connector we were able to then pull the metadata from all of our clusters we constantly analyze that data for any number of known conditions we build these recommendations into script that we can then execute immediately the actions or we can save it to a later time for manual execution and as you would expect those actions are triggered by thresholds that we can set from the moment nyan mode was released to beta our team began working on a serviceable auto-scaling solution the elastic nature of AI mode separated store that compute clearly lent itself to our ecosystems 
requirement for scalability. In building our system, we worked hard to overcome many of the obstacles that came with the more rigid architecture of Enterprise mode, but with the introduction of Eon mode we now have a practical way of giving our ecosystem at Domo the architectural elasticity our model requires. Using analytics, we can now scale our environment to match demand. What we've built is a system that scales without adding management overhead or unnecessary cost, all the while maintaining optimal performance.

Well, really, this is just our journey up to now, which begs the question: what's next for us? We expand the use of Domo for Domo within our own application stack. Maybe more importantly, we continue to build logic into the tools we have by bringing machine learning and artificial intelligence to our analysis and decision making. To further illustrate those priorities, we announced support for Amazon SageMaker Autopilot at our Domopalooza conference just a couple of weeks ago. For Vertica, the future must include in-database autonomy, and the enhanced capabilities in the new management console, to me, are a clear nod to that future. In fact, with a streamlined and lightweight database design process, all the pieces should be in place for Vertica to deliver autonomous database management itself. We'll see. Well, I would like to thank you for listening, and now of course we will have a Q&A session, hopefully a very robust one. Thank you. [Applause]
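The threshold-driven scaling the talk describes, using requests per hour to decide when to grow or shrink an Eon mode cluster, can be pictured in a few lines. This is an illustrative sketch only: the metric names, thresholds, and node limits are hypothetical, not Domo's or Vertica's actual implementation.

```python
# Hypothetical threshold-based scaling decision, in the spirit of the talk.
# Thresholds and node limits are invented for illustration.

def scaling_action(requests_per_hour, node_count,
                   scale_out_at=50_000, scale_in_at=10_000,
                   min_nodes=3, max_nodes=12):
    """Return 'scale_out', 'scale_in', or 'hold' for the current hourly load."""
    per_node = requests_per_hour / node_count
    if per_node > scale_out_at and node_count < max_nodes:
        return "scale_out"
    if per_node < scale_in_at and node_count > min_nodes:
        return "scale_in"
    return "hold"

print(scaling_action(300_000, 4))  # heavy load on 4 nodes -> scale_out
print(scaling_action(20_000, 6))   # light load on 6 nodes -> scale_in
```

The same pattern generalizes to the other metrics mentioned, such as catalog size as a percentage of total memory, by swapping in a different measurement and threshold.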

Published Date : Mar 31 2020

SUMMARY :

conductors of the symphony of data we

SENTIMENT ANALYSIS :

ENTITIES

Entity                             Category       Confidence
Boston                             LOCATION       0.99+
Vertica                            ORGANIZATION   0.99+
thousands                          QUANTITY       0.99+
Domo                               ORGANIZATION   0.99+
3 a.m.                             DATE           0.99+
Amazon                             ORGANIZATION   0.99+
today                              DATE           0.99+
first time                         QUANTITY       0.98+
this week                          DATE           0.97+
over 190 tables                    QUANTITY       0.97+
two schemas                        QUANTITY       0.96+
second point                       QUANTITY       0.96+
215 components                     QUANTITY       0.96+
first point                        QUANTITY       0.96+
three a.m.                         DATE           0.96+
Boogie Nights                      TITLE          0.96+
millions of ad-hoc queries         QUANTITY       0.94+
Domo                               TITLE          0.93+
Vertica Big Data conference 2020   EVENT          0.93+
Ben white                          PERSON         0.93+
10                                 QUANTITY       0.91+
thousands of users                 QUANTITY       0.9+
one size                           QUANTITY       0.89+
saltstack                          TITLE          0.88+
4/2                                DATE           0.86+
a couple of weeks ago              DATE           0.84+
Datta                              ORGANIZATION   0.82+
end of 3 a.m.                      DATE           0.8+
Boogie Nights                      EVENT          0.78+
double arrow                       QUANTITY       0.78+
every hour                         QUANTITY       0.74+
ServiceNow                         TITLE          0.72+
DevOps                             TITLE          0.72+
Database Management                TITLE          0.69+
su LeClair                         PERSON         0.68+
many questions                     QUANTITY       0.63+
SLA                                TITLE          0.62+
The Road                           TITLE          0.58+
Vertica BBC                        ORGANIZATION   0.56+
2020                               EVENT          0.55+
database management                TITLE          0.52+
Domo Domo                          TITLE          0.46+
version 9 3                        OTHER          0.44+

Stefanie Chiras, Ph.D., Red Hat | AnsibleFest 2019


 

>> Live from Atlanta, Georgia, it's theCUBE, covering AnsibleFest 2019. Brought to you by Red Hat. >> Welcome back, everyone, to theCUBE's live coverage of AnsibleFest here in Atlanta, Georgia. I'm John Furrier with my cohost Stu Miniman. We're here with Stephanie Chiras, the vice president and general manager of the RHEL business unit at Red Hat. Great to see you. >> Nice to see you, too. >> You've had quite a career: IBM, and now back, back in the fold. >> Yeah. >> So last time we chatted, at Red Hat Summit: RHEL 8. How's it going? What's the update? >> Yeah, so we launched RHEL 8 at Summit. It was a huge opportunity for us to sort of show it off to the world. A couple of key things we really wanted to do there was make sure that we showed off the Red Hat portfolio. It wasn't just a product launch, it was really a portfolio launch. Feedback so far on RHEL 8 has been great. We have a lot of adopters on there early. It's still pretty early days; when you think about it, it's been a little over four and a half months. So, um, still early days, but the feedback has been good. You know, it's actually interesting when you run a subscription-based software model, because customers can choose to go to RHEL 8 when they need those features, and when they assess those features they can pick and choose how they go. But we have a lot of folks who have areas of RHEL 8 where they're testing the feature function. >> I saw a tweet you had on your Twitter feed: 28 years old, still growing up, still cool. >> Yeah. >> I mean, 28 years old. The world's an adult now. >> No, Linux is running the enterprise now, and now it's about how do you bring new innovation in. When we launched RHEL 8, we focused really on two sectors. One was, how do we help you run your business more efficiently? And then, how do we help you grow your business with innovation?
One of the key things we did, which is probably the one that stuck with me the most, was we actually partnered with the Red Hat management organization and pulled the capability of what's called Insights into the product itself. So all RHEL subscriptions, 6, 7, and 8, all include Insights, which is a rules-based engine built upon the data that we have from, you know, over 15 years of helping customers run large-scale Linux deployments. And we leverage that data in order to bring that directly to customers. And that's been huge for us. And it's not only that, it's a first step into getting into Ansible. >> I want to get your thoughts on, we're here at AnsibleFest, day one of our two-day coverage. Red Hat announced the Ansible Automation Platform; obviously, that's the news. Why is this show so important in your mind? I mean, you've seen the history of the industry. There's a lot of technology change happening in the modern enterprise. Now, as things become modernized, both public sector and commercial, what's the most important thing happening? Why is this AnsibleFest so important this year? >> To me, it comes down to, I'd say, kind of two key things. Management and automation are becoming one of the key decision points that we see in our customers, and that's really driven by: they need to be efficient with what they have running today, and they need to be able to scale and grow into innovation. So management and automation is a core, critical decision point. I think the other aspect is, you know, Linux started out 28 years ago proving to the world how open source development drives innovation. And that's what you see here at AnsibleFest. This is the community coming together to drive innovation: super modular, able to provide impact right from everything from how you run your legacy systems to how you bring security to it, to how you bring new applications and deploy them in a safe and consistent way. It spans the whole gamut.
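The rules-based engine Stephanie describes, rules distilled from years of operational data and evaluated against each system's metadata to produce recommendations, can be sketched minimally as below. The rule names, system fields, and recommendation strings are invented for illustration; this is not the actual Red Hat Insights implementation or API.

```python
# Toy rules engine in the spirit of the Insights description above.
# Each rule is a predicate over a system's metadata plus a recommendation.
# All names and fields here are hypothetical.

RULES = [
    {"name": "kernel-vulnerable",
     "applies": lambda sys: sys["kernel"] < "3.10.0-900",
     "recommendation": "Apply kernel update and run the remediation playbook"},
    {"name": "upgrade-readiness",
     "applies": lambda sys: sys["os"] == "rhel7",
     "recommendation": "System is eligible for an in-place upgrade to RHEL 8"},
]

def evaluate(system, rules=RULES):
    """Return the recommendations whose rule matches this system."""
    return [r["recommendation"] for r in rules if r["applies"](system)]

host = {"name": "app01", "os": "rhel7", "kernel": "3.10.0-514"}
for rec in evaluate(host):
    print(rec)
```

The value of the approach in the interview is exactly this shape: as more systems link in, more data feeds back, and the rule set grows without each customer having to rediscover the same issues.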
>> So, Stephanie, you know, there's so much change going on in the industry. You talked about what's happening with RHEL; I actually saw a couple of hello world T-shirts, which were given out at Summit in Boston this year. Maybe help tie together how Ansible fits into this. How does it help customers, you know, take advantage of the latest technology and move their companies along to be able to take advantage of some of the new features? >> Yeah, and so I really believe, of course, in open hybrid cloud, which is our vision of where people want to go. You need Linux, so Linux sits at the foundation. But to really deploy it in a reasonable way, in a safe way, in an efficient way, you need management and automation. So we've started on this journey. When we launched, we announced at Summit that we brought in Insights, and that was our first step. Since then we've seen incredible uptick. So, um, since we launched, we've seen an 87% increase since May in the number of systems that are linked in, we're seeing a 33% increase in coverage of rules, and a 152% increase in customers who are using it. What that does is it creates a community of people using and getting value from it, but also giving value back, because the more data we have, the better the rules get. So one interesting thing: at the end of May, the engineering team worked with all the customers that currently have Insights linked in, and they did a scan for Spectre and Meltdown, which, of course, everyone knows about in the industry. With the customers who had systems hooked up, they found 176,000 customer systems that were vulnerable to Spectre and Meltdown. What we did was, we had an Ansible playbook that could remediate that problem, and we proactively alerted those customers. So now you start to see problems get identified with something like Insights. Now you bring in Ansible and Ansible Tower, and you can effectively decide: so, I want to remediate. I can remediate automatically.
I can schedule that remediation for what's best for my company. So, you know, we've tied these three things together, kind of in a stepwise function. In fact, if you have a RHEL subscription, you've hooked up to Insights; if Insights finds an issue, there's a fix, and with Ansible it creates a playbook. Now I can use that playbook in Ansible Tower, so it really ties nicely through the whole portfolio to be able to do everything seamlessly. >> It also creates collaboration, too. These playbooks can be portable, move across the organization, do it once. That's the automation piece of that. >> Yeah, absolutely. So now we're seeing automation, how do you look at it across multiple teams within an organization? So you could have a Tower admin be able to set rules and boundaries for teams, and I can have a RHEL IT operations person be able to create playbooks for the security protocols. How do I set up a system? Being able to do things repeatedly and consistently brings a whole lot of value in security and efficiency. >> One of the powers of Ansible is that it can live in a heterogeneous environment. You've got your Windows environment; you know, I've talked to VMware customers that are using it, and of course in cloud. Help us understand kind of the realm, you know, why RHEL plus Ansible is, you know, an optimal solution for customers in those heterogeneous environments. And I would love, I heard a little bit in the keynote about kind of the roadmap, where it's going. Maybe you can talk about where those fit together. >> Yeah, perfect. And I think your comment about a heterogeneous world is key. That is the way we live, and folks will have to live in a heterogeneous world as far as the eye can see. And I think that's part of the value, right, to bring choice. When you look at what we do with RHEL, because of the close collaboration we have between my team and the team
in the management BU around Insights, our engineering team is actively building rules so we can bring added value, in the sense that we have our Red Hat engineers who build RHEL creating rules to mitigate things and to help things with migration and RHEL 8 adoption. We put in-place upgrades, of course, in the product, but there's also a whole set of rules, curated and supported by Red Hat, that help you upgrade to RHEL 8 from a prior version. So it's the tight engineering collaboration that we can bring. But to your point, you know, we want to make sure that Ansible and Ansible Tower and the rules that are set up bring added value to RHEL and make that simple. But it does have to be in a heterogeneous world; I'm going to live with neighbors in any data center, of course.
Tying the two things together so kind of cohesive. But decoupled. I see how that works. What kind of analytical cables are you guys serving up today and what's coming around the corner because environments are changing. Hybrid and multi cloud are part of what everyone's talking about. Take care of the on premises. First, take care of the public cloud. Now, hybrids now on operating model has to look the same. This is a key thing. What kind of new capabilities of analytics do you see? >>Yes, that's it. So let me step you through that a little bit because because your point is exactly right. Our goal is to provide a single experience that can be on Prem or off Prem and provides value across both, as as you choose to deploy. So insights, which is the analytics engine that we use built upon our data. You can have that on Prem with. Well, you can have it off from with well, in the public cloud. So where we have data coming in from customers who are running well on the public cloud, so that provides a single view. So if you if you see a security vulnerability, you can skin your entire environment, Which is great. Um, I mentioned earlier. The more people we have participating, the more value comes so new rules are being created. So as a subscription model, you get more value as you go. And you can see the automation analytics that was announced today as part of the platform. So that brings analytics capabilities to, you know, first to be able to see what who's running what, how much value they're getting out of analytics, that the presentation by J. P. Morgan Chase was really compelling to see the value that automation is delivering to them. For a company to be ableto look at that in a dashboard with analytics automation, that's huge value, they can decide. Do we need to leverage it here more? Do we need to bring it value value here? Now you combine those two together, right? It's it, And being informed is the best. >>I want to get your reaction way Make common. 
in our opening segment around the J.P. Morgan comment, you know, hours to minutes, days to minutes, depending on the configurations. Automation is a wonderful thing. We're pro-automation; as you know, we think it's going to be a huge category, but we took a survey inside our community. We asked our practitioners and community members about automation, and they came back with the following; I want to get your reaction. Four major benefits: automation-focused efforts allow for better results; efficiency, and security is a key driver in all this, you mentioned that; automation drives job satisfaction; and then finally, the infrastructure and DevOps folks are getting re-skilled up the stack as the software abstraction moves up. Those are the four main points of why automation is impacting the enterprise. Do you agree with that? Can you comment on some of those points? >> No, I do, I agree. I think skills is one thing that we've seen over and over again. Skills is key. We see it in Linux: we have to help bridge Windows skills into Linux skills. I think automation that helps with skills development helps not only individuals but the company. I think the second piece that you mentioned, about job satisfaction: at the end of the day, all of us want to have impact, and when you can leverage automation for one individual to have impact that is much broader than they could have before with manual tasks, that's just huge. >> You know, Stu and I were talking also about one of the keynote keywords that kept coming up, and that was scale. Scale is driving a lot of change in the industry at many levels. Certainly, software automation drives more value when you have scale, because you're scaling more stuff than you can manually configure. At scale, software certainly is going to be a big part of that.
But the role of the cloud providers, the big cloud providers, the IBMs, the Amazons, all the big enterprises like Microsoft, they're handling massive scale. So there's a huge change in the open source community around how to deal with scale. This is a big topic of conversation. What are your thoughts on this? Any general opinions on how scale is changing the open source equation? Is it more towards platforms, less tools, vice versa? Any trends you see? >> I think it's interesting, because when I think of scale, I think of both volume, right, or quantity, as the hyperscalers do, but I think it's also about complexity. I think the public clouds have great volume that they have to deal with in numbers of systems, but they have the ability to customize, leveraging development teams and leveraging open source software. They can customize all the way down to the servers and the processor chips. As we know, for most folks, right, they scale, but when they scale across on-prem and off-prem, it's adding complexity for them. And I think automation has value both in solving volume issues around scale, and in complexity issues around scale. So even, you know, mid-size businesses, if they want to leverage on-prem and off-prem, to them that's complexity at scale. And I think automation has a huge amount of value to >> bring, that abstracts away the complexity. Automation is absolutely prized for job satisfaction but also for the benefits of efficiency. >> Absolutely. And ultimately, the greatest value of efficiency is that now there's more time to bring in innovation, right? >> So, Stephanie, last thing I'm wondering: what feedback are you hearing from customers? You know, one of the things that struck me, when we're talking about the J.P. Morgan example, is they made great progress, but he said they had about a year of working with security, the cyber control groups, to help get them through that knothole of allowing them to really deploy automation.
So, you know, usually with something like Ansible, you think, oh, I can get a team, let me get it going. But, oh, wait, no, hold on: corporate needs to make its way through it. Is that something you hear generally? Is that a large enterprise thing? What are you hearing from customers? >> I think we see it more and more, and it came up in the discussions today. The technical aspect is one aspect; the sort of cultural ability to pull it in is a whole separate aspect. And you'd think, for all of us who are engineers, that the technology is the tough bit, but actually the culture bit is just as hard. One thing that I see over and over again is that the way companies are structured has a big impact. The more siloed the teams are, do they have a way to communicate? Fixing that matters, so that when you bring in automation, it has the ability to drive more ubiquitous value across the organization. But if you're not structured to leverage that, it's really hard. If your IT ops guys don't talk to the application folks, bringing that value is very hard. So I think it's kind of going along in parallel, right? The technical capability is one aspect; how you get your organizational structure to reap the benefits is another aspect, and it's a journey. That's really what I see from folks: it is a journey. And, um, I think it's inspiring to see the stories here when people come back and talk about it. But to me, the greatest thing about it is: just start, right? Just start wherever you are, and our goal is to try and help provide on-ramps for folks wherever they are in their journey.
I want to get your expert opinion on this because you have been in the industry of some of the different experiences. The cloud one Datta was the era of compute storage startups started Airbnb start all these companies examples of cloud scale. But now, as we start to get into the impact to businesses in the enterprise with hybrid multi cloud, there's a cloud. 2.0 equation again mentioned Observe Ability was just network management at White Space. Small category. Which company going public? It's important now kind of subsystem of cloud 2.0, automation seems to feel the same way we believe. What's your definition of cloud to point of cloud? One daughter was simply stand up some storage and compete. Use the public cloud and cloud to point is enterprise. What does that mean to you? What? How would you describe cloud to point? >>So my view is Cloud one Dato was all about capability. Cloud to Dato is all about experience, and that is bringing a whole do way that we look at every product in the stack, right? It has to be a seamless, simple experience, and that's where automation and management comes in in spades. Because all of that stuff you needed incapability having it be secure, having it be reliable, resilient. All of that still has to be there. But now you now you need the experience or to me, it's all about the experience and how you pull that together. And that's why we're hoping. You know, I'm thrilled here to be a danceable fast cause. The more I can work with the teams that are doing answerable and insights and the management aspect in the automation, it'll make the rail experience better >>than people think it's. Software drives it all. Absolutely. Adam, Thanks for sharing your insights on the case. Appreciate you coming back on and great to see you. >>Great to be here. Good to see >>you. Coverage here in Atlanta. I'm John for Stupid Men Cube coverage here and answerable Fest Maur coverage. After the short break, we'll be right back. >>Um

Published Date : Sep 24 2019

SUMMARY :

Stephanie Chiras discusses the RHEL 8 launch, how Insights and Ansible tie together for proactive detection and scheduled remediation, the role of collections, analytics across on-prem and public cloud, and why management and automation are key decision points for hybrid environments.

SENTIMENT ANALYSIS :

ENTITIES

Entity                             Category       Confidence
Amazon                             ORGANIZATION   0.99+
IBM                                ORGANIZATION   0.99+
Microsoft                          ORGANIZATION   0.99+
Red Hat                            ORGANIZATION   0.99+
Stephanie                          PERSON         0.99+
Atlanta                            LOCATION       0.99+
two minutes                        QUANTITY       0.99+
Stefanie Chiras                    PERSON         0.99+
Stew                               PERSON         0.99+
Adam                               PERSON         0.99+
John                               PERSON         0.99+
One                                QUANTITY       0.99+
100                                QUANTITY       0.99+
33%                                QUANTITY       0.99+
Airbnb                             ORGANIZATION   0.99+
two day                            QUANTITY       0.99+
152%                               QUANTITY       0.99+
Atlanta, Georgia                   LOCATION       0.99+
J. P. Morgan                       ORGANIZATION   0.99+
Lennox                             ORGANIZATION   0.99+
Lenox                              ORGANIZATION   0.99+
87%                                QUANTITY       0.99+
two sectors                        QUANTITY       0.99+
Keith                              PERSON         0.99+
Eight                              QUANTITY       0.99+
eight                              QUANTITY       0.99+
First                              QUANTITY       0.99+
red hat                            ORGANIZATION   0.99+
two                                QUANTITY       0.99+
Boston                             LOCATION       0.99+
first step                         QUANTITY       0.99+
May                                DATE           0.99+
two things                         QUANTITY       0.99+
over 15 years                      QUANTITY       0.99+
both                               QUANTITY       0.99+
Redhead Management Organization    ORGANIZATION   0.98+
three year                         QUANTITY       0.98+
today                              DATE           0.98+
2nd 2nd piece                      QUANTITY       0.98+
one aspect                         QUANTITY       0.98+
28 years ago                       DATE           0.98+
2019                               DATE           0.98+
this year                          DATE           0.98+
Datta                              ORGANIZATION   0.97+
one thing                          QUANTITY       0.97+
One spot                           QUANTITY       0.97+
one                                QUANTITY       0.97+
Lincoln                            PERSON         0.97+
Four                               QUANTITY       0.97+
S A P.                             ORGANIZATION   0.97+
over 445 months                    QUANTITY       0.97+
76,000 customer                    QUANTITY       0.96+
Prem                               ORGANIZATION   0.96+
ASAP                               ORGANIZATION   0.95+
J. P. Morgan Chase                 ORGANIZATION   0.95+
four main points                   QUANTITY       0.94+
Atlanta.                           LOCATION       0.94+
first                              QUANTITY       0.93+
Hana                               ORGANIZATION   0.92+
Specter                            TITLE          0.92+
single experience                  QUANTITY       0.92+
28 years old                       QUANTITY       0.91+
One thing                          QUANTITY       0.9+
J. P. Morgan                       ORGANIZATION   0.89+
one interesting thing              QUANTITY       0.89+
one individual toe                 QUANTITY       0.89+
end of May                         DATE           0.89+
single view                        QUANTITY       0.89+
two key things                     QUANTITY       0.88+
Terrell                            ORGANIZATION   0.88+
678                                OTHER          0.87+

Hybrid IT Analytics, Cars, User Stories & CA UIM: Interview with Umair Khan


 

>> Welcome back, everyone. We are here live in our Palo Alto studios with theCUBE. I'm John Furrier, the host of today's special digital event, Hybrid Cloud and IT Analytics for Digital Business. This is our one-on-one segment with Umair Khan, principal product marketing manager at CA Technologies, where we get to do a drill-down. He's got a special product, UIM; we're going to talk about unified management. Umair, great to see you. Nice shirt, looking good, same as mine. I got the cufflinks. >> I know, we think alike and have the same shirt. >> Got the cloud cufflinks. >> You got to get me one of those. (laughs) >> Good to see you. >> Good to see you. >> Hey, I want to just drill down. We had the two keynote presenters: Peter Burris, with the research perspective, and then, kind of where you guys tie in, your VP of Product Management, Sudip Datta. An interesting connection: Peter laid out the future of digital business, and it matches perfectly with the story of CA, which is interesting. More importantly, it's got to be easy, though. How are you guys doing that? I want to drill down to your product, UIM, unified infrastructure management. What is it? What's making it so easy? >> So, like you said, it's unified infrastructure management. It's a single product to monitor your cloud, your on-prem, your traditional, and your entire stack, be it the compute layer, storage layer, or application services layer. It's a single product to monitor it all, so a) you get a single view to resolve problems, and b) at the back end, people tend to underestimate the time it takes to configure different tools, right? Imagine a different tool for cloud, a different tool for the second public cloud that you use, I'm not going to name vendors, the traditional environment you have, or maybe one silo group using hybrid infrastructure, right? So configuring those, managing those, it's tough.
And having a single console to deploy monitoring configuration and at the same time monitor that infrastructure makes it easy. >> You and I were talking yesterday, before we came here, when we were doing a dry run, about cars. >> Yeah. >> And we were talking about how the Tesla is so cool compared to an older car, because it's got everything in there. It's got analytics, it's got data, but it's a car; the whole purpose is to drive. It has nothing to do with IT, yet it's got a ton of IT analytics in it. How is business related to that? Because you could almost say that the single pane of glass is analytics; it's almost like Tesla for the business. The business is the car. How do you view that? Because you have an interesting perspective, and I want to get your take on it. >> Absolutely. So I've seen a lot of people giving examples as well, but I think the car of today is a great example of how monitoring should be, right? Cars, yes, it's still about the look and feel and the brand, but when you're sitting in the car now, you expect a unified view. You want a blind spot detector, you want a collision detector, everything there. Even your fuel gauge: it shouldn't just tell you how much fuel is left, it should tell you how much mileage is left, right? Everything is becoming more intelligent. And, you know, Peter talked about the importance of experience in the digital business, so the IT team needs that visibility, that end-to-end unified view, just like in a modern car, to avoid any blind spots and resolve issues faster. At the same time, it has to be more proactive and predictive in nature. So that collision detection, which all the car companies these days have a commercial on, safety features, collision detection, it's the same with IT: they need the ability to use intelligent monitoring tools to resolve issues before the customer experience suffers. And one of our customers says, if someone opened a service desk ticket, that means everyone knows about the issue.
I need to be resolving that issue before the service desk ticket is issued, right? >> You don't see Tesla opening up issues: "Hey, you're on the freeway, slow down." But this is important. I mean, Tesla was disruptive because they didn't just build a car and say "bolt on analytics." They took a holistic, proactive view of the car experience with technology and analytics in mind, to bring that tech to the table. That's similar to the message that we heard from Peter and Sudip about analytics. It's not just a thing you bolt on anymore; you've got to think about the outcome of what you're trying to do. >> Exactly. >> That really is the key. And how does unified infrastructure management do that? >> So it's all about unifying all the different pieces. Today's digital businesses are adopting a lot of technologies, and every developer has their own stack. As an IT ops person, you don't want to be the one who says "you cannot adopt this cloud" or "you cannot adopt this technology." You should be flexible enough that, whatever stack they have, you can monitor that infrastructure for them and get yourself a unified view to resolve issues faster, while at the same time providing your dev teams the flexibility of choosing the stack they like. >> A lot of IT ops guys are impacted and energized, quite frankly, by the future that's upon us with all these opportunities, but the reality of maintaining uptime is key for ops, as is enabling new things like IoT. The question for you is: who is most impacted in the enterprise organization, or in IT operations, by your modern analytics products and vision? >> So I think there are two groups, right? One is the traditional VP of IT infrastructure or IT operations. He has a lot of concerns: his infrastructure is becoming more and more dynamic, more complex, clouds are being adopted, and the business is talking about experience, right?
So he needs a modern approach to get that end-to-end picture, make sure there are no blame games happening between different groups, and resolve issues proactively. And at the same time, his tools and his analytics approach need to support modern infrastructures, right? If the business wants to adopt cloud-based technologies, he, or she, needs to be able to provide that monitoring and cover that approach as well. >> Is there one that pops out, that you see growing faster in terms of the persona within IT? Because we hear Sudip talk about network, and we all want the network to go faster. I mean, you can't go to Levi's Stadium or any kind of place without people complaining about the wifi. My kids are like, "Dad, the network's too slow." But in IT, the network's critical, though only up to the app, so it's a bigger picture than that. Is there one persona that's rising up that really hones in on this message of a holistic view of modern analytics? >> I think roles are changing overall in IT, right? The system admin is becoming the cloud admin, or the DevOps guy, so I think it's getting more and more collaborative. Roles will be redefined, re-engineered a bit, to meet the needs of modern technologies, modern companies, and so on. And we're also seeing the rise of the site reliability engineer, right? Because he's more concerned about reliability versus individual components. >> Okay, what does UIM stand for, and how does it impact the overall stack? >> So UIM is our unified infrastructure management product, as I mentioned before. It's the most comprehensive solution on the market if you look at technology support, from your public and private cloud-based infrastructures like Amazon and Azure, or your hyperconverged.
You can also call them private cloud, like Nutanix and the VMware stack, or your traditional IT as well, from your (mutters) environments or from your Cisco environments, Cisco UCS, or anything. So it really gives you that comprehensive solution set, and at the same time it provides an open architecture: if you want to monitor some technology that we don't provide support for, it allows you to monitor that. And again, because of that, people are able to resolve issues faster, they're able to improve mean time to repair, and at the same time, I'll reemphasize the configuration part, right? Imagine you have multiple tools for each silo; then you need to configure each of them. In a dev ops world, you have to release applications faster, but you cannot deploy an application without configuring the monitoring for it, right? But if the infrastructure monitoring guys are taking three or four days to configure monitoring, then the entire concept of dev ops falls apart. So that's where UIM helps too. It really helps ops deploy configurations a lot faster through out-of-the-box templates in a unified approach across hybrid stacks. >> And developers want infrastructure as code, that's clear as day, and now they want great analytics. Okay, so I've got to ask you about the use cases. I've got to drill down on use cases, specifically, for the folks watching, whether they're a CA customer from the past, one now, or not yet a customer. Where are you winning? Where is CA actually winning right now? How would you talk about the specific use cases where it's a perfect fit, where you've got a beachhead, and where you can go? >> No, I think the places we typically win really well is as companies become more hybrid. If they're starting up in cloud-based infrastructures, they all of a sudden realize that the monitoring approach for traditional infrastructure is really not built for cloud.
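The template-driven configuration described above (one out-of-the-box policy rolled out across a hybrid stack instead of days of per-silo setup) can be pictured in a few lines of Python. This is a hedged sketch: the template fields and the `apply_template` function are invented for illustration and are not UIM's actual API:

```python
# Hypothetical monitoring template: define once, apply to every matching host.
TEMPLATE = {
    "metrics": ["cpu", "memory", "disk"],
    "interval_seconds": 60,
    "alert_thresholds": {"cpu": 90, "memory": 85, "disk": 80},
}

def apply_template(hosts, template):
    """Expand one template into a per-host monitoring configuration."""
    return {host: dict(template, target=host) for host in hosts}

# One template covers AWS, Azure, and on-prem Cisco UCS hosts alike,
# instead of configuring a separate tool for each silo.
configs = apply_template(["aws-web-1", "azure-db-1", "ucs-app-1"], TEMPLATE)
assert len(configs) == 3
assert configs["ucs-app-1"]["target"] == "ucs-app-1"
```

The point of the design is that adding a new host is one list entry, not a multi-day configuration exercise.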
The more technology they adopt (mutters), you started with cloud and you want to adopt containers, and you start adding these monitoring tools. All of a sudden you realize this approach cannot work: I'm creating more silos, I don't get the internal visibility, and these infrastructures are more dynamic, going up and down all the time. I need a modern tool, a modern approach. So typically, when you have hybrid infrastructures, we typically win there. And I think of a large insurance company as well, where when we initially started working with them, they had a lot of different tools that they worked with-- >> I think we actually have a slide for this. Can you pull that up on the screen here, the slide? Before you get to the insurance company, I want to get the graphic up. There it is. So we had the global 500 company, go ahead, continue.
So interesting how those are the top three. Kind of speaks to the conversation Peter Burris and I had. Lot of data (laughs), okay, multiple stack issues, so you're talking about a holistic view. What's the importance of these top three trends? >> I think a lot of companies miss out when they only monitor a silo, right? Even when I talk about our unified product, it's unified infrastructure. Even within infrastructure, there's so many components. You have to unify them, and that's the UIM work. But as Sudip mentioned, we have one of the biggest portfolio in the market. We're not only good at unified infrastructure, but also the network that connects that infrastructure to the application, and the application itself, right? The mobile application, the user experience of it, and the code-level visibility that you need. So as the survey mentioned, one of the biggest issues that companies have is they want to aggregate this data from app, network, and infrastructure. And at CA we are uniquely positioned because we have products in all three areas. I think typically no vendor covers all three areas and we're tying these together with more contextual analytics, which includes log which we released a while back, and I love to give the example of logs as well, right? People even monitor logs in a silo. But the value of using log together with performance is performance tells you a system is slow, okay, but logs tell you why. So it's using context together with your performance across app, infra, and network, really helps you solve these problems. >> Well, the Internet of Things and the car example we use also takes advantage of potential log data because data exhaust could be sitting around, but with realtime it could be very relevant. Okay, so let's move on to some of the kudos you're getting. Customers recognize CA as a leader in ITOA, IT analytics, operational analytics. 82% of organizations agree with the following, little thumb-up there. 
"CA has the breadth and depth of monitoring expertise to deliver the cross-correlation of IT operation analytics data from app to infrastructure to network." I buy the vision. I'm going to challenge you on this. What's the most important thing you've got that this survey says? Because that's a huge number. Some might challenge that number. So I'm going to challenge that. Why is that number so important, and how is it reached? >> So I think it's because some of our customers have bought into this belief, right? Because we have in the portfolio application performance, like I mentioned, infrastructure performance with UIM, and our net ops product portfolio. We are the only vendor in the market with that holistic set of products and experience in all three areas. So that really positions us uniquely. If you pick up any vendor out there, they either started on the app side and are just starting to go to the infrastructure side, or they're a pure network player starting to go to infra and trying to get into app. But we are the vendor that has all three, and now we are bringing all of these three areas together through our operational intelligence platform that Sudip mentioned. >> Okay, so go to the next slide here. This one here is kind of chopped down, so move to the next one; you can come back to that, look at that, later. This is the one I want to talk about, because retail is huge. We cover retail as a retail analyst firm, but retail does have a lot of edge components to it. It's heavily data-driven, evolving realtime from wearables to whatever. I mean, it's just going crazy. So it's turbulent from a change standpoint, but it's heavily IT operations driven. Why is this important? It says "Global 500 retail company was spending too much time in issue resolution. They lacked end-to-end visibility across cloud, traditional, and applications. After implementing CA UIM, they improved their mean time to repair by 35-50%." I'll translate that. Basically, it's broken, they've got to repair it.
Things aren't working. Retail can't be down. Why did you guys provide this kind of performance? Give a specific example of how this all plays out. >> So actually this tech firm named the customer, but in a typical scenario in retail, everyone is getting these mobile apps, right? So you need to monitor the performance of the mobile app, the application running on it, we have tools for that, and the infrastructure behind it. So typically these mobile apps are on the cloud, right? IT ops have a traditional infrastructure, but this is Amazon-based or Azure-based. They come to us: we are adopting these mobile applications, but at the same time, we don't want to set up a separate IT ops team for these mobile applications as well. So retail organizations are proactively implementing an analytics-based approach for their unified end-to-end view. So even though the mobile app might be siloed, it's multi-channel in retail, right? So they might order from their application but they might pick up in the store, and the store might be running on a physical Windows machine, versus some cloud-based POS. >> So you're saying they get to the cloud real fast, then realize, "oh, damn, I got to fix this. I need analytics." So either way the customer use case is they can work with you on the front end to design that reimagined infrastructure, or bring you in at the right time. >> And our monitoring tool helps that, gives that end-to-end view, right, from the user's journey, all the way from logging in, to the transaction being updated on the inventory software, being updated in the store, all the back-end SAP systems. So we monitor all these technologies and give them end-to-end views. And we give them proactive (mutters). That's what analytics is, right? If their experience is slow, again, a user shouldn't be telling them on social media, "I can't order this," right? That IT team should be proactively testing, proactively-- >> Agility, speed and agility.
>> Right, and without a unified view, it's not possible. >> All right, I'm at the bottom line here for you, and want your personal perspective. Take your CA hat off and put your personal industry tech hat on. What should IT guys think of when working with CA? Why is CA good for them, why should they look at you, and why should they continue to use you if they're an existing customer? >> So I think CA, like I said before, is experienced in this space, right? And with the investment we are making in analytics and cloud, we have a large customer base, so pretty much every customer, every enterprise, every industry you name, we have a customer there. And we have a huge portfolio already. So we have the basis, from application to network to infrastructure, and we are building the analytics layer that our customers have been asking us for. We're one of the rare vendors that has the most depth of information already available, right? So aggregating that into an operational intelligence platform really puts us in a unique position, giving them the broadest set of data through a single platform. Right, and our experience of 30 years in monitoring, like Peter mentioned as well, and the investment we are bringing to cloud; UIM is an example. We were recently applauded by industry analysts as well: it's one of the best tools for a single pane of glass for hybrid cloud environments. That shows how heavily we are investing in new, modern infrastructures like Amazon and Azure and even Nutanix, right?
We see you guys out a lot at all the events we go to with TheCUBE; we go to all the cloud events, and you guys are going to be at all the cloud events this year. So is that how customers can get ahold of you, in the field? Which events will you be at? Where should they look for CA out in the field? >> So I think we're pretty much everywhere, at all the key events that you mentioned. AWS re:Invent and CA World are coming up as well. Customers should come to us and see how CA is helping people better manage what we call the modern software factory. Every customer in the digital economy is trying to build software to deliver unique experiences, and at CA, we talked about our IT operations: from dev to test to ops, we provide all the solutions. So CA World, AWS re:Invent, you know, come find us there, or online at ca.com as well. >> All right, Umair, thanks for coming here and sharing your thoughts as part of our one-on-one drill downs from the digital event here at SiliconANGLE Media's Cube Studios in Palo Alto, where we discussed cloud and IT analytics for digital business, sponsored by CA Technologies. I'm John Furrier. I've been the host and moderator for today. I want to thank Peter Burris, head of research at wikibon.com, for the opening keynote, and Sudip Datta, vice president of product management for CA, for the second keynote. All the conversations will be online, and thanks for watching, everyone. And check out CA. We'll see you at all the different cloud events with TheCUBE. Thanks for watching.

Published Date : Aug 22 2017


Key Pillars of a Modern Analytics & Monitoring Strategy for Hybrid Cloud


 

>> Good morning, everyone. My name is Sudip Datta. I head up product management for Infrastructure Management and Analytics at CA Technologies. Today I am going to talk about the key pillars of modern analytics and monitoring for hybrid cloud. So before we get started, let's set the context. Let's take stock of where we are today. Today, in terms of digital business, software is driving business. Software is the backbone, the driving force for most business services. Whether you are a financial institution, a hospitality service, a health care service, or even a restaurant serving pizza, you are front-ended by software. And therefore the user experience is of paramount importance. Just to give you some factoids: Eighty-three percent of U.S. consumers say that the software front end a brand presents is more important than the product itself. And companies are reciprocating by putting a lot of emphasis on user experience, as you see in the second factoid. The third factoid is even more interesting: 53% of the users of a mobile app actually abandon the app if the app doesn't load within a specified time. So we all understand now the importance of user experience in today's business. So what's happening to the infrastructure underneath that's hosting these applications? The infrastructure itself is evolving, right? How? First of all, as we all know, there is a huge movement, a huge shift towards cloud. Customers are adopting cloud for reasons of economy, agility and efficiency. And whether you are running on cloud or on prem, the architecture itself is getting more and more dynamic. On the server side we hear about serverless computing. More and more enterprises are adopting containers, be it Docker or other containers. And on the networking side we see an adoption of software-defined networking. The logical overlay on top of the physical underlay is abstracting the network.
While we see a huge shift, a movement towards cloud, it is also true that customers are also retaining some of their assets on prem, and that's why we talk about hybrid cloud. Hybrid cloud is a reality, and it's going to be a reality for the foreseeable future. Take for example a bank that has its systems of engagement on public cloud, and systems of records on prem deeply nested within their DNC. So the transaction, the end-to-end transaction has to traverse multiple clouds. Similarly we talk to customers who run their production tier one application on prem, while tier two and tier three desktop applications run on public cloud. So that's the reality. Multi-cloud dynamic environment is a reality of today. While that's a reality, they pose a serious challenge for IT operations. What are the challenges? Because of multiple clouds, because of assets spanning multiple data centers, multiple clouds, there are blind spots getting created. IT ops is often blindsided on things that are happening on the other side of the firewall. And as a result what's happening is they're late to react, and often they react to problems much later than their customers find it, and that's an embarrassment. The other thing that's happening is because of the dynamic nature of the cloud, things are ephemeral, things are dynamic, things come and go, assets come and go, IT ops is often in the business of keeping pace with these changes. They are reacting to these changes. They are trying to keep pace with these changes, and silo'd tools are not the way to go. They are trying to keep up with these changes, but they are failing in doing so. And as a result we see poor user experience, low productivity, capacity problems and delayed time to market. Now what's the solution? What is the solution to all these problems? So what we are recommending is a four-pronged solution, what we represent as four pillars. The first pillar is about dynamic policy-based configuration and discovery. 
The second one is unification of the monitoring and analytics. The third one is contextual intelligence, and the fourth one is integration and collaboration. Let's go through them one by one. First of all, in terms of dynamic policy-based configuration, why is it important? I was talking to a VP of IT last week, and he commented that the time to deploy the monitoring for an application is longer than the time to deploy the application itself, and that's a shame. That's a real shame, because in today's world an application needs to be monitored straight out of the box. This is compounded by the fact that once you deploy the application, the application today is dynamic; as I said, the cloud assets are dynamic. The topology changes, and monitoring tools need to keep pace with that changing topology. So we need automated discovery. We need API-driven discovery, and we need policy-based monitoring for large-scale standardization. And last but not least, the policies need to be based on dynamic baselines. The age, the era of static thresholds is long over, because static thresholds lead to false alerts, resulting in higher opex for IT, and IT personnel absolutely want to move away from them. Unified monitoring and analytics. This morning I stumbled upon a LinkedIn white paper which said you need 20 tools for your hybrid monitoring, and I was absolutely dumbfounded. Twenty tools? I mean, that's a conversation non-starter. So how do we rationalize the tools, minimize the silos, and bring them under a single pane of glass, or at least minimal panes of glass, for monitoring, so IT admins can have a coherent view of servers, storage, network and applications through a single pane of glass? And why is that important? It's important because it results in less of a blame game. Because of silo'd tools, what happens is admins are often fighting with each other, blaming each other. Server admins think that it's a storage problem.
The storage admin thinks it's a database problem, and they are pointing to each other, right? So the tools, the management tools should be a point of collaboration, not a point of contention. Talking about blame game, one area that often gets ignored is the area of fault management and monitoring. Why is it important? And I will give a specific example. Let's say you have 100 VMs, and all those VMs become unreachable as a result of router being down. The root cause of the problem therefore are not the VMs, but the router. So instead of generating 101 alarms, the management tool needs to be smart enough to generate one single alarm. And that's why fault management and root cause analysis is of paramount importance. It suppresses unnecessary noise and results in lesser blaming. Contextual intelligence. Now when we talk about the cloud administrator, the cloud admin, the cloud admin in the past were living in the cocoon of their hybrid infrastructure. They were managing the hybrid infrastructure, but in today's world to have an end-to-end visibility of the digital chain, they need to integrate with application performance management tools, APM, as well as what lies underneath, which is the network, so that they have an end-to-end visibility of what's happening in the whole digital chain. But that's not all. They also need what we call is the context of the application. I will give you a specific example. For example, if the server runs out of memory when a lot of end users log into the system, or run out of capacity when a particular marketing promotion is running, then the context really is the business that leads to a saturation in IT. So what you need is to capture all the data, whether they come from logs, whether they come from alarms, capacity events as well as business events, into a single analytics platform and perform analytics on top of it. 
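The router scenario above (100 unreachable VMs should produce one alarm, not 101) comes down to collapsing alarms along the dependency topology. Here is a minimal sketch, assuming a simple tree of dependencies; the data model is illustrative only, not CA's actual fault-management implementation:

```python
def suppress_downstream(alarms, depends_on):
    """Keep only alarms whose upstream dependency chain is healthy.

    alarms: set of failing node names
    depends_on: dict mapping each node to its upstream node (None for roots)
    """
    root_causes = set()
    for node in alarms:
        upstream = depends_on.get(node)
        # Walk up the chain: if any ancestor is also alarming,
        # this node is a symptom, not the root cause.
        symptom = False
        while upstream is not None:
            if upstream in alarms:
                symptom = True
                break
            upstream = depends_on.get(upstream)
        if not symptom:
            root_causes.add(node)
    return root_causes

# A router going down takes 100 VMs with it: one actionable alarm, not 101.
topology = {f"vm-{i}": "router-1" for i in range(100)}
topology["router-1"] = None
failing = set(topology)  # the router plus all 100 VMs are alarming
assert suppress_downstream(failing, topology) == {"router-1"}
```

If only a single VM alarms while the router is healthy, the VM itself surfaces as the root cause, which is exactly the noise-suppression behavior described above.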
And then augment it with machine learning and pattern recognition capabilities so that it will not only perform root cause analysis for what happened in the past, but you're also able to anticipate, predict and prevent future problems. The fourth pillar is collaboration and integration. IT ops in today's world doesn't and shouldn't run in a silo. IT ops needs to interact with dev ops. Within dev ops, developers need to interact with QA. Storage admins need to collaborate with server admins, database admins and various other admins. So the tools need to encourage and provide a platform for collaboration. Similarly, IT tools, IT management tools, should not run standalone. They need to integrate with other tools. For example, if you want monitoring straight out of the box, the monitoring needs to integrate with provisioning processes. The monitoring downstream needs to integrate with ticketing systems. So integration with other tools, whether third party or custom developed, whatever it is, is very, very important. Having said that, having laid out what the solution should be, what the prescription should be, how is CA Technologies gearing up for it? In CA we have the industry's most comprehensive, the richest, portfolio of infrastructure management tools, which is capable of managing all forms of infrastructure: traditional, private cloud, public cloud. Just to give you an example, in private cloud we support the traditional VMs as well as hyperconverged infrastructure like Nutanix. We support Docker and other forms of containers. In public cloud we support the monitoring of infrastructure as a service, platform as a service, and software as a service. We support all the popular clouds: AWS, Azure, Office 365 on Azure, as well as Salesforce.com. In terms of network, our net ops tools manage the latest and greatest SDN and SD-WAN: the VMware SDN, the OpenStack SDN, and in terms of SD-WAN, Cisco Viptela.
If you are a hybrid cloud customer, then you are no longer blindsided on things that are happening on the cloud side, because we integrate with tools like Ixia. And once we monitor all these tools, we provide value on top of it. First of all, we monitor not only performance, but also packet, flow, all the net ops attributes. Then on top of that we provide predictive insights and learning. And because of our presence in the application performance management space, we integrate with APM to provide application-to-infrastructure correlation. Finally, our monitoring is integrally linked with our operational intelligence platform. So in CA we have an operational intelligence platform built around CA Jarvis technology, which is based on open source technology, Elasticsearch, Logstash and Kibana, supplemented by Hadoop and Spark. And what we are doing is ingesting data from our monitoring tools into this data lake to provide value-added insights and intelligence. When we talk about big data we talk about the three Vs: the variety, the volume and the velocity of data. But there is a fourth V that we often ignore. That's the veracity of the data, the truthfulness of data. CA being a leader in the monitoring space, we have been in the business of collecting and monitoring data for ages, and what we are doing is ingesting these data into the platform and providing value-added analytics on top of it. If you can read the slide, it's also an open framework: we have APIs for ingesting data from third-party sources as well. For example, if you have your business data, your business sentiment data, and you want to correlate that with IT metrics, how your IT is keeping up with your business cycles, you can do that as well. Now some of the applications that we are building, and this product is in beta as you see, are correlation between the various events, IT events and business events, network events and server events. Contextual log analytics.
The operative word is contextual. There is a plethora of tools in the market that perform log analytics, but log analytics in the context of a problem, when you really need it, is of paramount importance. Predictive capacity analytics. Again, capacity analytics is not only about trending, right? It's about what-if analysis. What will happen to your infrastructure? Can your infrastructure sustain the pressure if your business grows by 2X, for example? That kind of what-if analysis we should be able to do. And finally machine learning; we are working on it. Out-of-the-box machine learning algorithms to make sure that problems are not only corrected after the fact, but that we can predict problems, we can prevent problems in the future. So those who may be listening to this might be wondering, where do we start? If you are already a CA customer, you are familiar with CA tools, but if you're not, what's the starting point? So I would recommend the starting point is CA Unified Infrastructure Manager, which is the market-leading tool for hybrid cloud management. And it's not a hollow claim that we are making, right? It has been testified to, it has been blessed by, customers and analysts alike. And you can see it was voted the cloud monitoring software of the year 2016 by a third party. And here are some of the customer experiences. An MSP was able to achieve 15% productivity improvement as a result of adopting UIM. A healthcare provider's mean time to repair, MTTR, went down by 40% as a result of UIM. And a telecom provider had a faster adoption of cloud as a result of UIM, the reason being UIM gave them for the first time a single pane of glass to manage their on-prem and cloud environments, which had been a detriment to adopting cloud. And once they were able to achieve that, they were able to switch onto cloud much, much faster.
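The "can my infrastructure sustain 2X growth" question above can be approximated with a deliberately simplified what-if model: fit resource usage against a business driver, then scale the driver. Real capacity analytics would account for nonlinearity and seasonality; this sketch, with invented numbers, only shows the idea:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def what_if(xs, ys, growth, capacity):
    """Project utilization if the business driver grows by `growth` times."""
    a, b = fit_line(xs, ys)
    projected = a * (max(xs) * growth) + b
    return projected, projected <= capacity

# Daily transactions (thousands) vs. CPU utilization (%):
tx  = [10, 20, 30, 40, 50]
cpu = [20, 30, 40, 50, 60]   # roughly 1% CPU per 1k transactions, 10% baseline
projected, fits = what_if(tx, cpu, growth=2.0, capacity=90)
assert round(projected) == 110  # 2X the traffic would need ~110% of one node
assert not fits                 # current capacity cannot sustain 2X growth
```

The answer here is actionable before the problem occurs: the trend line says a 2X business event will exhaust capacity, so more headroom is needed now.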
Finally, the infrastructure management capabilities that I talked about is now being delivered as a turnkey solution, as a SAS solution, which we call digital experience insights. And I strongly, strongly encourage you to try UIM via CA digital experience insights, and here is the URL. You can go and sign up for the trial. With that, thank you.

Published Date : Aug 22 2017


Cloud & Hybrid IT Analytics: 1 on 1 with Peter Burris, Wikibon


 

>> Hey, welcome back everyone. We're here live in the Palo Alto Cube studios for our special digital live event sponsored by CA Technologies. I'm here with Peter Burris, Head of Research Wikibon.com, General Manager of Research for SiliconANGLE Media. Peter, you gave the Keynote this morning along with Sudip Datta talking about analytics. Interesting connection. Dave has been around for a while but now it's more instrumental. CA's had analytics, and monitoring for a while, now it's more instrumental. That seems to be the theme we're seeing here with the research that you're representing and your insight around digital business. Some of the leading research on the topic. Your thoughts on how they connect, what should users know about the connection between data and business, CA analytics and data? >> I think two things, John, first off as I kind of mentioned number one is that more devices are going to be more instrumental to the flow of, to the information flow to the data flows are going to create business value, and that's going to increase the need for greater visibility into how each of these things work together individually, but increasingly it's not just about having individual devices or individual things up and running or having visibility into them. You have to understand how they end up interacting with each other and so the whole modern anthropology becomes more important. We need to start finding ways of improving the capability of monitoring while at the same time simplifying it is the only way that we're going to achieve the goal of these increasingly complex infrastructures that nonetheless consistently deliver the business value that the business requires and customers expect. 
>> It's been interesting. Monitoring has been around for a while: you can monitor this, you can monitor that, you can kind of bring it all together in a database. But as we move to the cloud, and you're seeing the internet of things as you pointed out, there's a real connection here, and the point that I wanted to talk about is, you mentioned the internet as a computer, which involves system-software kind of thinking. Let's tease that out. I want to unpack that concept, because if the internet now is the platform that everyone will be basing and reimagining their business around, how do companies need to figure this out? This is on everyone's mind, because they might miss the fact that it costs a hell of a lot of cash just to move stuff from the edge to the cloud, or even just architectural strategies. What's the importance of the internet as a computer? >> Well, the notion of internet-scale computing has been around for quite some time. And the folks who take that kind of systems approach to things, many of them are sitting within 50 miles of where we sit right here. In fact, most of them. So, Google looks at the internet as a computer that it can process. Facebook sees things the same way. So, if you look at some of these big companies that are actually thinking about internet-scale computing, any service, any data, anytime, anywhere, then that thinking has started to permeate, certainly in Silicon Valley. And in my conversations with CIOs, they increasingly want to think the same way. How do I have to think about my business relative to all of the available resources that are out there, so I can have my company think about gaining access to a service wherever it might be? Gaining access to data that would be relevant to my company, wherever it might be. Appropriately moving the data, minimizing the amount of data that I have to move. Moving the events to the data when necessary.
So this is, in many respects, the architectural question in IT today. How do we think about the way we weave together all these possible resources, all these possible combinations, into something that sustainably delivers business value in a coherent, manageable, predictable way? >> It's interesting, you and I have both seen many waves of innovation going back to the minicomputer and mainframe days, and there used to be departments called data processing, and these would be the departments that handled analytics and monitoring. But now we're in a new era, a modern era, where everything can be instrumented, which elevates the notion of a department into a holistic perspective. You brought this up in your talk during the Keynote, where you said data has to permeate throughout the organization, whether it's IoT edge or wherever. So how do companies move from that department mindset, oh, the department handles the data warehouse or analytics, to a much more strategic, intelligent system? >> Well, that's an interesting question, John. I think it's one of the biggest things a business is going to have to think about. On the one hand, our expectation is that we will continue to see a department, but not in a way that's historically been thought about. One of the reasons why is that the entire business is going to share claims against the capabilities of technology. Marketing's going to lay a claim to it. Sales is going to lay claim to it. Manufacturing and finance are going to lay claims to it. And those claims have to be arbitrated. They have to be negotiated. So there will be a department, a group that's responsible for ensuring that the fundamental plant, the fundamental capabilities of the business, are high quality and up and running and sustained. Having said that, the way that that is manifest is going to be much faster, much more local, much more in response to customer needs, which often will break down functional-type barriers.
And so it's going to be this interesting combination: on the one hand, from an efficiency and effectiveness standpoint, we're going to sustain that notion of a group that delivers, while at the same time, everybody in the business is going to be participating more clearly in establishing the outcomes and how technology achieves those outcomes. It's a very dynamic world, and we haven't figured out how it's all going to come together. >> Well, we're seeing some trends. Now you're seeing the marketing departments and these other departments taking on some of that core competence that used to be kind of outsourced to the IT departments, so analytics and data science are moving in, and you're seeing the early signs of that. I think the modern analytics that CA was talking about was interesting, but I want to get your thoughts on the data value piece, because this is another billion dollar question, or gazillion dollar question. Where is the value in the data? And from your research on the impact of digital business, where does the value come from? And how should companies think about extracting that value? >> Well, first off, when we talk about the value of data we perhaps take a little license with the concept. And by that I mean, software, to a computer scientist, is data. It happens to be the absolutely most structured data you can possibly have. It is data that is so tightly structured that it can actually execute. So we bring software in under that rubric of the value of data. That's one way: the data is the basis for software and how we think about the business actually having consequential actions that are differentiated, increasingly in the digital world. One of the most important things, ultimately, about data is that unlike virtually every other asset that I can think about, money, labor, materials, all of those different types of assets are dominated by the economics of scarcity. You and I are sitting here having a conversation.
I'm not running around and walking my dog right now. I can only do one thing with my time. I may have things in my mind, thinking, but I can't create value at the same moment that I'm talking to you. I mean, we can create value here, I guess. Same thing if you have a machine, and the machine is applied to pull a wire of a certain diameter, it's not pulling a wire of a different diameter. So these are all assets or resources that are dominated by scarcity. Data's different, because the characteristic of data, the thing that makes data so unique and so interesting, is that the same data can be applied to a lot of things at the same time. So we're talking about an asset that can actually amplify business value if it's appropriately utilized. And I think this is, on the one hand, one of the reasons why data is often regarded as disposable: oh, I can just copy it, or I can just do this with it, or I can do that with it. It just goes away, it's ephemeral. But on the other hand, it's why leading businesses, a lot of these digital-native companies but increasingly other companies too, are now recognizing that with data as an asset, that kind of thinking, you can apply the same data to a lot of different pursuits at the same time, and quite frankly, that's what our customers want to see. Our customers want to see their requests, their needs, be matched to capabilities, but also be used to build better products in the future, be used to ensure that the quality of the services that they're getting is high. That their needs are being met, their needs are being responded to. So they want to see data being applied to all these different uses. It's an absolutely essential feature in the future of digital business. >> And you've got to monitor in order to understand it. And for the folks watching, Peter had a great description in his Keynote, go check that video out, around the elements of the digital business and how it's all working together. I'll let you go look at that.
My final question for you is, you mentioned in your Keynote the Wikibon true private cloud report. One of the things that's interesting in that graph, again, on the Keynote he did present the slide, it's also on Wikibon.com if you're a member of the research subscription. It shows that actually the on-premises assets are super valuable, and that there's going to be a decline in labor, non-differentiated labor or operational labor, over the next six, seven years, around 1.6 billion dollars, but it shifts. And I think this was your point. Can you just explain, in a little deeper way, the importance of that statistic? Because what it shows is, yes, automation's coming, whether it's analytics or machine learning and whatnot, but the value's shifting. Can you talk about that? >> Yeah, the very nature of the work that's performed within what we today call IT operations is shifting. It always has been. So when I was running around inside an IT organization, I remember some of the most frenetic activity that I saw was tape jockeys. We don't have too many tape jockeys in the world anymore; we still have tape, but we don't have a lot of tape jockeys anymore. So the first thing it suggests is that the very nature of the IT work that's going to be performed is going to change over the next few years. It's going to change largely in response to the fact that as folks recognize the value of the data, they acknowledge that the placement of data relative to the event is going to be crucial to achieving that event within the envelope of time that that event requires. That ultimately the slow motion of DevOps, which is still a maturing, changing, not broadly adopted set of concepts, will start to change the nature of the work that we perform within that shared IT organization we were talking about a second ago. But the second thing it says is that we are going to be called upon to do a lot more work within an IT organization.
A digital business is utilizing technology to perform a multitude of activities, and that's just going to explode over the course of the next dozen years. So we have this combination: the work's going to change, and the amount of work that's going to be performed by this group is going to expand dramatically, which means ultimately the only way out of this is that the tooling is going to improve. So we expect to see significant advances in the productivity of an individual within an IT organization to support and sustain a digital business. And that's why we start to see some of the downtick in the cost of labor within IT. It's more important, more work's going to be performed, but it's pretty clear that the industry is now focused on improving that tooling and simplifying the way that that tooling works together. >> And having intelligence. >> Having intelligence, but also simplifying how it works together so it becomes more coherent. That's where we're going to need to improve to these new levels of productivity. >> Real quick, to end this segment, quickly talk about how CA connects to this, because you know, they have modern analytics, they have modern monitoring strategies, the four pillars that you talked about. How do they connect into the research that you're talking about? >> Well, I think one of the biggest things that a CIO is going to have to understand over the course of the next few years, and we talked about a couple of them, is that this new architecture is not fully baked yet. We don't know what the new computing model is going to look like exactly. You know, not every business is Google. So Google's got a vision of it. Amazon's got a vision of it. But not every business is one of those guys. So there's a lot of work on what that new computing model is. A second thing is this notion of ultimately where is, or how is, an IT organization going to deliver value? And it's clear that you're not going to deliver value by optimizing a single resource.
You're going to deliver value by looking at all of these resources holistically and understanding the interconnections and the interplay of these resources and how they achieve the business outcomes. So when I think about CA, I think of two things. First off, it is a company that has been at the vanguard of understanding how IT operations has worked, is working, and will likely continue to work as it evolves. And that's an important thing for a technology company that's serving IT operations to have. The second thing is, CA's core message, CA's tech message, is now evolving from just best of breed to how these things are going to come together. So the notion of modern monitoring is to improve the visibility into everything as a holistic whole, going back to that notion that it's not just one device, it's how all devices holistically come together, and the monitoring fabric that we put in place has to focus on that and not just the productivity of any one piece. >> It's like an early days Tesla, it only gets better as they have that headroom to grow. Peter Burris, head of research at Wikibon.com, here for one-on-one conversations, part of the cloud and modern analytics for digital business. Be back with more one-on-one conversations after this short break.

Published Date: Aug 22 2017
