

Maheswaran Surendra, IBM GTS & Dave Link, ScienceLogic | ScienceLogic Symposium 2019


 

>> From Washington D.C., it's theCUBE, covering ScienceLogic Symposium 2019. Brought to you by ScienceLogic.

>> Hi, I'm Stu Miniman, and this is theCUBE's coverage of ScienceLogic Symposium 2019 here at The Ritz-Carlton in Washington D.C. About 460 people are here, the event's grown about 50%, and we've been digging in with a lot of the practitioners and technical people as well as some of the partners. For this session I'm happy to welcome to the program, for the first time, Maheswaran Surendra, who is the vice president and CTO for automation in IBM's Global Technology Services. And joining us also is Dave Link, who is the co-founder and CEO of ScienceLogic. Gentlemen, thank you so much for joining us.

>> Thank you for having us.

>> Thanks for having us.

>> Alright, so Surendra, let's start with you. Anybody that knows IBM knows services are at the core of your business, a primary driver, and a large share of IBM's employees are there. You've got automation in your title, so flesh out for us a little bit your part of the organization and your role there.

>> Alright, so as you pointed out, a big part of IBM is services; it's a large component. There are two major parts of that, though we come together as one in terms of IBM Services: one is much more focused on infrastructure services and the other on business services. The automation I'm dealing with is primarily in the infrastructure services area, which means everything from the resources in a client's own data center to, now much more of course, a hybrid environment, hybrid multi-cloud, with different clouds out there including our own, and providing the automation around that.
And when we say automation, we mean the things we have to do to keep our clients' environments healthy from an availability and performance standpoint; making sure we respond effectively and correctly to the changes they need in their environments, which obviously evolve over time; and, certainly another very important part, making sure they're secure and compliant. If you think of a Maslow's hierarchy of the things IT operations has to do, that in a nutshell sums it up. That's what we do for our clients.

>> Yeah, so Dave, luckily we've got a one-on-one with you today to dig lots of nuggets out of the keynote and talk a bit about the company. But you talk about IT operations, and between the infrastructure piece and the applications piece, ScienceLogic sits at an interesting place in this heterogeneous, ever-changing world we live in today.

>> It does, and the world's changing quickly, because the cloud is transforming the way people build applications, and that is causing a lot of applications to be refactored to take advantage of these technologies. Especially the ones focused on global scale; we've seen them, we've used them, the applications we use on our phones. They require a different footprint, and that requires a different set of tools to manage an application that lives in the cloud, and that might also live in a multi-cloud environment, with some data coming from private clouds that populate information on public clouds. What we found is that the tools industry is at a bit of a crossroads, because applications now need to be infrastructure-aware, but the infrastructure could be served from a lot of different places, meaning you've got lots of data sources to sort together and contextualize to understand how they relate to one another in real time. And that's the challenge we've been focused on solving for our customers.
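Dave's point about mashing heterogeneous data sources into a common format so they can be correlated might be sketched roughly like this. The payload shapes and field names below are invented for illustration; they are not ScienceLogic's actual schema:

```python
def normalize(raw: dict, source: str) -> dict:
    """Map a source-specific event payload into one common shape so events
    from different tools can be correlated. Payload shapes and field names
    here are invented for illustration."""
    if source == "cloudwatch":  # hypothetical cloud-metric payload
        return {
            "ts": raw["Timestamp"],
            "resource": raw["Dimensions"]["InstanceId"],
            "metric": raw["MetricName"],
            "value": raw["Value"],
        }
    if source == "snmp_trap":  # hypothetical SNMP trap payload
        return {
            "ts": raw["received_at"],
            "resource": raw["agent_addr"],
            "metric": raw["trap_oid"],
            "value": raw.get("varbinds"),
        }
    raise ValueError(f"unknown source: {source}")
```

Once every source lands in the same `ts`/`resource`/`metric`/`value` shape, downstream correlation logic only has to be written once.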
>> Alright, Surendra, I wonder if we can get a little bit more into automation. When we talk automation, there's also a term IBM has used for a number of years, cognitive, and the analyst who spoke in the keynote this morning put cognitive as this overarching umbrella; underneath that you had AI, and underneath that you had the machine learning and deep learning pieces. Can you help tease that out a little bit for IBM Global Services and your customers? How do they think about the relationship between the ML/AI/cognitive pieces and automation?

>> So I think the way you laid it out, the way it was talked about this morning, absolutely makes sense. Cognitive is a broad definition, and within that, of course, sits AI and the different techniques within AI: machine learning being one, and natural language processing and natural language understanding, which are not as statistically driven, being another. We use all of these techniques to make our automation smarter. Often, when we're trying to automate something, there can be a very prescriptive type of automation: say, a particular event comes in and you take a response to it. But often you have situations, especially what Dave was talking about, where an application is distributed, and not just a classic distributed application, but the infrastructure underneath it may be distributed too. Some of it may be running on the mainframe, some of it actually running in different clouds. All of this comes together; you have events and signals coming from all of it, and you're trying to reason over where a problem may be originating, because now you have slow performance. What's the reason for the slow performance?
Trying to do some degree of root cause determination, problem determination; that's where some of the smarts come in, in terms of how we diagnose a problem, then kick off more diagnostics, and eventually kick off actions to automatically fix it, or give the practitioner the ability to fix it in an effective fashion. So that's one place, and certainly machine learning techniques lend themselves to it. There's another arena, because there's a lot of knowledge and information buried in tickets, knowledge documents, and things like that. Being able to extract from those the things that are most meaningful is where natural language understanding comes in. Now you marry that with the information coming from machines, which is far more contextualized, and you reason over the two together to make decisions. That's where the automation comes from.

>> I wonder if we can up-level some of those terms a little bit. I hear knowledge, I hear information; at the core of everything people are doing today is data. And what was really illuminating to me, listening to what I've seen of ScienceLogic, is that data collection and unlocking the value of data is such an important piece of what they're doing. From the standpoint of IBM and your customers, where does data fit into that whole discussion? How do things like ScienceLogic fit into the overall portfolio of solutions you're helping customers with, whether managing or deploying them as services?

>> So definitely in the IT Ops arena, at the heart of a big part of IT Ops is monitoring and keeping track of systems. All sorts of systems throw off a lot of data, whether it's log data, real-time performance data, events that are happening, or monitoring of the performance of the application, and that's tons and tons of data.
And that's where a platform like ScienceLogic comes in, as a monitoring system with capabilities to do what we also call event management. In the old days we probably would have thought about monitoring, event management, and logs as somewhat different things; those worlds are collapsing together a bit more. This is where ScienceLogic has a platform that lends itself to a marriage of these facets, in that sense. And then that would feed a downstream automation system, informing it what actions to take. Dave, thoughts on that?

>> Dave, if you want to comment on that, I've got some follow-ups too.

>> Yeah, there are many areas of automation. There are layers of automation, and I think Surendra has worked with customers over a storied career to help them through the different layers of the automation cake. You have automation related to provisioning systems, in some cases provisioning based on capacity analytics. There's automation based on analysis of a root cause, and then, once you know it, other layers of automation that augment the root cause with further insights, so that when you send up a case or a ticket, it carries not just the event but the other information somebody would otherwise have to go and gather after they get the event to figure out what's going on. You do that at the time of the event; that's another automation layer. And then the final automation layer: if you know predictively how to solve the problem, and you have 99% confidence you can solve it based on the use-case conditions, just go ahead and solve it. So when you look at the different layers of automation, ScienceLogic is in some cases a data engine, supplying accurate, clean data to make the right decisions. In other cases we'll kick off automations in other tools. And in some cases we'll automate into ecosystem platforms, whether it's a ticketing system, a service desk system, or a notification system that augments our platform.
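That last layer, fully automating only the near-certain fixes while pre-enriching tickets for everything else, can be sketched roughly as follows. The event signatures, playbook names, confidence figures, and threshold are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str      # which system raised the event
    signature: str   # normalized event type, e.g. "filesystem_full"
    context: dict    # diagnostics gathered automatically at event time

# Hypothetical historical success rates per remediation signature.
PLAYBOOK_CONFIDENCE = {
    "filesystem_full": 0.99,
    "service_restart": 0.80,
}

CONFIDENCE_THRESHOLD = 0.99  # fully automate only near-certain fixes

def route(event: Event) -> str:
    """Auto-remediate high-confidence events; otherwise open a ticket
    pre-loaded with the context an operator would have had to gather."""
    confidence = PLAYBOOK_CONFIDENCE.get(event.signature, 0.0)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-remediate:{event.signature}"
    return f"ticket:{event.signature}:context={sorted(event.context)}"
```

The design choice is that the threshold gates only the final "act without a human" step; every lower layer (enrichment, ticketing) still runs automatically either way.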
So all those layers really have to work together in real time to create the service assurance IBM's customers expect. They expect perfection, they expect excellence; the brand IBM presents means it just works. So you've got to have the right tooling in place and the right automation layers to deliver that kind of service quality.

>> Yeah, Dave, one of the things that really impressed me is the balance: on the one hand, we've talked to customers that take many, many tools and replace them with ScienceLogic, but we understand there is no one single pane of glass or one tool to rule them all; hence the theme of the show, you get the superheroes together because it takes a team. You gave a little bit of a history lesson which resonated with me. I remember SNMP was going to solve everything for us, right? But there's a lot of focus on all the integrations that work, so if you've got your APM tools, your ITSM tools, or things you're doing in the cloud, it's the API economy today. Balancing that you want to provide the solutions for your customers, but you're going to work with many of the things they already have; it's been an interesting balance to watch.

>> Yeah, I think that's the one thing we've realized over the years; you can't rip and replace years and years of work that's been done for good reason. I did hear today that one of our new customers is replacing a record 51 tools with our product. But a lot of those might be shadow-IT tools they've built on top of special instrumentation they have for specific use cases or applications, or for a reason a subject matter expert would apply another tool, another automation. The thing we've realized is that you've got to pull data from so many sources today, because machine learning and artificial intelligence are only as good as the data they're making decisions upon.

>> Absolutely.
>> So you've got to pull data from many different sources, understand how they relate to one another, and then make the right recommendations so that you get the smooth service assurance everybody's shooting for. And in a time when systems are ephemeral, when they're coming and going and moving around a lot, that compounds the challenge operations faces: not just all the different technologies that make up the service, and where those technologies are being delivered from, but the data sources that need to be mashed together in a common format to make intelligent decisions. That's really the problem we've been tackling.

>> Alright, Surendra, I wonder if you can bring us inside your customers; you talk to a lot of enterprise customers, and it helps to share their voices in this space. They're probably not calling it AIOps, but what are some of the big challenges they're facing where you're helping them meet those challenges, and where does ScienceLogic fit in?

>> So certainly, yes, they probably don't talk about it that way. They want to make sure their applications are always up and performing the way they expect, and at the same time they need to be responsive to changes, because their business demands mean the applications they have out there continually have to evolve, while at the same time being very available. Take even something traditional like batch jobs, where they have large batch-processing runs; sometimes those things slow down, because now they're running through multiple systems, and you're trying to understand the precedence and the actions to take when a batch job is not running properly, as just one example, right? First, diagnosing why it's not working well: is it because some upstream system is not providing the data it needs? Is it clogged up because it's waiting on instructions from some downstream system?
And then how do you recover from this? Do you stop the thing, just kill it, or do you have to understand which subsequent downstream batch jobs, or other jobs, will be impacted because you killed this one? All of that planning needs to be done in some fashion, and the actions taken such that, if we have to act because something has failed, we take the right kind of action. So that's one type of thing that matters for clients. Certainly performance is another that matters a lot, even for the most modern applications, because an application may sit entirely in the cloud yet use five or 10 different SaaS providers. Understanding which of those interactions may be causing a performance issue is a challenge, because you need to be able to diagnose it and take action. Maybe it's the login or identity-management service you're getting from somewhere else; you need to understand whether they have any issues, and whether that provider exposes the right kind of monitoring information about their system, so that you can reason over it and understand: okay, my service, which depends on this other service, is actually being impacted. All of this involves a lot of data, and it needs to come together; that's where a platform like ScienceLogic comes into play. And then taking actions on top of that is where a platform also starts to matter, because you start to develop different types of what we call content. We distinguish between an automation platform, or framework, and the content you need on top of it. ScienceLogic talks about PowerPacks; that content essentially calls out the workflows, the kinds of actions you need to take when you see the signature of a certain bundle of events coming together. Can you reason over it and say, okay, this is what I need to do?
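The recovery question above, if we kill this batch job, which other jobs are impacted, is at its core a walk over a dependency graph. A minimal sketch, with hypothetical job names:

```python
# Upstream dependencies: each job maps to the jobs it waits on.
# Job names are hypothetical, for illustration only.
DEPENDS_ON = {
    "load": [],
    "transform": ["load"],
    "report": ["transform"],
    "archive": ["transform"],
}

def downstream_impact(job: str, depends_on: dict) -> set:
    """All jobs that transitively depend on `job`: what else breaks if we kill it."""
    impacted = set()
    frontier = [job]
    while frontier:
        current = frontier.pop()
        for candidate, deps in depends_on.items():
            if current in deps and candidate not in impacted:
                impacted.add(candidate)
                frontier.append(candidate)
    return impacted
```

Killing `transform` here would impact both `report` and `archive`; an automation could consult this kind of impact set before deciding whether to kill a stuck job or let it drain.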
And that's where a lot of our focus is: making sure we have the right content to keep our clients' applications healthy. Did that build on what you were talking about a bit?

>> Absolutely. Yes, it's this confluence of know-how and intelligence from working with customers, solving problems for them, and being proactive about the applications that really run their business; and that means you're constantly adjusting. These networks, as I think Surendra has said before, are like living organisms. Based on load and so many other factors, they're not stagnant; they're changing all the time. So you need the right tools to understand not just anomalies, what's different, but also the new technologies that come in, to augment and enhance solutions, and how all of that affects the whole service-delivery cadence.

>> Surendra, I want to give you the final word. One of the things I found heartening, when I look at this big wave of AI that's been coming, is that there's been good focus on what kind of business outcomes customers are getting.

>> Okay.

>> Because back in the big-data wave, I remember we did the surveys, and what was the most common use case? It was "custom." And what you don't want to have is a science project, right?

>> Right.

>> Yes.

>> You actually want to get things done. So, any examples you can give? I know we're still early in a lot of these deployments and rollouts, but what are you seeing out there? What are some of the lighthouse use cases?

>> So, certainly for us, we've been using data for a while now to improve the service assurance for our clients, and I'll be talking about this tomorrow, but one of the things we have found is that, of the events and incidents we deal with, we can now respond automatically, with essentially no human interference, or involvement I should say, to about 55% of them.
And a lot of this is because we have an engine behind it where we get data from multiple different sources: monitoring and event data, configuration data for the systems that matter, and tickets, not just incident tickets but change tickets too, a lot of which is unstructured information. You essentially make decisions over all of this and say: okay, I have seen this kind of event before in these other situations, and I can identify an automation, whether it's a PowerPack, an automation script, an Ansible module, or a playbook, that has worked for this situation before at another client; and these two situations are similar enough that I can now say, with these kinds of events coming in, or this group of events, I can respond in this particular fashion. That's how we keep pushing the envelope in driving more and more automation and automated response, such that certainly the easy, I shouldn't say trivial, kinds of events we see in monitoring are being taken care of, and even the more moderate ones: file systems filling up for unknown reasons, we know how to act on them; services going down in strange ways, we know how to act on them; all the way up to more complex things like the batch-job example I gave you, because some really pernicious things can be happening across a broad network, and we have to be able to diagnose that problem, hopefully with the smarts to fix it. Into this we bring lots of different techniques. When you have incident tickets, change tickets, and all of that, it's unstructured information; we need to reason over it using natural language understanding to pick out, and I'm getting a bit technical here, the verb-noun pairs that matter, the ones that say, okay, this change probably led to these kinds of incidents downstream at another client in a similar environment. Can we see that?
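The "similar enough" judgment Surendra describes, matching an incoming group of events against past incidents that have a known, working automation, might be sketched with a simple set-similarity measure. The keyword sets, automation names, and threshold below are invented for illustration; a production system would use much richer features than bare keywords:

```python
def jaccard(a: set, b: set) -> float:
    """Set similarity: size of intersection over size of union."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical knowledge base: keyword sets extracted from past incidents,
# each paired with the automation that resolved them.
KNOWN_INCIDENTS = [
    ({"batch", "timeout", "upstream"}, "restart_upstream_feed"),
    ({"filesystem", "full", "var"}, "expand_filesystem"),
]

def suggest_automation(event_keywords: set, threshold: float = 0.6):
    """Return the automation from the most similar past incident,
    or None if nothing is similar enough to trust."""
    best_score, best_action = 0.0, None
    for keywords, action in KNOWN_INCIDENTS:
        score = jaccard(event_keywords, keywords)
        if score > best_score:
            best_score, best_action = score, action
    return best_action if best_score >= threshold else None
```

Returning None below the threshold is the conservative default: an uncertain match falls back to a human-handled ticket rather than a possibly wrong automated fix.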
And can we then do something proactive in that case? Those are all the different places where we're bringing AI, call it what you want, AI/ML, into a very practical environment: improving how we respond to the incidents in our clients' environments; understanding, at the next level, when people are making changes to systems, the risk associated with a change, based on all the learning we have, because we are a very large service provider with approximately 1,000 clients, so we learn over a very diverse and heterogeneous set of experiences and reason over it to understand, okay, how risky is this change; and going all the way into the compliance arena, understanding how much risk there is in the environments our clients face because they're not keeping up with patches, or because security configuration parameters are not as optimal as they could be.

>> Alright, well, Surendra, we really appreciate you sharing a glimpse into some of your customers and the opportunities they're facing.

>> Thank you.

>> Thanks so much for joining us. Alright, Dave, we'll be talking to you a little bit more later.

>> Great, thanks for having me.

>> All right.

>> Thank you.

>> And thank you, as always, for watching. I'm Stu Miniman, and thanks for watching theCUBE.

>> Thank you, Dave.

>> Thank you. (upbeat techno music)

Published Date : Apr 25 2019

