Maheswaran Surendra, IBM GTS & Dave Link, ScienceLogic | ScienceLogic Symposium 2019
>> From Washington D.C. it's theCUBE covering ScienceLogic Symposium 2019. Brought to you by ScienceLogic. >> Hi, I'm Stu Miniman and this is theCUBE's coverage of ScienceLogic Symposium 2019 here at The Ritz-Carlton in Washington D.C. About 460 people here, the event's grown about 50%, and we've been digging in with a lot of the practitioners, the technical people as well as some of the partners. And for this session I'm happy to welcome to the program first-time guest Surendra, who is the vice president and CTO for automation in IBM's Global Technology Services. And joining us also is Dave Link, who is the co-founder and CEO of ScienceLogic. Gentlemen, thank you so much for joining us. >> Thank you for having us. >> Thanks for having us. >> Alright, so Surendra, let's start with you. Anybody that knows IBM knows services is at the core of your business, a primary driver; a large number of the employees at IBM are there. You've got automation in your title, so let's flesh out a little bit for us your part of the organization and your role there. >> Alright, so as you pointed out, a big part of IBM is services; it's a large component. And there are two major parts of that, and though we come together as one in terms of IBM Services, one is much more focused on infrastructure services and the other one on business services. So, the automation I'm dealing with primarily is in the infrastructure services area, which means everything from resources you have in a client's data center, going now of course much more into a hybrid environment, hybrid multi-cloud, with different clouds out there including our own, and providing the automation around that. And when we say automation we mean the things that we have to do to keep our clients' environments healthy from an availability and performance standpoint; making sure that when we respond to the changes that they need in the environment, because it obviously evolves over time, we do that effectively and correctly; and certainly another very important part is to make sure that they're secure and compliant. So, if you think of the Maslow's hierarchy of things that IT operations has to do, that in a nutshell sums it up. That's what we do for our clients. >> Yeah, so Dave, luckily we've got a one-on-one with you today to dig out lots of nuggets from the keynote and talk a bit about the company. But you talk about IT operations, and one of the pieces, I've got infrastructure, I've got applications; ScienceLogic sits at an interesting place in this heterogeneous, ever-changing world that we live in today. >> It does, and the world's changing quickly because the cloud's transforming the way people build applications. And that is causing a lot of applications to be refactored to take advantage of some of these technologies, especially the ones focused on global scale; we've seen them, we've used them, applications that we use on our phones. They require a different footprint, and that requires then a different set of tools to manage an application that lives in the cloud; and it also might live in a multi-cloud environment, with some data coming from private clouds that populate information on public clouds. What we found is the tools industry is at a bit of a crossroads, because the applications now need to be infrastructure aware, but the infrastructure could be served from a lot of different places, meaning they've got lots of data sources to sort together and contextualize to understand how they relate to one another in real time.
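That point about many data sources that have to be sorted together and contextualized in real time usually starts with getting every event into one common shape before anything correlates or decides over it. The sketch below is a minimal illustration of that normalization step; the source names and field names are hypothetical stand-ins, not the actual schema of ScienceLogic or of any particular monitoring tool.

```python
from datetime import datetime, timezone

def normalize(source, raw):
    """Map a source-specific event payload onto one shared schema."""
    if source == "snmp_trap":
        return {
            "timestamp": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc),
            "resource": raw["agent_addr"],
            "severity": raw.get("severity", "unknown"),
            "message": raw["trap_name"],
            "origin": source,
        }
    if source == "cloud_api":
        return {
            "timestamp": datetime.fromisoformat(raw["eventTime"]),
            "resource": raw["resourceId"],
            "severity": raw.get("level", "unknown"),
            "message": raw["description"],
            "origin": source,
        }
    raise ValueError(f"unknown event source: {source}")

# Two very different payloads become comparable on the same keys.
a = normalize("snmp_trap", {"epoch": 1556000000, "agent_addr": "10.0.0.5",
                            "trap_name": "linkDown"})
b = normalize("cloud_api", {"eventTime": "2019-04-23T06:13:20+00:00",
                            "resourceId": "vm-42",
                            "description": "CPU credit exhausted"})
print(a["resource"], b["resource"])
```

Once events from every source land on the same keys, correlating them and reasoning over how they relate to one another becomes a tractable problem rather than a per-tool one.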
And that's the challenge that we've been focused on solving for our customers. >> Alright, Surendra, I want to know if we can get a little bit more into automation, and when we talk automation, there's also cognitive, which IBM has used for a number of years, and there was the analyst that spoke in the keynote this morning. He put cognitive as this overarching umbrella, and underneath that you had AI, and underneath that you had the machine learning and deep learning pieces. Can you help tease that out a little bit for IBM Global Services and your customers? How do they think of the relationship between the ML, AI, cognitive pieces and automation? >> So I think the way you laid it out, the way it was talked about this morning, absolutely makes sense. So cognitive is a broad definition, and then within that of course AI, and the different techniques within AI: machine learning being one, natural language processing, natural language understanding, which is not as much statistically driven, being another type of AI. And we use all of these techniques to make our automation smarter. So, often times when we're trying to automate something, there can be a very prescriptive type of automation, say a particular event comes in and then you take a response to it. But then often times you have situations where you have events, especially what Dave was talking about; when an application is distributed, not just a classic distributed application, but now distributed over the infrastructure you may have. Some of it may be running on the mainframe, some of it actually running in different clouds. And all of this comes together; you have events and signals coming from all of this, and trying to reason over where a problem may be originating from, because now you have slow performance. What's the reason for the slow performance? Trying to do some degree of root cause determination, problem determination; that's where some of the smarts comes in, in terms of how we actually want to be able to diagnose a problem and then actually kick off maybe more diagnostics, and eventually kick off actions to automatically fix that, or give the practitioner the ability to fix that in an effective fashion. So that's one place; that's one area that machine learning, I shouldn't say one type, but definitely machine learning techniques lend themselves to. There's another arena, because there's a lot of knowledge and information buried in tickets and knowledge documents and things like that, and to be able to extract from that the things that are most meaningful, that's where the natural language understanding comes in. And now you marry that with the information that's coming from machines, which is far more contextualized, and be able to reason over these two together and be able to make decisions; so that's where the automation comes in. >> I wonder if we can actually, some of those terms, let's up-level a little bit. I hear knowledge, I hear information; the core of everything that people are doing these days, it's data. And what really was illuminated for me, listening to and seeing what I've seen of ScienceLogic, is that data collection and leveraging and unlocking the value of data is such an important piece of what they're doing. From an IBM standpoint and your customers, where does data fit into that whole discussion? How do things like ScienceLogic fit in the overall portfolio of solutions that you're helping customers with, either managing, deploying, or as services?
>> So definitely in the IT Ops arena, a big part of IT Ops, at the heart of it really, is monitoring and keeping track of systems. So, all sorts of systems throw off a lot of data, whether it's log data, real-time performance data, events that are happening, monitoring of the performance of the application, and that's tons and tons of data. And that's where a platform like ScienceLogic comes in, as a monitoring system with capabilities to also do what we call event management. And in the old days, we actually probably would have thought about monitoring, event management and logs as somewhat different things; these worlds are collapsing together a bit more. And so this is where ScienceLogic has a platform that lends itself to a marriage of these facets, in that sense. And then that would feed a downstream automation system, informing it of what actions to take. Dave, thoughts on that? >> Dave, if you want to comment on that, I've got some follow-ups too, but. >> Yeah, there's many areas of automation. There's layers of automation, and I think Surendra's worked with customers over a storied career to help them through the different layers of the automation cake. You have automation related to provisioning systems, and in some cases provisioning based on capacity analytics. There's automation based on analysis of a root cause, and then once you know it, conducting other layers of automation to augment the root cause with other insights, so that when you send up a case or a ticket, it's not just the event but other information that somebody would otherwise have to go and gather after they get the event to figure out what's going on. So you do that at the time of the event; that's another automation layer. And then the final automation layer is, if you know predictively how to solve the problem, just going ahead: if you have 99% confidence that you can solve it based on these use case conditions, just solve it. So when you look at the different layers of automation, ScienceLogic is in some cases a data engine, to get accurate, clean data to make the right decisions. In other cases, we'll kick off automations in other tools. In some cases we'll automate into ecosystem platforms, whether it's a ticketing system, a service desk system, a notification system, that augment our platform. So, all those layers really have to work together in real time to create the service assurance that IBM's customers expect. They expect perfection, they expect excellence; the brand that IBM presents means it just works. And so you've got to have the right tooling in place and the right automation layers to deliver that kind of service quality. >> Yeah, Dave, one of the things that has really impressed me is the balance: on the one hand, we've talked to customers that take many, many tools and replace them with ScienceLogic. But we understand that there is no one single pane of glass or one tool to rule them all; the theme of the show is you get the superheroes together because it takes a team. You gave a little bit of a history lesson which resonated with me. I remember SNMP was going to solve everything for us, right? But there's a lot of focus on all the integrations that work, so if you've got your APM tools, your ITSM tools or things you're doing in the cloud. It's the API economy today, so balancing that you want to provide the solutions for your customers, but you're going to work with many of the things that they have; it's been an interesting balance to watch.
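A rough way to picture the layers Dave Link walks through here, enrichment at the time of the event, a predicted fix, and automatic remediation only when confidence is very high, is a single decision step in an event pipeline. The sketch below is illustrative only; the 0.99 threshold, the event fields, and the helper functions are assumptions made for the example, not an actual ScienceLogic or IBM implementation.

```python
AUTO_FIX_THRESHOLD = 0.99  # "99% confidence ... just solve it"

def handle_event(event, enrich, predict_fix, remediate, open_ticket):
    """Run one event through the enrichment / prediction / action layers."""
    # Enrichment layer: attach the context a person would otherwise
    # gather by hand after seeing the event (related systems, recent
    # changes, log excerpts).
    context = enrich(event)

    # Prediction layer: propose a fix and estimate confidence in it.
    fix, confidence = predict_fix(event, context)

    # Action layer: remediate automatically only at very high confidence;
    # otherwise hand a fully enriched ticket to a human.
    if fix is not None and confidence >= AUTO_FIX_THRESHOLD:
        return remediate(event, fix)
    return open_ticket(event, context, suggested_fix=fix)

# Tiny demo with stand-in functions.
result = handle_event(
    {"type": "filesystem_near_full", "host": "app01"},
    enrich=lambda e: {"recent_changes": []},
    predict_fix=lambda e, c: ("expand_filesystem", 0.995),
    remediate=lambda e, f: f"auto-ran {f} on {e['host']}",
    open_ticket=lambda e, c, suggested_fix: "ticket opened",
)
print(result)  # auto-ran expand_filesystem on app01
```

The design point is that the lower layers are useful even when the last one never fires: an enriched ticket with a suggested fix still saves the operator the legwork described above.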
>> Yeah, I think that's the one thing we've realized over the years; you can't rip and replace years and years of work that's been done for good reason. I did hear today that one of our new customers is replacing a record 51 tools with our product. But a lot of these might be shadow IT tools that they've built on top of special instrumentation they might have for specific use cases or applications, or a reason that a subject matter expert would apply another tool, another automation. So, the thing that we've realized is that you've got to pull data from so many sources today, because machine learning, artificial intelligence, is only as good as the data that it's making those decisions upon. >> Absolutely. >> So you've got to pull data from many different sources, understand how they relate to one another, and then make the right recommendations so that you get that smooth service assurance that everybody's shooting for. And in a time where systems are ephemeral, where they're coming and going and moving around a lot, that's compounding the challenge that operations has, not just in all the different technologies that make up the service and where those technologies are being delivered from, but in the data sources that need to be mashed together in a common format to make intelligent decisions, and that's really the problem we've been tackling. >> Alright, Surendra, I wonder if you can bring us inside; you talk to a lot of enterprise customers, so help share their voices in this space. They're probably not calling it AIOps there, but what are some of the big challenges that they're facing, where you're helping them to meet those challenges, and where does ScienceLogic fit in? >> So certainly, yes, they probably don't talk about it that way. They want to make sure that their applications are always up and performing the way they expect them to be, and at the same time, being responsive to changes, because they need to respond to their business demands, where the applications and what they have out there continually have to evolve, but at the same time be very available. So, all the way from, even if you think about something that is traditional, batch jobs, where they have large processing of batch jobs; sometimes those things slow down, and because now they're running through multiple systems, trying to understand the precedence and the actions you take when a batch job is not running properly, as just one example, right? Then what actions do we want to take, first diagnosing why it's not working well. Is it because some upstream system is not providing it the data it needs? Is it clogged up because it's waiting on instructions from some downstream system? And then how do you recover from this? Do you stop the thing? Just kill it, or do you have to then understand what further downstream, subsequent batch jobs or other jobs will be impacted because you killed this one? And all of that planning needs to be done in some fashion, and the actions taken such that if we have to take an action because something has failed, we take the right kind of action. So that's one type of thing where it matters for clients. Certainly, performance is one that matters a lot, even on the most modern of applications, because it may be an application that's entirely sitting on the cloud, but it's using five or 10 different SaaS providers. Understanding which of those interactions may be causing a performance issue is a challenge, because you need to be able to diagnose that and take some actions against that.
Maybe it's a login or the identity management service that you're getting from somewhere else, and understanding if they have any issues, and whether that provider is providing the right kind of monitoring or information about their system such that you can reason over it and understand; okay, my service, which is dependent on this other service, is actually being impacted. And all these kinds of things, it's a lot of data, and these need to come together. That's where a platform, something like ScienceLogic, would come into play. And then taking actions on top of that is now where a platform also starts to matter, because you start to develop different types of what we call content. So we distinguish the space between an automation platform, or a framework, and the content you need to have on top of that. And ScienceLogic, they talk about PowerPacks, and these things you need to have essentially call out the workflows, the kinds of actions you need to take when you have the signature of a certain bundle of events that have come together. Can you reason over it to say, okay, this is what I need to do? And that's where a lot of our focus is, to make sure that we have the right content to make sure that our clients' applications stay healthy. Did that get to, I think, build on what you were talking about a bit? >> Absolutely. Yes, it's this confluence of know-how and intelligence from working with customers, solving problems for them and being proactive against the applications that really run their business; and that means you're constantly adjusting. These networks, I think Surendra's said it before, they're like living organisms. Based on load, based on so many factors, they're not stagnant, they're changing all the time, so you need the right tools to understand not just anomalies, what's different, but the new technologies that come in to augment solutions and enhance them, and how that affects the whole service delivery cadence. >> Mr. Surendra, I want to give you the final word. One of the things I found heartening when I look at this big wave of AI that's been coming is, there's been good focus on what kind of business outcomes customers are having. >> Okay. >> Because back in the big data wave I remember we did the surveys, and it was like, what was the most common use case? And it was "custom." And what you don't want to have is a science project, right? >> Right. >> Yes. >> You actually want to get things done. So any kind of insight you can give as to, I know we're still early in a lot of these deployments and rollouts, but what are you seeing out there? What are some of the lighthouse use cases? >> So, certainly for us, right? We've been using data for a while now to improve the service assurance for our clients, and I'll be talking about this tomorrow, but one of the things we have done is we found that now, in terms of the events and incidents that we deal with, we can automatically respond, with essentially no human interference, or involvement I should say, to about 55% of them. And a lot of this is because we have an engine behind it where we get data from multiple different sources.
So, monitoring event data, configuration data of the systems that matter, tickets; not just incident tickets but change tickets and all of these things, and a lot of that's unstructured information, and you essentially make decisions over this and say, okay, I know I have seen this kind of event before in these other situations, and I can identify an automation, whether it's a PowerPack, an automation, an Ansible module or playbook, that has worked in this situation before at another client, and these two situations are similar enough such that I can now say, with these kinds of events coming in, or groups of events, I can respond in this particular fashion; that's how we keep pushing the envelope in terms of driving more and more automation and automated response. Such that, I would say, certainly the easy or the trivial kinds, I shouldn't say trivial, but the easy kinds of events and things we see in monitoring are being taken care of; even the somewhat more moderate ones, where file systems are filling up for some unknown reason, we know how to act on them, or some services are going down in some strange ways, we know how to act on them; to getting to even more complex things like the batch job example I gave you, because those can be some really pernicious things happening across a broad network, and we have to be able to diagnose that problem, hopefully with the smarts to be able to fix it. And into this we bring in lots of different techniques. When you have the incident tickets, change tickets and all of that, that's unstructured information; we need to reason over that using natural language understanding to pick out the right, I'm getting a bit technical here, verb-noun pairs that matter, that say, okay, this probably led to these kinds of incidents downstream from typical changes at another client in a similar environment. Can we see that? And can we then do something proactively in this case? So those are all the different places that we're bringing AI, call it whatever you want, AI/ML, into a very practical environment of improving certainly how we respond to the incidents that we have in our clients' environments. Understanding, when I talked about the next level, changes: when people are making changes to systems, understanding the risks associated with that change, based on all the learning that we have, because we are a very large service provider with approximately 1,000 clients. We get learning over a very diverse and heterogeneous experience, and we reason over that to understand, okay, how risky is this change? And all the way into the compliance arena, understanding how much risk there is in the environment that our clients are facing because they're not keeping up with patches, or configurations for security parameters that are not as optimal as they could be. >> Alright, well Surendra, we really appreciate you sharing a glimpse into some of your customers and the opportunities that they're facing. >> Thank you. >> Thanks so much for joining us. Alright and Dave, we'll be talking to you a little bit more later. >> Great, thanks for having me. >> All right. >> Thank you. >> And thank you as always for watching. I'm Stu Miniman and thanks for watching theCUBE. >> Thank you Dave. >> Thank you. (upbeat techno music)
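The technique Surendra describes, mining unstructured tickets and change records for the verb-noun pairs that matter and linking typical changes to the incidents that tend to follow them, can be pictured with a deliberately tiny sketch. Everything in it is a stand-in: the word lists take the place of a real part-of-speech tagger, and the ticket text and co-occurrence counting are invented for illustration, not taken from IBM's actual natural language pipeline.

```python
from collections import Counter
from itertools import product

# Toy word lists standing in for part-of-speech tagging of ticket text.
VERBS = {"patched", "upgraded", "restarted", "resized", "failed", "timed"}
NOUNS = {"kernel", "database", "filesystem", "node", "job", "service"}

def verb_noun_pairs(text):
    """Return the (verb, noun) pairs that co-occur in one ticket."""
    words = {w.strip(".,").lower() for w in text.split()}
    return set(product(words & VERBS, words & NOUNS))

changes = ["Upgraded database node to release 11.5",
           "Patched kernel on filesystem cluster"]
incidents = ["Batch job failed after database upgrade",
             "Service timed out following kernel patch"]

# Count which change-side pairs are followed by which incident-side pairs;
# frequent combinations hint that a type of change tends to precede a type
# of incident, which is the kind of signal a change-risk score could use.
link_counts = Counter()
for change, incident in zip(changes, incidents):
    for c_pair, i_pair in product(verb_noun_pairs(change), verb_noun_pairs(incident)):
        link_counts[(c_pair, i_pair)] += 1

print(link_counts.most_common(2))
```

In practice the same idea, tallying which change patterns precede which incident patterns across many clients and environments, is one way the change-risk assessment Surendra mentions could be built up.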
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Surendra | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Dave Link | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
99% | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Washington D.C. | LOCATION | 0.99+ |
51 tools | QUANTITY | 0.99+ |
two situations | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Maheswaran Surendra | PERSON | 0.99+ |
ScienceLogic Symposium 2019 | EVENT | 0.99+ |
ScienceLogic | ORGANIZATION | 0.98+ |
first time | QUANTITY | 0.98+ |
one tool | QUANTITY | 0.98+ |
approximately 1,000 clients | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
one example | QUANTITY | 0.98+ |
tomorrow | DATE | 0.98+ |
theCUBE | ORGANIZATION | 0.97+ |
about 50% | QUANTITY | 0.97+ |
two | QUANTITY | 0.97+ |
one type | QUANTITY | 0.95+ |
first | QUANTITY | 0.95+ |
One | QUANTITY | 0.94+ |
about 55% | QUANTITY | 0.94+ |
this morning | DATE | 0.93+ |
two major parts | QUANTITY | 0.92+ |
About 460 people | QUANTITY | 0.89+ |
SNMP | ORGANIZATION | 0.86+ |
Maslow | ORGANIZATION | 0.85+ |
tons and tons of data | QUANTITY | 0.84+ |
one place | QUANTITY | 0.83+ |
IBM GTS | ORGANIZATION | 0.79+ |
single pane of | QUANTITY | 0.77+ |
ScienceLogic Symposium | EVENT | 0.76+ |
big wave of | EVENT | 0.75+ |
10 different SAS providers | QUANTITY | 0.72+ |
ScienceLogic | TITLE | 0.72+ |
Ritz-Carlton | LOCATION | 0.67+ |
Ansible | ORGANIZATION | 0.65+ |
Matt Kalmenson, VEEAM | IBM Think 2018
>> Narrator: From Las Vegas, it's theCUBE. Covering IBM Think 2018. Brought to you by IBM. >> Welcome back to theCUBE. We are live on Day 1 at the inaugural IBM Think 2018 event in Las Vegas at the Mandalay Bay. I'm Lisa Martin with Dave Vellante. Welcoming back to theCUBE, a multiple-time CUBE alumni, Matt Kalmenson, the vice president of sales, portfolio and service providers at Veeam. Hey, Matt. >> Hello Lisa, nice to see you again. Dave, nice to see you. It's been a while. >> Yeah. And appreciate you having me back. >> Absolutely, we're in the middle of the Veeam sandwich, we just had Rick Vanover on about 20 minutes ago so-- >> That's a tough act to follow, but I'll do my best. >> Not enough screen. >> That's true, that's true. So, IBM, Veeam, what is going on there from a cloud perspective, any news you want to share? >> There is so much going on there that I probably wouldn't know where to start. Now, I'll tell you, we started the relationship with IBM and Veeam, from a cloud perspective, about a year or a year and a half ago, and last year we announced Veeam availability on the IBM cloud. And really, if you think about moving your virtual workloads to the cloud, Veeam in conjunction with IBM, specifically we started out on the VMware Cloud Foundation, which is called VCF, giving organizations the ability to move their virtual workloads from on-premises into the cloud. And really we extended that by saying, "Hey, you're on this journey to the cloud, let Veeam be the tool and the product that helps you along that journey and streamline the operations to move to the cloud." Now, that's where we started, but I have to tell you, over the last couple of months we've had a lot of exciting things happen. Here at the show we're going to announce that we're also available for physical workloads. So, when you think about Veeam historically, people think about our virtual environments, right? But the reality is we've had major success with servers and workstations and the availability of servers and workstations, and we're now making that available on the IBM platform as well. And, we're also working with the business resiliency team within IBM, so you can now purchase Veeam, it's a new offer that the business resiliency team is bringing to market, that allows you to have a full backup and managed service from the IBM GTS team where the business resiliency team resides. So, lots of really exciting things happening. >> So, let's start with the cloud piece, why Veeam and IBM cloud? What are the synergies there? What's so special about Veeam, and why is the fit so good? >> Yeah, that's a really good question, and there's so many options out there. Lisa and I were talking before. We were kind of prepping for the discussion here today and we were talking about the journey in the enterprise, and the journey still has a long way to go. You'll hear different stats, but most of the stats reside around 15%. 15% of enterprises have started along this cloud journey in any kind of meaningful way. So, what does that mean? While we see all kinds of statistics and all kinds of numbers and information about who's leading the battle and who's already won, it's far from over. Now, being in that position, we think we have a really unique value proposition combining Veeam with IBM. Number one, when you purchase Veeam on the IBM cloud, you get access to the entire Veeam portfolio. Okay?
Now, when you take that portfolio and you make it available on the IBM cloud, the IBM cloud is across some 50-odd different data centers, right? And across those different data centers, there's no charge for the bandwidth, so moving data from one data center to another data center is a really unique value proposition. So, on the one hand you take this organization that's had wild success in the data availability marketplace, and you give IBM customers access to the whole portfolio, and they have something in that portfolio that really differentiates them, in that they don't charge for bandwidth; that means your economies of scale are greater, and you've eliminated some of the economic barriers right at the gate when you compare it to other cloud platforms that are out there. It gives you a lot of flexibility to move workloads, and when you talk about backup and you talk about disaster recovery, which we all encompass within the business continuity or data availability story, moving workloads around is paramount. So, you take that combination of not having these extra charges, of having these unique value propositions from both organizations. And I believe it's just a phenomenal opportunity that we continue to build upon. >> Bandwidth charges can be some of the most expensive items on the cloud bill; why is that, Matt? Is it because IBM owns its own infrastructure there, and so it's a sunk cost, and it passes that benefit on to its customers? >> It really is one of the key differentiators. Some of IBM's cloud competitors, who I won't mention, that's how they make their business. That's how they make their living. So, this is a literal sunk cost into the business that offers tremendous economic advantages to an IBM cloud over other clouds. >> And talk about the data movement. I mean, a lot of people would say, I don't want to move my data because I don't want to pay the bandwidth cost, but as well, it's just that moving a lot of data through a little pipe takes a long time. So, what are the use cases where you see people moving data, I mean obviously, offsite data protection, but what else? >> Yeah, so there's so many use cases, right? And when you think about Veeam in particular, you could be talking about having Veeam as a part of a complete infrastructure as a service, right? So, you can come to the IBM cloud and purchase Veeam and have it as a part of the infrastructure service, with your compute platform, your virtual machines, your storage, and obviously your backup and data availability needs will be protected. We can also work with customers that are just looking for backup as a service. So like we said, a lot of organizations have not made the journey to the cloud yet and they're just making this evolutionary journey, right? It's not something that happens overnight. So, they may still have traditional on-prem uses of Veeam, but what do they want to do? They still need to move copies of their backup jobs offsite. They need to move them to another location, and that goes back to what's called the 3-2-1 rule. The 3-2-1 rule is having three copies of your data on two different media, with one of them being offsite. So, we give you the ability to just use your on-prem deployment as part of a hybrid cloud solution, moving copies of your backup jobs offsite too. In addition, we could talk about replication needs. Now, we have something called Veeam Cloud Connect Backup, which, as I've just talked about, follows the 3-2-1 rule.
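The 3-2-1 rule Matt describes, at least three copies of the data, on at least two different media, with at least one copy offsite, is simple enough to check mechanically against a backup plan. The sketch below is purely illustrative; the media names are made up and this is not Veeam's API.

```python
def satisfies_3_2_1(copies):
    """copies: list of dicts like {"media": "disk", "offsite": False}."""
    enough_copies = len(copies) >= 3
    enough_media = len({c["media"] for c in copies}) >= 2
    one_offsite = any(c["offsite"] for c in copies)
    return enough_copies and enough_media and one_offsite

plan = [
    {"media": "disk",           "offsite": False},  # primary data on-prem
    {"media": "backup_repo",    "offsite": False},  # local backup copy
    {"media": "object_storage", "offsite": True},   # copy job sent to a cloud repository
]
print(satisfies_3_2_1(plan))  # True
```

The hybrid-cloud pattern described above is one common way to satisfy the offsite part of the rule: keep the local copy for fast restores and ship a copy job to a cloud repository for the disaster case.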
But you can also replicate data from onsite to one of these cloud provider locations that we've discussed earlier. So there's lots of different use cases. >> With respect to IBM, what is the go-to-market strategy like for Veeam to go to market with IBM? Also, some of the things that you're announcing this week, what is that, how is that changing the game for Veeam going to market with your own sales organization? >> So, any time you're talking about service providers and cloud providers, it's really disruptive to what I would call the legacy organizations in the marketplace. It disrupts manufacturers, it disrupts resellers, it disrupts traditional sales teams. It gets complicated when you start talking about various commission plans. So on the one hand you have this mechanism that can bring so many advantages to the marketplace, but it can at the same time cause turmoil under your own roof. At Veeam, I think we've really done a nice job of cracking the code. So, while I represent the service provider business and the cloud provider business, I have peers across the country and across the globe who I would call a lot more traditional, end-user-facing salespeople, all right? If you think multiple years back, what they were trying to do is have their customers consume Veeam as a license. And then after the license, they'll pay maintenance fees in perpetuity, hopefully. What we've decided is, how do we put a plan in place where our sales team can go after their end-users and their prospective end-users and say to them, how you consume is your business. What makes the most sense for you? Do you want to consume on-prem? Fantastic. Do you want to consume on-prem and make a copy of your backup job and move it to the cloud? That's great too. Do you want to push all of the business and use the IBM cloud as infrastructure as a service, where you won't own any of the Veeam technology on premises, but IBM will own it and provide it to you as a service? We have you covered there too. What we did was we came up with a compensation model internally that makes the cloud and service provider business an integral part of the go-to-market plan of our sales organization. So, we have compensation models such that when an end-user sales rep, for lack of a better term, is selling to their end customer, they can offer up consumption models that benefit the customer the best way and still get compensated on an even playing field. So, there's some mathematical equations behind the scenes to make sure that we figured out how to compensate them, and some operational tools we put in place to make sure that they are compensated accordingly. And that really eliminates a lot of the friction between sales organizations. >> So, if an IBM cloud customer wants to buy backup as a service monthly, they can do that? They can pay, you probably make them sign up for some period of time, is that right? So let's say it's an annual commitment, or maybe it's a variety. >> We have various styles. Yes. >> Whatever it is, and I'm sure there are various incentives, the longer you sign up, the cheaper it is per month. But they can consume monthly, pay monthly presumably, okay. And you guys work it out on the back end. >> Yes. >> You and IBM. >> Internally at Veeam, we worked it out so we could pay our sales teams, right? So, the IBM sales teams will continue to get paid based on consumption. >> Transparent to them? >> Transparent to them, that's the key. It's transparent to them.
All they know is that they have an army of Veeam salespeople that have a vested interest in making their joint customers successful regardless of consumption model. >> Okay. And then, as it relates to the business resiliency team, that's somewhat different; well, it could involve cloud, obviously, but it's a different equation, right? So, you've got IBM GTS guys in there maybe doing business impact analysis; do you guys participate in that, or how does that relationship work? >> So, it's a very new relationship, and we're all putting the foundational elements in place so that we can participate in those types of proofs of concept and foundational elements where they make sense. And in those scenarios too, we do have programs and policies in place within Veeam to kind of mitigate, eliminate any friction between the sales organizations. >> Okay, go ahead. >> I was just going to say, sticking with the cloud for a second, it sounds like basically, regardless of who's selling it to the end-user business, it's like a choose-your-own-adventure: whatever is ideal and efficient for their business. Are you seeing any industries in particular that are sort of early adopters of what you guys are doing with IBM? You think of heavily regulated industries, financial services, healthcare; are you seeing any sort of leading industries there, or is it sort of a horizontal challenge that-- >> Yeah, it's a really great question. When you think about the use cases for data availability, especially as it pertains to the cloud, backup and disaster recovery are really one and two as far as cloud use cases. So, it's really universal. I would say I probably couldn't put my finger on one vertical market, because they all have a need. Now, when you get into the highly regulated markets of healthcare and financial services, some of our cloud providers, such as IBM, really have some unique expertise, but everyone really needs the best solution for backup and DR in the cloud. You know, I can talk about some unique case studies like we have with Movius. Movius is an enterprise communications company, who happens to be here. And some of the Veeam staff will be doing a session with the folks from Movius, talking about what Movius chose for their enterprise, which has thousands of customers and really works with some of the largest telephony companies in the world: why they chose the IBM cloud, and why they chose the IBM cloud with Veeam in particular. So, it cuts across all segments, really. >> Can you talk about the channel dynamic here? Basically, you think about the channel when cloud really started to take off; the message to the box movers, the box sellers, was we love you, but... They were moving 90% of the market for hardware and software, but you could see that differentiation wasn't there. So, it was getting commoditized. You had to change, you had to add value somehow, whether you're an SAP specialist, an Oracle specialist or a VMware specialist, maybe an ISV, and then you have the cloud service providers who you service; how has the channel adapted to all this? >> The channel is adapting and it's evolving rapidly. Just like any change in an ecosystem, some aren't going to be here in the years to come if they don't evolve and adapt quickly enough. What I'm really seeing and what my team is seeing is that a lot of our traditional channel partners are teaming up with cloud providers; so IBM has a cloud provider program, and a lot of the resellers we work with resell the IBM cloud and Veeam on the IBM cloud.
You have a lot of other channel partners that are really starting to develop their managed services practice. So, they'll put a wrapper around some of the cloud offerings and cloud services that are out there. That might be a multi-cloud environment which is inclusive of the IBM cloud, or it might be a different scenario. But that's probably the fastest growing segment of the IT management space, really the service providers, because they have to evolve and they have to adapt, and a lot of them are trying to figure out what their next play is. How do they differentiate? Are they an expert in the healthcare space? Are they an expert in the financial services space? But the first step is transitioning from that traditional, upfront CapEx business model where they move a box, to building a recurring-revenue-based business model that offers cloud services and management of cloud services. >> How about the service providers? How do you see them differentiate? John and I had a big sort of debate this morning: AWS, infrastructure as a service, how does IBM differentiate? Software was sort of my push. But how are you seeing the cloud service providers beyond the big three, four or five differentiating from the big whales? >> Every day, they're trying to figure out how to differentiate from the big whales. I mean, that's part of what they get up every morning and go to sleep every night thinking about. So, sometimes they partner with the big whales, right? This isn't an island of technology, so to speak. This is truly an ecosystem. Some of the best service providers within the ecosystem that I'm responsible for offer phenomenal services to their customers. There are some workloads that they manage themselves. There are some workloads where they'll be the first to say you're better off being managed or run in a hyperscale-type environment like the IBM cloud or Azure or AWS, and they may provide some kind of management service. So what a lot of them do, again, is start to build these wraparound services so that they can evolve. Because there is no one right answer, and even within an organization there may not be one right answer, because different workloads require different businesses, different clouds, different managed services. They need to be handled differently. If you have workloads that are very elastic, very spiky, so to speak, all right? Maybe it's an online application around the holiday season that's going to be hit hard and hit often, but for a very short period of time; perhaps that type of application you put up in the public cloud, in a hyperscaler, for again lack of a better term. There may be the kind of old, steady applications that the managed service provider might want to manage themselves, right? But they will come in and they'll do the needs assessment. They'll evaluate the situation. They'll make the recommendation, and then they'll build their services around that recommendation. The beautiful thing about working with Veeam is that no matter what the answer is, we have the solution. >> So, VeeamON is coming up May 14th to the 16th, and theCUBE's going to be back there again. What are you excited about with VeeamON 2018, maybe some customers that might be onstage sharing their stories? What can you share with us about what excites you about your big event? >> Sure, sure. When I think about VeeamON, what excites me?
Now, this is a little bit personal because I have responsibility for this team, but for the last nine quarters, one of the fastest growing segments of the Veeam business has been its cloud business. We have grown over 50% year over year, and that's a global number. So, while Veeam itself is having phenomenal growth, the marketplace in which we compete is growing 7% to 8%, again depending on who you read. Veeam in total grew 36% from 2016 to 2017, our cloud business grew at over 50%, and helping that cloud become a part of everyone's story and everyone's business is really exciting to me. So, we'll have multiple service providers, multiple cloud providers up on stage doing case studies and testimonials, talking about how our mutual end customers have benefited from the programs that we put in place to help everyone get better together. >> Yeah, I think the other thing, if I can interject. My takeaway from last year was you guys are going hard. Everybody's going after multicloud, but you have a perspective on digital business and availability to support multiple clouds, and you're building relationships with companies like IBM, and you've got a good vision around that. So, I've got to believe we're going to hear a lot about that as well. >> You sure will. (laughs) >> Well, sounds like a lot of momentum. Matt, thanks so much for stopping by theCUBE, sharing what's new, what excites you and the momentum that you guys are carrying forward. >> Thank you. >> Lisa: Pretty exciting stuff. >> Thank you for having me. I really appreciate it. It's great to be back and I'll look forward to speaking with you at VeeamON. >> Well, see you then. >> All right, see you then. >> We want to thank you for watching theCUBE live on day one of IBM Think 2018. I'm Lisa Martin for Dave Vellante. Check out Wikibon. Check out SiliconANGLE Media for the latest news and analyst insights into all things cloud, AI, machine learning, et cetera. David and I are going to be right back with our next guest after a short break. We'll see you in just a few minutes. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Matt Kalmenson | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
David | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
90% | QUANTITY | 0.99+ |
2017 | DATE | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Movius | ORGANIZATION | 0.99+ |
May 14th | DATE | 0.99+ |
Veeam | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Matt | PERSON | 0.99+ |
7% | QUANTITY | 0.99+ |
36% | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Mandalay Bay | LOCATION | 0.99+ |
15% | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
16th | DATE | 0.99+ |
VeeamON | ORGANIZATION | 0.99+ |
first step | QUANTITY | 0.99+ |
8% | QUANTITY | 0.99+ |
3-2-1 | OTHER | 0.98+ |
Rick Vanover | PERSON | 0.98+ |
first | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Azure | TITLE | 0.98+ |
both organizations | QUANTITY | 0.98+ |
over 50% | QUANTITY | 0.98+ |
two | QUANTITY | 0.98+ |
five | QUANTITY | 0.98+ |
IBM Think 2018 | EVENT | 0.96+ |
around 15% | QUANTITY | 0.96+ |
four | QUANTITY | 0.93+ |
thousands of customers | QUANTITY | 0.93+ |