Image Title

Search Results for IBM 2020:

Mike Gilfix, IBM | AWS re:Invent 2020 Partner Network Day


 

>>from >>around the globe. It's the Cube with digital coverage of AWS reinvent 2020 Special coverage sponsored by A. W s Global Partner Network. >>Hello and welcome to the Cube. Virtual in our coverage of AWS reinvent 2020 and our special coverage of a PM partner experience where the Cube virtual and I'm your host, Justin Warren. And today I'm joined by Mike Gill. Fix. Who is the chief product officer for IBM Cloud PACs. Mike, welcome to the Cube. >>Thank you. Thanks for happening. >>Now. Cloud PACs is a new thing from IBM. I'm not particularly familiar with it, but it's it's related to IBM's partnership with with a W s. So maybe you could just start us off quickly by explaining what is cloud packs and what's your role as chief product officer there? >>Well, Klopp acts sort of our next generation platform. What we've been doing is bringing the power of IBM software really across the board and bringing it to a hybrid cloud environments, so make it really easy for our customers to consume it wherever wherever they want, however, they want to choose to do it with a consistent skill set and making it really easy to kind of get those things up and running and deliver value quickly. And this is part of IBM hybrid approach. So what we've seen is organizations that can leverage the same skill set and, you know, basically take those work quotes, make him run where they need thio. Yields about a 2.5 times are y and cloud packs it at the center of that running on the open shift platform so they get consistent security skills and powerful software to run their business running everywhere. And we've been partnering with AWS because we want to make sure that those customers that have made that choice could get access to those capabilities easy and as fast as possible. >>Right? And and the cloud PACs have built on the red hat open. Now let me get this right. It's the open hybrid cloud platform. So is that open shift? >>It is Open shift. Yes. I >>mean, IBM >>is incredibly committed to being thio. Open software and open ship does provide that common layer, and the reason that's important is you want consistent security. You want to avoid lock in, right? That gives you a very powerful platform A fabric, if you will, that can truly run anywhere with any workload. And we've been working very closely with a W s to make sure that is Ah, Premier. First class experience on AWS. >>Yes. So the the open shift on AWS is is relatively new from IBM. So could you explain what is open shift on AWS? And how does that differ from the open shift that people may be already familiar with? >>Well, the Colonel, if you will, is the same. It's the same sort of central open source software, but in working closely with AWS were now making those things available a simple services that you can quickly provisioned and run, and that makes it really easy for people to get started. But again, sort of carrying forward that same sort of skill set. So that's kind of a key way in which we see that you can gain that sort of consistency, you know, no matter where you're running that workflow and we've been investing in that integration, working closely with them Amazon, >>right? And we all know red hats, commitment, thio, open source software and the open ecosystems. Red hat is rightly famous for it, and I I am old enough to remember when it was a brand new thing, particularly in enterprise. Thio allow open source toe to come in and have anything to do with workloads. And now it's It's ALS, the rage, and people are running quite critical workloads on it. So what are you seeing in the adoption within the enterprise off open software? >>The adoption is massive, I think. Well, first, let me describe what's driving it. I mean, people want to tap into innovation and the beauty of open source is your your kind of crowd sourcing, if you will, this massive community of developers that are creating just an incredible amount of innovation and incredible speed, and it's a great way to ensure that you avoid vendor lock in. So enterprises of all types are looking to open solutions as a way both of innovating faster and getting protection. And that commitment is something certainly redheaded tapped into its behind the great success of Red Hat. And it's something that, frankly, is permeating throughout IBM and that we're very committed to driving this sort of open approach. And that means that you know, we need to ensure that people get access to the innovation they need, run it where they want and ensure that they feel that they have choice >>on the choice. I think is a key part of it that isn't really coming through in some of the narrative that there's a lot of discussion about how you should actually, should you go cloud. I remember when it was. Either you should stay on site or should you go, Go to Cloud and we had a long discussion there. Hybrid Cloud really does seem to have come of age where it's it's a a realistic kind of compromise, probably the wrong word, but it's it's a trade off between doing all of one thing or all another. And for most enterprises, that doesn't actually seem to be the choice that that's actually viable for them. So hybrid seems like it's actually just the practical approach. Would that be accurate? >>Well, our studies have shown that if you look statistically at the set of work, oh, that's moved to clouds, you know, something like 20% of workloads have only moved to cloud, meaning the other 80% is experiencing barriers to move >>and some >>of those barriers is figuring out what to do with all this data that's sitting on Prem or, you know, these these applications that have years and years of intelligence baked into them that cannot easily be ported. And so organizations looking to hybrid approaches because they give them more choice. It helps them deal with fragmentation, meaning as I move more workload, I have consistent skill set. It helps me extend my existing investments and bring it into the cloud world. And all those things again are done with consistent security. That's really important, right? Organizations need to make sure they're protecting their assets. Their data throughout, you know, leveraging a consistent platform. So that's really the benefit of the hybrid approach. It essentially is going to enable these organizations unlocked more workload and gain the acceleration and the transformative, effective clouds. And that's why I think they're really That's why it's becoming a necessity, right, because they just can't get that 80% to move. Yah, >>Yeah, I've long said that the cloud is a state of mind rather than a particular location. It's It's more about an operational model of how you do things, so hearing that we've only got 20% of workloads have moved to this new way of doing things does rather suggest that there's a lot more work to be done. What for? Those organizations that are just looking to do this now they've they've done a bit of it and they're looking for those next new workloads. Where do you see customers struggling the most? And where do you think that IBM can help them there? >>Well, um, boy, where they struggling the most? First, I think skills. I mean, they have to figure out a new set of technologies to go and transition from the old World to the new. And at the heart of that is lots of really critical debate. Like, how do they modernize the way that they do software delivery for many enterprises, right. Embrace new ways of doing software delivery. How do they deal with the data issues that arise from where the data sets their obligations for data protection? Um, what happened to the data spans multiple different places, but you have to provide high quality performance and security thes air, all parts of issues that you know, spanned different environments. And so they have to figure out how to manage those kinds of things and make it work in one place. I think the benefit of partnering, you know, with Amazon is clearly there's a huge, you know, customer base. That's interesting. Amazon. I think the benefit of the idea and partnership is you know, we can help to go and unlock some of those new workloads and find ways to get that cloud benefit and help to move them to the cloud faster again with that consistency of experience. And that's why I think it's a good match partnership. We're giving more customers choice. We're helping them to unlock innovation substantially, faster, >>right? And so, for people who might want to just get started without how would they approach this? Do you think people might have some experience with AWS? It's It's almost difficult not to these days, but for those who aren't familiar with the red hat on a W s with open shift on AWS, how would they get started with you? Thio to explore what's possible? >>Well, one of the things that we're offering to our clients is a service that we refer to his I. D. Um garage Z you know, an engagement model, if you will, within IBM, where we work with our clients and we really help them to do co creation. So help to understand their business problem. Or, you know, the target state of where they want their I t to get to. And in working with them in co creation, you know, we help them to affect that transition. Let's say that it's about, you know, delivering business applications faster. Let's say it's about modernizing the applications they have or offering new services new business models again, all in the spirit of co creation. And we found that to be really popular. Um, it's a great way to get started. We we leverage design thinking approach. They can think about the customer experience and their outcome. If they're creating new business, processes, new applications and then really help them toe uplift their skills and, you know, get ready. Thio adopt cloud technology and everything that they dio. >>It sounds like this is, ah, lot of established workloads that people already have in their organizations. It's already there. It's generating real money. It's It's not those experimental workloads that we saw early on which was a well, let's try. This cloud is a fabulous way where we can run some experiments, and if it doesn't work, we just turn it off again. These sound like a lot more workloads, air kind of more important to the business. Is that be true? >>Yeah, I think that's true now. I wouldn't say they're just existing work clothes, because I think there's lots of new business innovation that many of our, you know, clients want to go on launch. And so this gives them an opportunity to do that new innovation but not forget the past, meaning they could bring it forward and bring it forward into an integrated experience. I mean, that's what everyone demands of a true digital business, right? They expect that your experience is integrated, that it's responsive that it's targeted and personalized, and the only way to do that is to allow for experimentation that integrates in with the, you know, standard business processes and things that you did before. And so you need to be able to connect those things together seamlessly, >>right? So it sounds like it's it's a transition more than creating new thing completely from scratch. It's well Look, we've done a lot of innovation over the past decade or so in cloud. We know what works, but we still have workloads that people clearly no one value. How do we put those things together and do it in such a way that we maintain the flexibility to be able to make new changes as we as we learn new things? >>Yeah, leverage what you've got. Play to your strength. I mean, that's that's how you create speed. If you have to reinvent the wheel every time, it's going to be a slow roll. >>Yeah, that does seem like an area where an organization, probably at this point should be looking to partner with other people who have done the hard yards. They've They've already figured this out. What, as you say, Why can't make all of these obvious areas yourself when you're you're starting from scratch? When there's a wealth of experience out there, and particularly this whole ecosystem that exists around around open software? Uh, in fact, maybe you could tell us a little bit about the ecosystem opportunities that are there because red, that's been part of this for a very long time. AWS has a very broad ecosystem is we're all familiar with being here. It reinvent yet again. How does that ecosystem claim toe? What's possible? >>I well, let me explain why I think IBM brings a different dimension to that trio, right? IBM brings the industry expertise. I mean, we've long worked with all of our clients are partners on solving some of the biggest business problems and being embedded in the thing that they do. So we have deep knowledge of their enterprise challenges where they're trying to take them. Deep knowledge of their business processes were ableto bring that that industry know how mixed with, you know, red hats approach to an open, foundational platform coupled with, you know, the great infrastructure you could get from Amazon. And, you know, that's a great sort of powerful combination that we can bring to each of our clients and, you know, maybe just to bring it back a little bit to that idea of Okay, so what's the rolling cloud packs in that? I mean, compact are the kind of software that we've built to enable enterprises to run their essential business processes right in the essential digital operations that they run everything from security to protecting their data or giving them powerful data tools to implement a I and, you know, to implement ai algorithms in the heart of their business or giving them powerful automation capabilities so they can digitize their operations and also make sure those things were going to run effectively. It's those kinds of capabilities that we're bringing in the form of cloud PACs. Think of that is that that substrate that runs a digital business that now could be brought through right running on AWS infrastructure. Good. It's integration that we've done >>right. So basically taking things that as a pre package module that we can just grab that module, drop it in and and start using it rather than having to build it ourselves from scratch. >>That's right. They make them leverage of those powerful capabilities and get focused on innovating the things that matter. Right? So the huge accelerant to getting business value. >>And it does sound a lot easier than trying to learn how to do the complex sort of deep learning and linear algorithms that they're involved in machine learning. I have looked into it a bit and trying to manage that sort of deep maths, and I think I'd much rather just just grab one off the shelf, plug it in and just use it. >>Yeah, It's also better than writing assembler code, which was some of my first programming experiences as well. So I think the software industry has moved on just a little bit since then. >>I think we have to say I do not miss the days of handwriting. Assemble at all, uh, sometimes for nostalgia reasons. But if we want to get things done, I think I'd much rather work in something a little higher level >>specific drinking. >>So thank you so much for my for my guest there. Mike Gill. Fix chief product officer for IBM Cloud PACs from IBM. This has been the cubes coverage off AWS reinvent 2020 and the a p m. Partner experience. I've been your host, Justin Warren. Make sure you come back and join us for more coverage later on

Published Date : Dec 3 2020

SUMMARY :

It's the Cube with digital coverage Who is the chief product officer for Thanks for happening. So maybe you could just start us off quickly by explaining what is cloud packs and what's your role as can leverage the same skill set and, you know, basically take those work quotes, And and the cloud PACs have built on the red hat open. I and the reason that's important is you want consistent security. And how does that differ from the open shift that you can quickly provisioned and run, and that makes it really easy for people to get started. So what are you seeing in the adoption within the enterprise off And that means that you know, we need to ensure that people get access to the innovation they need, of the narrative that there's a lot of discussion about how you should actually, should you go cloud. So that's really the benefit of the hybrid approach. And where do you think that IBM can help them there? I think the benefit of partnering, you know, with Amazon is clearly there's a huge, And in working with them in co creation, you know, we help them to affect that transition. Is that be true? that integrates in with the, you know, standard business processes and things that you did before. to be able to make new changes as we as we learn new things? I mean, that's that's how you create speed. Yeah, that does seem like an area where an organization, probably at this point should be looking to partner with that industry know how mixed with, you know, red hats approach to an open, that module, drop it in and and start using it rather than having to build it ourselves from scratch. So the huge accelerant to getting business value. that sort of deep maths, and I think I'd much rather just just grab one off the shelf, plug it in and just So I think the software industry has moved on just a little bit since then. I think we have to say I do not miss the days of handwriting. So thank you so much for my for my guest there.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Justin WarrenPERSON

0.99+

IBMORGANIZATION

0.99+

Mike GillPERSON

0.99+

AmazonORGANIZATION

0.99+

Mike GilfixPERSON

0.99+

AWSORGANIZATION

0.99+

20%QUANTITY

0.99+

80%QUANTITY

0.99+

A. W s Global Partner NetworkORGANIZATION

0.99+

MikePERSON

0.99+

FirstQUANTITY

0.99+

eachQUANTITY

0.98+

firstQUANTITY

0.98+

Red hatTITLE

0.97+

CubeCOMMERCIAL_ITEM

0.97+

oneQUANTITY

0.97+

todayDATE

0.96+

Cube virtualCOMMERCIAL_ITEM

0.95+

2.5 timesQUANTITY

0.95+

one placeQUANTITY

0.95+

bothQUANTITY

0.95+

W s.ORGANIZATION

0.93+

2020DATE

0.87+

red hatsTITLE

0.83+

trioQUANTITY

0.79+

Cloud PACsCOMMERCIAL_ITEM

0.79+

CloudCOMMERCIAL_ITEM

0.78+

first programming experiencesQUANTITY

0.77+

First classQUANTITY

0.76+

one thingQUANTITY

0.76+

KloppORGANIZATION

0.72+

2020TITLE

0.72+

InventEVENT

0.69+

2020 Partner Network DayEVENT

0.69+

past decadeDATE

0.67+

VirtualCOMMERCIAL_ITEM

0.63+

Red HatTITLE

0.6+

yearsQUANTITY

0.58+

reinvent 2020EVENT

0.51+

PACsTITLE

0.36+

reinventEVENT

0.3+

Mike Gilfix, IBM | AWS re:Invent 2020 Partner Network Day


 

>> Reporter: From around the globe. It's theCUBE with digital coverage of AWS re:Invent 2020. Special coverage sponsored by AWS global partner network. >> Hello, and welcome to theCUBE virtual and our coverage of AWS re:Invent 2020 and our special coverage of APN partner experience. We are theCUBE virtual and I'm your host, Justin Warren. And today I'm joined by Mike Gilfix who is the Chief Product Officer for IBM Cloud Paks. Mike, welcome to theCUBE. >> Thank you. Thanks for having me. Now, Cloud Paks is a new thing from IBM. I'm not particularly familiar with it, but it's related to IBM's partnership with AWS. So maybe you could just start us off quickly by explaining what is Cloud Paks and what's your role as Chief Product Officer there? >> Well, Cloud Paks is sort of our next generation platform. What we've been doing is bringing the power of IBM software really across the board and bringing it to a hybrid cloud environment. So making it really easy for our customers to consume it wherever they want, however, they want to choose to do it with a consistent skillset and making it really easy to kind of get those things up and running and deliver value quickly. And this is part of IBM's hybrid approach. So what we've seen is organizations that can leverage the same skillset and, you know basically take those workloads make them run where they need to yields about a two and a half times ROI and Caltech sit at the center of that running on the OpenShift platform. So they get consistent security, skills and powerful software to run their business running everywhere. And we've been partnering with AWS because we want to make sure that those customers that have made that choice, can get access to those capabilities easy and as fast as possible. >> Right. And the Cloud Paks and Built On the Red Hat open. Now, let me get this right. It's the open hybrid cloud platform. So is that OpenShift? >> It is OpenShift, yes. I mean IBM is incredibly committed to open software and OpenShift does provide that common layer. And the reason that's important is you want consistent security. You want to avoid lock-in, right? That gives you a very powerful platform, (indistinct) if you will, they can truly run anywhere with any workload. And we've been working very closely with AWS to make sure that is a premiere first-class experience on AWS. >> Yes so the OpenShift on AWS is relatively new from IBM. So could you explain what is OpenShift on AWS and how does that differ from the OpenShift that people may be already familiar with? Well, the kernel, if you will, is the same it's the same sort of central open source software but in working closely with AWS we're now making those things available as simple services that you can quickly provision and run. And that makes it really easy for people to get started, but again sort of carrying forward that same sort of skill sets. So that's kind of a key way in which we see that you can gain that sort of consistency, you know, no matter where you're running that workload. And we've been investing in that integration working closely with them, Amazon. >> Yeah, and we all know Red Hat's commitment to open source software in the open ecosystems. Red hat is rightly famous for it. And I am old enough to remember when it was a brand new thing, particularly in enterprise to allow open source to come in and have anything to do with workloads. And now it's all the rage and people are running quite critical workloads on it. So what are you seeing in the adoption within the enterprise of open software? >> The adoption is massive. I think, well first let me describe what's driving it. I mean, people want to tap into innovation and the beauty of open source is you're kind of crowdsourcing if you will, this massive community of developers that are creating just an incredible amount of innovation at incredible speed. And it's a great way to ensure that you avoid vendor lock-in. So enterprises of all types are looking to open solutions as a way, both of innovating faster and getting protection. And that commitment, is something certainly Red Hat has tapped into. It's behind the great success of Red Hat. And it's something that frankly is permeating throughout IBM in that we're very committed to driving this sort of open approach. And that means that, you know, we need to ensure that people can get access to the innovation they need, run it where they want and ensure that they feel that they have choice. >> And the choice I think is a key part of it that isn't really coming through in some of the narrative. There's a lot of discussion about how you should actually pick, should you go cloud? I remember when it was either you should stay on-site or should you go to cloud? And we had a long discussion there. Hybrid cloud really does seem to have come of age where it's a realistic kind of compromise is probably the wrong word, but it's a trade off between doing all the one thing or all another. And for most enterprises, that doesn't actually seem to be the choice that's actually viable for them. So hybrid seems like it's actually just the practical approach. Would that be accurate? >> Well our studies have shown that if you look statistically at the set of workload that's moved to cloud, you know something like 20% of workloads have only moved to cloud meaning the other 80% is experiencing barriers to move. And some of those barriers is figuring out what to do with all this data that's sitting on-prem or you know, these applications that have years and years of intelligence baked into them that can not easily be ported. And so organizations are looking at the hybrid approaches because they give them more choice. It helps them deal with fragmentation. Meaning as I move more workload, I have consistent skillset. It helps me extend my existing investments and bring it into the cloud world. And all those things again are done with consistent security. That's really important, right? Organizations need to make sure they're protecting their assets, their data throughout, you know leveraging a consistent platform. So that's really the benefit of the hybrid approach. It essentially is going to enable these organizations to unlock more workload and gain the acceleration and the transformative effect of cloud. And that's why it's becoming a necessity, right? Because they just can't get that 80% to move yet. >> Yeah and I've long said that the cloud is a state of mind rather than a particular location. It's more about an operational model of how you do things. So hearing that we've only got 20% of workloads have moved to this new way of doing things does rather suggest that there's a lot more work to be done. What, for those organizations that are just looking to do this now or they've done a bit of it and they're looking for those next new workloads, where do you see customers struggling the most and where do you think that IBM can help them there? >> Well,(indistinct) where are they struggling the most? First I think skills. I mean, they have to figure out a new set of technologies to go and transition from this old world to the new and at the heart of that is lots of really critical debates. Like how do they modernize the way that they do software delivery for many enterprises, right? Embrace new ways of doing software delivery. How do they deal with the data issues that arise from where the data sits, their obligations for data protection, what happens if the data spans multiple different places but you have to provide high quality performance and security. These are all parts of issues that, you know, span different environments. And so they have to figure out how to manage those kinds of things and make it work in one place. I think the benefit of partnering, you know, with Amazon is, clearly there's a huge customer base that's interested in Amazon. I think the benefit of the IBM partnership is, you know, we can help to go and unlock some of those new workloads and find ways to get that cloud benefit and help to move them to the cloud faster again with that consistency of experience. And that's why I think it's a good match partnership where we're giving more customers choice. We're helping them to unlock innovation substantially faster. >> Right. And so for people who might want to just get started without it, how would they approach this? People might have some experience with AWS, it's almost difficult not to these days, but for those who aren't familiar with the Red Hat on AWS with OpenShift on AWS, how would they get started with you to explore what's possible? >> Well, one of the things that we're offering to our clients is a service that we refer to as IBM garage. It's, you know, an engagement model if you will, within IBM, where we work with our clients and we really help them to do co-creation so help to understand their business problem or, you know, the target state of where they want their IT to get to. And in working with them in co-creation, you know, we help them to affect that transition. Let's say that it's about delivering business applications faster. Let's say it's about modernizing the applications they have or offering new services, new business models, again all in the spirit of co-creation. And we found that to be really popular. It's a great way to get started. We've leveraged design thinking and approach. They can think about the customer experience and their outcome. If they're creating new business processes, new applications, and then really help them to uplift their skills and, you know, get ready to adopt cloud technology and everything that they do. >> It sounds like this is a lot of established workloads that people already have in their organizations. It's already there, it's generating real money. It's not those experimental workloads that we saw early on which was a, well let's try this. Cloud is a fabulous way where we can run some experiments. And if it doesn't work, we just turn it off again. These sound like a lot more workloads are kind of more important to the business. Is that be true? >> Yeah. I think that's true. Now I wouldn't say they're just existing workloads because I think there's lots of new business innovation that many of our, you know, clients want to go and launch. And so this gives them an opportunity to do that new innovation, but not forget the past meaning they can bring it forward and bring it forward into an integrated experience. I mean, that's what everyone demands of a true digital business, right? They expect that your experience is integrated, that it's responsive, that it's targeted and personalized. And the only way to do that is to allow for experimentation that integrates in with the, you know, standard business processes and things that you did before. And so you need to be able to connect those things together seamlessly. >> Right. So it sounds like it's a transition more than creating new thing completely from scratch. It's well, look, we've done a lot of innovation over the past decade or so in cloud, we know what works but we still have workloads that people clearly know and value. How do we put those things together and do it in such a way that we maintain the flexibility to be able to make new changes as we learn new things. >> Yeah, leverage what you've got play to your strengths. I mean that's how you create speed. If you have to reinvent the wheel every time it's going to be a slow roll. >> Yeah and that does seem like an area where an organization probably at this point should be looking to partner with other people who have done the hard yards. They've already figured this out. Well, as you say, why can't we make all of these obvious areas yourself when you're starting from scratch, when there's a wealth of experience out there and particularly this whole ecosystem that exists around the open software? In fact maybe you could tell us a little bit about the ecosystem opportunities that are there because Red Hat has been part of this for a very long time. AWS has a very broad ecosystem as we're all familiar with being here at re:Invent yet again. How does that ecosystem play into what's possible? >> Well, let me explain why I think IBM brings a different dimension to that trio, right? IBM brings deep industry expertise. I mean, we've long worked with all of our clients, our partners on solving some of their biggest business problems and being embedded in the thing that they do. So we have deep knowledge of their enterprise challenges, deep knowledge of their business processes. deep knowledge of their business processes. We are able to bring that industry know how mixed with, you know, Red Hat's approach to an open foundational platform, coupled with, you know, the great infrastructure you can get from Amazon and, you know, that's a great sort of powerful combination that we can bring to each of our clients. And, you know, maybe just to bring it back a little bit to that idea, okay so what's the role in Cloud Paks in that? I mean, Cloud Paks are the kind of software that we've built to enable enterprises to run their essential business processes, right? In the central digital operations that they run everything from security to protecting their data or giving them powerful data tools to implement AI and you know, to implement AI algorithms in the heart of their business or giving them powerful automation capabilities so they can digitize their operations. And also we make sure those things are going to run effectively. It's those kinds of capabilities that we're bringing in the form of Cloud Paks think of that as that substrate that runs a digital business that now can be brought through right? Running on AWS infrastructure through this integration that we've done. >> Right. So basically taking things as a pre-packaged module that we can just grab that module drop it in and start using it rather than having to build it ourselves from scratch. >> That's right. And they can leverage those powerful capabilities and get focused on innovating the things that matter, right? So the huge accelerant to getting business value. >> And it does sound a lot easier than trying to learn how to do the complex sort of deep learning and linear algorithms that they're involved in machine learning. I have looked into it a bit and trying to manage that sort of deep masses. I think I'd much rather just grab one off the shelf plug it in and just use it. >> Yeah. It's also better than writing assembler code which was some of my first programming experiences as well. So I think the software industry has moved on just a little bit since then. (chuckles) >> I think we have is that I do not miss the days of handwriting assembly at all. Sometimes for this (indistinct) reasons. But if we want to get things done, I think I'd much rather work in something a little higher level. (Mike laughing) So thank you very much for joining me. My guest Mike Gilfix there from IBM, sorry, from IBM cloud. And this has been, sorry, go ahead. We'll cut that. Can we cut and reedit this outro? >> Cameraman: Yeah, you guys can or you can just go ahead and just start over again. >> I'll just do, I'll just do the outro. Try it again. >> Cameraman: Yeah, sounds good. >> So thank you so much for my guests there Mike Gilfix, Chief Product Officer for IBM Cloud Paks from IBM. This has been theCUBES coverage of AWS re:Invent 2020 and the APN partner experience. I've been your host, Justin Warren, make sure you come back and join us for more coverage later on.

Published Date : Nov 28 2020

SUMMARY :

Reporter: From around the globe. and our coverage of AWS re:Invent 2020 So maybe you could just and bringing it to a And the Cloud Paks and And the reason that's important is Well, the kernel, if you will, is the same And I am old enough to remember And that means that, you know, And the choice I get that 80% to move yet. that are just looking to do And so they have to it's almost difficult not to these days, and everything that they do. important to the business. that many of our, you know, and do it in such a way that I mean that's how you create speed. that exists around the open software? and you know, to implement AI algorithms that we can just grab that module So the huge accelerant to just grab one off the shelf So I think the software is that I do not miss the or you can just go ahead I'll just do, I'll just do the outro. and the APN partner experience.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Justin WarrenPERSON

0.99+

Justin WarrenPERSON

0.99+

AmazonORGANIZATION

0.99+

Mike GilfixPERSON

0.99+

AWSORGANIZATION

0.99+

IBMORGANIZATION

0.99+

20%QUANTITY

0.99+

80%QUANTITY

0.99+

CaltechORGANIZATION

0.99+

MikePERSON

0.99+

Red HatORGANIZATION

0.99+

OpenShiftTITLE

0.99+

eachQUANTITY

0.99+

FirstQUANTITY

0.98+

Cloud PaksTITLE

0.98+

firstQUANTITY

0.98+

CameramanPERSON

0.97+

Red hatTITLE

0.96+

bothQUANTITY

0.96+

todayDATE

0.95+

one placeQUANTITY

0.95+

APNORGANIZATION

0.95+

oneQUANTITY

0.93+

Red HatTITLE

0.91+

theCUBEORGANIZATION

0.91+

Invent 2020 Partner Network DayEVENT

0.88+

pastDATE

0.85+

2020TITLE

0.82+

about a two and a half timesQUANTITY

0.81+

first programming experiencesQUANTITY

0.77+

re:EVENT

0.69+

IBM cloudORGANIZATION

0.67+

re:Invent 2020EVENT

0.65+

yearsQUANTITY

0.64+

AWSEVENT

0.61+

first-classQUANTITY

0.6+

trioQUANTITY

0.56+

Sam Werner, IBM and Brent Compton, Red Hat | KubeCon + CloudNativeCon NA 2020


 

>>from around the globe. It's the Cube with coverage of Yukon and Cloud. Native Con North America. 2020. Virtual Brought to You by Red Hat, The Cloud, Native Computing Foundation and Ecosystem Partners. Hey, welcome back, everybody. Jeffrey here with the Cube coming to you from our Palo Alto studios with our ongoing coverage of Q. Khan Cloud, Native Con 2020 North America. Of course, it's virtual like everything else is in 2020 but we're excited to be back. It's a terrific show, and we're excited our next guest. So let's introduce him. And we've got Sam Warner, the VP of offering manager and business line executive for storage for IBM. Sam. Great to see you. >>Great to be here. >>And also joining us is Brent Compton. He's a senior director of data services for Redhead. Great. See you, Brent. >>Thank you. >>So let's let's jump into it. Cloud Native. Everything's about cloud native. Everything's about containers. Everything is about kind of container ization and flexibility. But then there's this thing in the back and called storage. We actually have toe keep this stuff and record this stuff and have data protection for this stuff in business resiliency love to jump into it, so lets you know where does storage fit within a container world? And how is the growth of containers and the adoption containers really had you rethink the way that you think about storage and how clients you think about stories saying, Let's start with you >>e mean, it's a great question. And first off, I'm really excited about another cube con. Uh, we did Europe now, uh, doing North America so very excited to be, you know, seeing all the you know, all the news and all the people talking about the advancements around kubernetes. And we're very excited about it now. You asked a very good question. Important question. We're seeing an acceleration of digital transformation, and the people that are going through this digital transformation are using containers to now modernize the rest of their infrastructure. The interesting thing about it, though, is those initiatives are being driven out of the application teams. The business lines in an organization, and a lot of them don't understand that there's a lot of complexity to this storage piece here. So the storage teams I talked to are all of a sudden getting these initiatives thrown on them or a kind of halfway their strategy. And they're scratching their heads, trying to figure out now how they can support these applications with persistent storage. Because that's not where containers started. They started with micro services, and now now they're in a quandary. They have to deliver a certain S L. A to their customers, and they're trying to figure out how they do it in this new environment, which in a lot of cases, has been designed outside of their scope. So they're seeing issues with data protection. Some of the kind of core things that they've been dealing with for years are now. They're now having to solve all over again. So that's what we're working on helping them with reinventing how storage is deployed to help them deliver the same level of security, availability and everything they have in the past. Uh, in these new environments, >>right? So, yeah, e say you've been involved in this for a long time. You know, you've worked in hyper converge. You've worked in big data. You know, the evolution of big data continues to change, as ultimately we want to get people the information to make good decisions, but we've gone through a lot of integrations over the years. So how is it different? You know? Now how is it different with containers? What can we finally do you as a as an architect that we couldn't do before? >>Infrastructure is code. That's, I think, one of the fundamental differences of the storage admin of yesteryear versus storage admin of today today, Azaz Sam mentioned As people are developing and deploying applications, those applications need to dynamically provisioned the infrastructure dynamically provisioned what they need from compute dynamically provisioned what they need from storage dynamically provisioned network paths and so that that that element of infrastructure is code. A dynamically provisioned infrastructure is very different from well from yesterday, when applications or teams needed to. Well, when they needed storage, they would you know, they would file a ticket and typically wait. Now they make an a p A. Now they make an A p. I call and storage is dynamically provisioned and provided to their application. >>But what what I think hard to understand for the layman. And maybe it's just me, right? I It's very easy to understand dynamic infrastructure around, um compute right, I'm Pepsi. I'm running it out for the Super Bowl. I need I know how much people are gonna hit by hit my site and it's kind of easy to understand. Dynamic provisioning around networking again for the same example. What's less easy to understand its dynamic provisioning for storage? It's one thing to say, you know, there's a there's a pool of storage resource is that I'm going to dynamically provisioned for this particular after this particular moment. But one of the whole things about the dynamic is not only is it available when you need it, but I could make it big, and conversely, I could make it smaller go away. I get that for servers, and I kind of get that for networking, supporting an application and that example I just talked about. But we can't It doesn't go away a lot of the time for storage, right? That's important data that's maybe feeding another process. There's all kinds of rules and regulations, So when you talk about dynamic infrastructure for storage, it makes a lot of sense for grabbing some to provision for some new application. But it's >>hard to >>understand in terms of true dynamics in terms of either scaling down or scaling up or turning off when I don't particularly need that much capacity or even that application right now, how does it work within storage versus No, just servers or I'm grabbing them and then I'm putting it back in the pool. >>Let me start on this one, and then I'm gonna hand it off to Brent. Um, you know, let's not forget, by the way, that enterprises have very significant investments in infrastructure and they're able to deliver six nines of availability on their storage. And they have d are worked out in all of their security, encryption, everything. It's already in place, and they're sure that they can deliver on their SLS. So they want to start with that. You have to leverage that investment. So first of all, you have to figure out how to automate that into the environment, that existing sand, and that's where things like uh, a P I s the container storage interface CS I drivers come in. IBM provides that across your entire portfolio, allowing you to integrate your storage into a kubernetes environment into an open shipped environment so that it can be automated, but you have to go beyond that and be able to extend that environment, then into other infrastructure, for example, into a public cloud. So with the IBM flash system, family with our spectrum virtualized software were actually able to deploy that storage layer not only on Prem on our award winning a race, but we can also do it in the cloud. So we allow you to take your existing infrastructure investments and integrate that into your communities environment and using things like danceable, fully automated environment. I'll get into data protection before we're done talking. But I do want Brent to talk a bit about how container native storage comes into that next as well. On how you can start building out new environments for, uh, for your applications. >>Yeah, What the two of you are alluding to is effectively kubernetes services layer, which is not storage. It consumes storage from the infrastructure, Assam said. Just because people deploy Kubernetes cluster doesn't mean that they go out and get an entirely new infrastructure for that. If they're deploying their kubernetes cluster on premises, they have servers. If they're deploying their kubernetes cluster on AWS or an azure on G C P. They have infrastructure there. Uh, what the two of you are alluding to is that services layer, which is independent of storage that can dynamically provisioned, provide data protection services. As I mentioned, we have good stuff to talk about their relative to data protection services for kubernetes clusters. But that's it's the abstraction layer or data services layer that sits on top of storage, which is different. So the basics of storage underneath in the infrastructure, you know, remain the same, Jeff. But the how that storage is provisioned and this abstraction layer of services which sits on top of the storage storage might be IBM flash system array storage, maybe E m c sand storage, maybe a W S E B s. That's the storage infrastructure. But this abstraction layer that sits on top this data services layer is what allows for the dynamic interaction of applications with the underlying storage infrastructure. >>And then again, just for people that aren't completely tuned in, Then what's the benefit to the application developer provider distributor with that type of an infrastructure behind And what can they do that they just couldn't do before? >>Well, I mean Look, we're, uh, e I mean, we're trying to solve the same problem over and over again, right? It's always about helping application developers build applications more quickly helps them be more agile. I t is always trying to keep up with the application developer and always struggles to. In fact, that's where the emergency cloud really came from. Just trying to keep up with the developer eso by giving them that automation. It gives them the ability to provision storage in real time, of course, without having open a ticket like friends said. But really, the Holy Grail here is getting to a developed once and deploy anywhere model. That's what they're trying to get to. So having an automated storage layer allows them to do that and ensure that they have access to storage and data, no matter where their application gets it >>right, Right, that pesky little detail. When I have to develop that up, it does have to sit somewhere and and I don't think storage really has gotten enough of of the bright light, really in kind of this app centric, developer centric world, we talk all the time about having compute available and and software defined networking. But you know, having this software defined storage that lives comfortably in this container world is pretty is pretty interesting. In a great development, I want to shift gears a >>little bit. Just one thing. Go >>ahead, >>plus one to Sam's comments. There all the application developer wants, they want an A P I and they want the same a p I to provision the storage regardless of where their app is running. The rest of the details they usually don't care about. Sure. They wanted to perform what not give him an A p I and make it the same regardless of where they're running the app. >>Because not only do they want to perform, they probably just presume performance, right? I mean, that's the other thing is that the best in class quickly becomes presumed baseline in a very short short period of time. So you've got to just you just got to just deliver the goods, right? They're gonna get frustrated and not be productive. But I wanted to shift gears up a little bit and talk about some of the macro trends. Right? We're here towards the end of 2020. Obviously, Cove It had a huge impact on business and a lot of different ways. And it's really evolved from March, this light switch moment. Everybody work from home, too. Now, this kind of extended time, that's probably gonna go on for a while. I'm just curious some of the things that you've seen with your customers not so much at the beginning, because that was that was a special and short period of time. But mawr, as we've extended and and are looking to, um, probably extended this for a while, you know, What is the impact of this increased work from home increase attack surface? You know, some of these macro things that we're seeing that cove it has caused and any other kind of macro trends beyond just this container ization that you guys were seeing impacting your world. Start with you, Sam. >>You know, I don't think it's actually changed what people were going to do or the strategy. What I've seen it do is accelerate things and maybe changed the way they're getting their, uh and so they're actually a lot of enterprises were running into challenges more quickly than they thought they would. And so they're coming to us and asking us to help them. Saw them, for example, backing up their data and these container environments as you move mission critical applications that maybe we're gonna move more slowly. They're realizing that as they've moved them, they can't get the level of data protection they need. And that's why actually we just announced it at the end of October. Updates to our modern data protection portfolio. It now is containerized. It could be deployed very easily in an automated fashion, but on top of that, it integrates down into the A P. I layer down into CSE drivers and allows you to do container where snapshots of your applications so you could do operational recovery. If there's some sort of an event you can recover from that you can do D R. And you can even use it for data migration. So we're helping them accelerate. So the biggest I think requests I'm getting from our customers, and how can you help us accelerate? And how can you help us fix these problems that we went running into as we tried to accelerate our digital transformation? >>Brent, Anyone that you wanna highlight? >>Mm. Okay. Ironically, one of my team was just speaking with one of the cruise lines, um, two days ago. We all know what's happened them. So if we just use them as an example, I'm clearly our customers need to do things differently now. So plus one to Sam's statement about acceleration on I would add another word to that which is agility, you know, frankly, they're having to do things in ways they never envisioned 10 months ago. So there need to cut cycle times to deploy effectively new ways of how they transact business has resulted in accelerated poll for these types of infrastructure is code technologies. >>That's great. The one that jumped in my mind. Sam, is you were talking. We've we've had a lot of conversations. Obvious security always comes up on baking security and is is a theme. But ransomware as a specific type of security threat and the fact that these guys not only wanna lock up your data, but they want to go in and find the backup copies and and you know and really mess you up so it sounds like that's even more important to have the safe. And we're hearing, you know, all these conversations about air gaps and dynamic air gaps and, you know, can we get air gaps and some of these infrastructure set up so that we can, you know, put put those backups? Um, and recovery data sets in a safe place so that if we have a ransomware issue, getting back online is a really, really important thing, and it seems to just be increasing every day. We're seeing things, you know, if you can actually break the law sometimes if you if you pay the ransom because where these people operate, there's all kind of weird stuff that's coming out of. Ransomware is a very specific, you know, kind of type of security threat that even elevates, you know, kind of business continuity and resiliency on a whole nother level for this one particular risk factor. When if you're seeing some of that as well, >>it's a great point. In fact, it's clearly an industry that was resilient to a pandemic because we've seen it increase things. Is organized crime at this point, right? This isn't the old days of hackers, you know, playing around this is organized crime and it is accelerating. And that's one thing. I'm really glad you brought up. It's an area we've been really focused on across our whole portfolio. Of course, IBM tape offers the best most of the actual riel air gapping, physical air gapping We could take a cartridge offline. But beyond that we offer you the ability to dio you know, different types of logical air gaps, whether it's to a cloud we support. In fact, we just announced Now the spectrum protect. We have support for Google Cloud. We already supported AWS Azure IBM Cloud. So we give you the ability to do logical air gapping off to those different cloud environments. We give you the ability to use worm capability so you can put your backups in a vault that can't be changed. So we give you lots of different ways to do it. In our high end enterprise storage, we offer something called Safeguarded copy where we'll actually take data off line that could be recovered almost instantly. Something very unique to our storage that gives you, for the most mission critical applications. The fastest path recovery. One of things we've seen is some of our customers have done a great job creating a copy. But when the event actually happens, they find is gonna take too long to recover the data and they end up having to pay the ransom anyway. So you really have to think through an Indian strategy on we're able to help customers do a kind of health checks of their environment and figure out the right strategy. We have some offerings to help come in and do that for our customers. >>Shift gears a little bit, uh, were unanswerable fest earlier this year and a lot of talk about automation. Obviously, answer was part of the Red Hat family, which is part of the IBM family. But, you know, we're seeing Mawr and Mawr conversations about automation about, you know, moving the mundane and the air prone and all the things that we shouldn't be doing as people and letting people doom or high value stuff. When if you could talk a little bit about the role of automation, that the kind of development of automation and how you're seeing that, you know, impact your deployments, >>right? You want to take that one first? >>Yeah, sure. Um, s o the first is, um when you think about individual kubernetes clusters. There's a level of automation that's required there. I mean, that's the fundamental. I mean, back to the infrastructure is code that's inherently. That's automation. To effectively declare the state of what you want your application, your cluster to be, and that's the essence of kubernetes. You declare what the state is, and then you pass that declaration to kubernetes, and it makes it so. So there's the kubernetes level automation. But then there's, You know what happens for larger enterprises when you have, you know, tens or hundreds of kubernetes clusters. Eso That's an area of Jeff you mentioned answerable. Now that's an area of with, you know, the work, the red hats doing the community for multi cluster management, actually in the community and together with IBM for automating the management of multiple clusters. And last thing I'll touch on here is that's particularly important as you go to the edge. I mean, this is all well and good when you're talking about, you know, safe raised floor data center environments. But what happens when you're tens or hundreds or even thousands of kubernetes clusters are running in an oil field somewhere? Automation becomes not only nice to have, but it's fundamental to the operation. >>Yeah, but let me just add onto that real quick. You know, it's funny, because actually, in this cove it era, you're starting to see that same requirement in the data center in the core data center. In fact, I would say that because there's less bodies now in the data center, more people working remotely. The automation in need for automation is actually actually accelerating as well. So I think what you said is actually true for the core data center now as well, >>right? So I wanna give you guys the last word before before we close the segment. Um, I'm gonna start with you, Brent. Really, From a perspective of big data and you've been involved again in big data for a long time. As you look back, it kind of the data warehouse era. And then we had kind of this whole rage with the Hadoop era, and, you know, we just continue to get more and more sophisticated with big data processes and applications. But at the end of the day, still about getting the right data to the right person at the right time to do something about it. I wonder if if you can, you know, kind of reflect over that journey and where we are now in terms of this mission of getting, you know, the right data to the right person at the right time so they could make the right decision. >>I think I'll close with accessibility. Um, that Z these days, we you know, the data scientists and data engineers that we work with. The key problem that they have is is accessibility and sharing of data. I mean, this has been wonderfully manifest. In fact, we did some work with the province of Ontario. You could look that stop hashtag house my flattening eso the work with them to get a pool of data. Scientists in the community in the province of Ontario, Canada, toe work together toe understand how to track co vid cases s such so that government could make intelligent responses and policy based on based on the fax so that that need highlights the accessibility that's required from today's, you know, yesteryear. It was maybe, uh, smaller groups of individual data scientists working in silos. Now it's people across industry as manifest by that That need accessibility as well as agility. They need to be able to spin up an environment that will allow them to in this case, um, to develop and deploy inference models using shared data sets without going through years of design. So accessibility on back to the back to the the acceleration and agility that Sam talked about. So I'll close with those words >>That's great. And the consistent with the democratization of two is another word that we're here, you know, over and over again in terms of, you know, getting it out of the hands of the data scientists and getting it into the hands of the people who are making frontline business decisions every day. And Sam for you, for your clothes. I love for you Thio reflect on kind of the changing environment in terms of your requirements for the types of workloads that you now are, you know, looking to support. So it's not just taking care of the data center and relatively straightforward stuff. But you've got hybrid. You've got multi cloud, not to mention all the media, the developments in the media between tape and obviously flash, um, spinning, spinning drives. But you know, really, We've seen this huge thing with flash. But now, with cloud and the increased kind of autumn autonomy ization of of units to be able to apply big batches in small batches to particular workloads across all these different requirements. When if you could just share a little bit about how you guys are thinking about, you know, modernizing storage and moving storage forward. What are some of your what are some of your your priorities? What are you looking forward to, uh, to be able to deliver, You know, basically the stuff underneath all these other applications. I mean, applications basically is data whether you I and some in some computer on top. You guys something underneath the whole package? >>Yeah. Yeah. You know, first of all, you know, back toe what Brent was saying, Uh, data could be the most valuable asset of an enterprise. You could give an enterprising, incredible, uh, competitive advantage as an incumbent if you could take advantage of that data using modern analytics and a I. So it could be your greatest asset. And it can also be the biggest inhibitor to digital transformation. If you don't figure out how to build a new type of modern infrastructure to support access to that data and support these new deployment models of your application. So you have to think that through. And that's not just for your big data, which the big data, of course, is extremely important and growing at incredible pace. All this unstructured data, You also have to think about your mission critical applications. We see a lot of people going through their transformation and modernization of S a p with move toe s four Hana. They have to think about how that fits into a multi cloud environment. They need to think about the life cycle of their data is they go into these new modern environments. And, yes, tape is still a very vibrant part of that deployment. So what we're working on an IBM has always been a leader in software defined storage. We have an incredible portfolio of capabilities. We're working on modernizing that software to help you automate your infrastructure. And sure, you can deliver enterprise class sls. There's no nobody's going to alleviate the requirements of having, you know, near perfect availability. You don't because you're moving into a kubernetes environment. Get a break on your downtime. So we're able to give that riel enterprise class support for doing that. One of the things we just announced that the end of October was we've containerized our spectrum scale client, allowing you now toe automate the deployment of your cluster file system through communities. So you'll see more and more of that. We're offering you leading modern native protection for kubernetes will be the first to integrate with OCP and open ship container storage for data protection. And our flashes from family will continue to be on the leading edge of the curve around answerable automation and C s I integration with who are already so we'll continue to focus on that and ensure that you could take advantage of our world class storage products in your new modern environment. And, of course, giving you that portability between on from in any cloud that you choose to run in >>exciting times. No, no shortage of job security for you, gentlemen, that's for sure. All right, Well, Brent, Sam, thanks for taking a few minutes and, uh, is great to catch up. And again. Congratulations on the success. Thank you. Thank you. Thank you. Alrighty, Sammy's Brent. I'm Jeff, You're watching the cubes. Continuing coverage of Q. Khan Cloud, Native Con North America 2020. Thanks for watching. We'll see you next time.

Published Date : Nov 18 2020

SUMMARY :

Jeffrey here with the Cube coming to you from our Palo Alto studios with our ongoing coverage of And also joining us is Brent Compton. to jump into it, so lets you know where does storage fit within a container to be, you know, seeing all the you know, all the news and What can we finally do you as a as an architect Well, when they needed storage, they would you But one of the whole things about the dynamic is not only is it available when you need how does it work within storage versus No, just servers or I'm grabbing them and then I'm putting it back in the pool. So we allow you to take your existing infrastructure investments Yeah, What the two of you are alluding to is effectively kubernetes services layer, But really, the Holy Grail here is getting to a developed once and deploy anywhere But you know, having this software defined storage Just one thing. The rest of the details they usually don't care about. and are looking to, um, probably extended this for a while, you know, What is the impact of this increased So the biggest I think requests I'm getting from our customers, and how can you help us accelerate? on I would add another word to that which is agility, you know, frankly, they're having to do things And we're hearing, you know, all these conversations about air gaps and dynamic air gaps and, you know, But beyond that we offer you the ability to dio you know, different types of logical air gaps, that the kind of development of automation and how you're seeing that, you know, impact your deployments, To effectively declare the state of what you want your application, So I think what you said is actually true for the core data center of getting, you know, the right data to the right person at the right time so they could make the right decision. we you know, the data scientists and data engineers that we work with. the types of workloads that you now are, you know, looking to support. that software to help you automate your infrastructure. We'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
SamPERSON

0.99+

Red HatORGANIZATION

0.99+

IBMORGANIZATION

0.99+

Brent ComptonPERSON

0.99+

Sam WarnerPERSON

0.99+

JeffPERSON

0.99+

BrentPERSON

0.99+

Native Computing FoundationORGANIZATION

0.99+

RedheadORGANIZATION

0.99+

yesterdayDATE

0.99+

Sam WernerPERSON

0.99+

JeffreyPERSON

0.99+

EuropeLOCATION

0.99+

SammyPERSON

0.99+

2020DATE

0.99+

twoQUANTITY

0.99+

Ecosystem PartnersORGANIZATION

0.99+

hundredsQUANTITY

0.99+

The CloudORGANIZATION

0.99+

tensQUANTITY

0.99+

Super BowlEVENT

0.99+

AWSORGANIZATION

0.99+

todayDATE

0.99+

North AmericaLOCATION

0.99+

10 months agoDATE

0.99+

MawrPERSON

0.99+

end of 2020DATE

0.99+

two days agoDATE

0.99+

Q. KhanPERSON

0.99+

PepsiORGANIZATION

0.99+

MarchDATE

0.98+

Palo AltoLOCATION

0.98+

Azaz SamPERSON

0.98+

firstQUANTITY

0.98+

AssamPERSON

0.98+

KubeConEVENT

0.97+

oneQUANTITY

0.97+

CloudNativeConEVENT

0.97+

OntarioLOCATION

0.96+

end of OctoberDATE

0.96+

OneQUANTITY

0.96+

one thingQUANTITY

0.95+

earlier this yearDATE

0.95+

ThioPERSON

0.92+

six ninesQUANTITY

0.91+

CloudORGANIZATION

0.9+

Q. KhanPERSON

0.89+

Ontario, CanadaLOCATION

0.87+

NA 2020EVENT

0.85+

thousands of kubernetesQUANTITY

0.84+

coveORGANIZATION

0.82+

G C P.TITLE

0.8+

kubernetesQUANTITY

0.8+

Eric Herzog, IBM & Sam Werner, IBM | CUBE Conversation, October 2020


 

(upbeat music) >> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world. This is a CUBE conversation. >> Hey, welcome back everybody. Jeff Frick here with the CUBE, coming to you from our Palo Alto studios today for a CUBE conversation. we've got a couple of a CUBE alumni veterans who've been on a lot of times. They've got some exciting announcements to tell us today, so we're excited to jump into it, So let's go. First we're joined by Eric Herzog. He's the CMO and VP worldwide storage channels for IBM Storage, made his time on theCUBE Eric, great to see you. >> Great, thanks very much for having us today. >> Jeff: Absolutely. And joining him, I think all the way from North Carolina, Sam Werner, the VP of, and offering manager business line executive storage for IBM. Sam, great to see you as well. >> Great to be here, thank you. >> Absolutely. So let's jump into it. So Sam you're in North Carolina, I think that's where the Red Hat people are. You guys have Red Hat, a lot of conversations about containers, containers are going nuts. We know containers are going nuts and it was Docker and then Kubernetes. And really a lot of traction. Wonder if you can reflect on, on what you see from your point of view and how that impacts what you guys are working on. >> Yeah, you know, it's interesting. We talk, everybody hears about containers constantly. Obviously it's a hot part of digital transformation. What's interesting about it though is most of those initiatives are being driven out of business lines. I spend a lot of time with the people who do infrastructure management, particularly the storage teams, the teams that have to support all of that data in the data center. And they're struggling to be honest with you. These initiatives are coming at them, from application developers and they're being asked to figure out how to deliver the same level of SLAs the same level of performance, governance, security recovery times, availability. And it's a scramble for them to be quite honest they're trying to figure out how to automate their storage. They're trying to figure out how to leverage the investments they've made as they go through a digital transformation and keep in mind, a lot of these initiatives are accelerating right now because of this global pandemic we're living through. I don't know that the strategy's necessarily changed, but there's been an acceleration. So all of a sudden these storage people kind of trying to get up to speed or being thrown right into the mix. So we're working directly with them. You'll see, in some of our announcements, we're helping them, you know, get on that journey and provide the infrastructure their teams need. >> And a lot of this is driven by multicloud and hybrid cloud, which we're seeing, you know, a really aggressive move to before it was kind of this rush to public cloud. And that everybody figured out, "Well maybe public cloud isn't necessarily right for everything." And it's kind of this horses for courses, if you will, with multicloud and hybrid cloud, another kind of complexity thrown into the storage mix that you guys have to deal with. >> Yeah, and that's another big challenge. Now in the early days of cloud, people were lifting and shifting applications trying to get lower capex. And they were also starting to deploy DevOps, in the public cloud in order to improve agility. And what they found is there were a lot of challenges with that, where they thought lifting and shifting an application will lower their capital costs the TCO actually went up significantly. Where they started building new applications in the cloud. They found they were becoming trapped there and they couldn't get the connectivity they needed back into their core applications. So now we're at this point where they're trying to really, transform the rest of it and they're using containers, to modernize the rest of the infrastructure and complete the digital transformation. They want to get into a hybrid cloud environment. What we found is, enterprises get two and a half X more value out of the IT when they use a hybrid multicloud infrastructure model versus an all public cloud model. So what they're trying to figure out is how to piece those different components together. So you need a software-driven storage infrastructure that gives you the flexibility, to deploy in a common way and automate in a common way, both in a public cloud but on premises and give you that flexibility. And that's what we're working on at IBM and with our colleagues at Red Hat. >> So Eric, you've been in the business a long time and you know, it's amazing as it just continues to evolve, continues to evolve this kind of unsexy thing under the covers called storage, which is so foundational. And now as data has become, you know, maybe a liability 'cause I have to buy a bunch of storage. Now it is the core asset of the company. And in fact a lot of valuations on a lot of companies is based on its value, that's data and what they can do. So clearly you've got a couple of aces in the hole you always do. So tell us what you guys are up to at IBM to take advantage of the opportunity. >> Well, what we're doing is we are launching, a number of solutions for various workloads and applications built with a strong container element. For example, a number of solutions about modern data protection cyber resiliency. In fact, we announced last year almost a year ago actually it's only a year ago last week, Sam and I were on stage, and one of our developers did a demo of us protecting data in a container environment. So now we're extending that beyond what we showed a year ago. We have other solutions that involve what we do with AI big data and analytic applications, that are in a container environment. What if I told you, instead of having to replicate and duplicate and have another set of storage right with the OpenShift Container configuration, that you could connect to an existing external exabyte class data lake. So that not only could your container apps get to it, but the existing apps, whether they'll be bare-metal or virtualized, all of them could get to the same data lake. Wow, that's a concept saving time, saving money. One pool of storage that'll work for all those environments. And now that containers are being deployed in production, that's something we're announcing as well. So we've got a lot of announcements today across the board. Most of which are container and some of which are not, for example, LTO-9, the latest high performance and high capacity tape. We're announcing some solutions around there. But the bulk of what we're announcing today, is really on what IBM is doing to continue to be the leader in container storage support. >> And it's great, 'cause you talked about a couple of very specific applications that we hear about all the time. One obviously on the big data and analytics side, you know, as that continues to do, to kind of chase history of honor of ultimately getting the right information to the right people at the right time so they can make the right decision. And the other piece you talked about was business continuity and data replication, and to bring people back. And one of the hot topics we've talked to a lot of people about now is kind of this shift in a security threat around ransomware. And the fact that these guys are a little bit more sophisticated and will actually go after your backup before they let you know that they're into your primary storage. So these are two, really important market areas that we could see continue activity, as all the people that we talk to every day. You must be seeing the same thing. >> Absolutely we are indeed. You know, containers are the wave. I'm a native California and I'm coming to you from Silicon Valley and you don't fight the wave, you ride it. So at IBM we're doing that. We've been the leader in container storage. We, as you know, way back when we invented the hard drive, which is the foundation of almost this entire storage industry and we were responsible for that. So we're making sure that as container is the coming wave that we are riding that in and doing the right things for our customers, for our channel partners that support those customers, whether they be existing customers, and obviously, with this move to containers, is going to be some people searching for probably a new vendor. And that's something that's going to go right into our wheelhouse because of the things we're doing. And some of our capabilities, for example, with our FlashSystems, with our Spectrum Virtualize, we're actually going to be able to support CSI snapshots not only for IBM Storage, but our Spectrum Virtualize products supports over 500 different arrays, most of which aren't ours. So if you got that old EMC VNX2 or that HPE, 3PAR or aNimble or all kinds of other storage, if you need CSI snapshot support, you can get it from IBM, with our Spectrum Virtualize software that runs on our FlashSystems, which of course cuts capex and opex, in a heterogeneous environment, but gives them that advanced container support that they don't get, because they're on older product from, you know, another vendor. We're making sure that we can pull our storage and even our competitor storage into the world of containers and do it in the right way for the end user. >> That's great. Sam, I want to go back to you and talk about the relationship with the Red Hat. I think it was about a year ago, I don't have my notes in front of me, when IBM purchased Red Hat. Clearly you guys have been working very closely together. What does that mean for you? You've been in the business for a long time. You've been at IBM for a long time, to have a partner you know, kind of embed with you, with Red Hat and bringing some of their capabilities into your portfolio. >> It's been an incredible experience, and I always say my friends at Red Hat because we spend so much time together. We're looking at now, leveraging a community that's really on the front edge of this movement to containers. They bring that, along with their experience around storage and containers, along with the years and years of enterprise class storage delivery that we have in the IBM Storage portfolio. And we're bringing those pieces together. And this is a case of truly one plus one equals three. And you know, an example you'll see in this announcement is the integration of our data protection portfolio with their container native storage. We allow you to in any environment, take a snapshot of that data. You know, this move towards modern data protection is all about a movement to doing data protection in a different way which is about leveraging snapshots, taking instant copies of data that are application aware, allowing you to reuse and mount that data for different purposes, be able to protect yourself from ransomware. Our data protection portfolio has industry leading ransomware protection and detection in it. So we'll actually detect it before it becomes a problem. We're taking that, industry leading data protection software and we are integrating it into Red Hat, Container Native Storage, giving you the ability to solve one of the biggest challenges in this digital transformation which is backing up your data. Now that you're moving towards, stateful containers and persistent storage. So that's one area we're collaborating. We're working on ensuring that our storage arrays, that Eric was talking about, that they integrate tightly with OpenShift and that they also work again with, OpenShift Container Storage, the Cloud Native Storage portfolio from, Red Hat. So we're bringing these pieces together. And on top of that, we're doing some really, interesting things with licensing. We allow you to consume the Red Hat Storage portfolio along with the IBM software-defined Storage portfolio under a single license. And you can deploy the different pieces you need, under one single license. So you get this ultimate investment protection and ability to deploy anywhere. So we're, I think we're adding a lot of value for our customers and helping them on this journey. >> Yeah Eric, I wonder if you could share your perspective on multicloud management. I know that's a big piece of what you guys are behind and it's a big piece of kind of the real world as we've kind of gotten through the hype and now we're into production, and it is a multicloud world and it is, you got to manage this stuff it's all over the place. I wonder if you could speak to kind of how that challenge you know, factors into your design decisions and how you guys are about, you know, kind of the future. >> Well we've done this in a couple of ways in things that are coming out in this launch. First of all, IBM has produced with a container-centric model, what they call the Multicloud Manager. It's the IBM Cloud Pak for multicloud management. That product is designed to manage multiple clouds not just the IBM Cloud, but Amazon, Azure, et cetera. What we've done is taken our Spectrum Protect Plus and we've integrated it into the multicloud manager. So what that means, to save time, to save money and make it easier to use, when the customer is in the multicloud manager, they can actually select Spectrum Protect Plus, launch it and then start to protect data. So that's one thing we've done in this launch. The other thing we've done is integrate the capability of IBM Spectrum Virtualize, running in a FlashSystem to also take the capability of supporting OCP, the OpenShift Container Platform in a Clustered environment. So what we can do there, is on-premise, if there really was an earthquake in Silicon Valley right now, that OpenShift is sitting on a server. The servers just got crushed by the roof when it caved in. So you want to make sure you've got disaster recovery. So what we can do is take that OpenShift Container Platform Cluster, we can support it with our Spectrum Virtualize software running on our FlashSystem, just like we can do heterogeneous storage that's not ours, in this case, we're doing it with Red Hat. And then what we can do is to provide disaster recovery and business continuity to different cloud vendors not just to IBM Cloud, but to several cloud vendors. We can give them the capability of replicating and protecting that Cluster to a cloud configuration. So if there really was an earthquake, they could then go to the cloud, they could recover that Red Hat Cluster, to a different data center and run it on-prem. So we're not only doing the integration with a multicloud manager, which is multicloud-centric allowing ease of use with our Spectrum Protect Plus, but incase of a really tough situation of fire in a data center, earthquake, hurricane, whatever, the Red Hat OpenShift Cluster can be replicated out to a cloud, with our Spectrum Virtualize Software. So in most, in both cases, multicloud examples because in the first one of course the multicloud manager is designed and does support multiple clouds. In the second example, we support multiple clouds where our Spectrum Virtualize for public clouds software so you can take that OpenShift Cluster replicate it and not just deal with one cloud vendor but with several. So showing that multicloud management is important and then leverage that in this launch with a very strong element of container centricity. >> Right >> Yeah, I just want to add, you know, and I'm glad you brought that up Eric, this whole multicloud capability with, the Spectrum Virtualize. And I could see the same for our Spectrum Scale Family, which is our storage infrastructure for AI and big data. We actually, in this announcement have containerized the client making it very simple to deploy in Kubernetes Cluster. But one of the really special things about Spectrum Scale is it's active file management. This allows you to build out a file system not only on-premises for your, Kubernetes Cluster but you can actually extend that to a public cloud and it automatically will extend the file system. If you were to go into a public cloud marketplace which it's available in more than one, you can go in there click deploy, for example, in AWS Marketplace, click deploy it will deploy your Spectrum Scale Cluster. You've now extended your file system from on-prem into the cloud. If you need to access any of that data, you can access it and it will automatically cash you on locally and we'll manage all the file access for you. >> Yeah, it's an interesting kind of paradox between, you know, kind of the complexity of what's going on in the back end, but really trying to deliver simplicity on the front end. Again, this ultimate goal of getting the right data to the right person at the right time. You just had a blog post Eric recently, that you talked about every piece of data isn't equal. And I think it's really highlighted in this conversation we just had about recovery and how you prioritize and how you, you know, think about, your data because you know, the relative value of any particular piece might be highly variable, which should drive the way that you treated in your system. So I wonder if you can speak a little bit, you know, to helping people think about data in the right way. As you know, they both have all their operational data which they've always had, but now they've got all this unstructured data that's coming in like crazy and all data isn't created equal, as you said. And if there is an earthquake or there is a ransomware attack, you need to be smart about what you have available to bring back quickly. And maybe what's not quite so important. >> Well, I think the key thing, let me go to, you know a modern data protection term. These are two very technical terms was, one is the recovery time. How long does it take you to get that data back? And the second one is the recovery point, at what point in time, are you recovering the data from? And the reason those are critical, is when you look at your datasets, whether you replicate, you snap, you do a backup. The key thing you've got to figure out is what is my recovery time? How long is it going to take me? What's my recovery point. Obviously in certain industries you want to recover as rapidly as possible. And you also want to have the absolute most recent data. So then once you know what it takes you to do that, okay from an RPO and an RTO perspective, recovery point objective, recovery time objective. Once you know that, then you need to look at your datasets and look at what does it take to run the company if there really was a fire and your data center was destroyed. So you take a look at those datasets, you see what are the ones that I need to recover first, to keep the company up and rolling. So let's take an example, the sales database or the support database. I would say those are pretty critical to almost any company, whether you'd be a high-tech company, whether you'd be a furniture company, whether you'd be a delivery company. However, there also is probably a database of assets. For example, IBM is a big company. We have buildings all over, well, guess what? We don't lease a chair or a table or a whiteboard. We buy them. Those are physical assets that the company has to pay, you know, do write downs on and all this other stuff, they need to track it. If we close a building, we need to move the desk to another building. Like even if we leasing a building now, the furniture is ours, right? So does an asset database need to be recovered instantaneously? Probably not. So we should focus on another thing. So let's say on a bank. Banks are both online and brick and mortar. I happened to be a Wells Fargo person. So guess what? There's Wells Fargo banks, two of them in the city I'm in, okay? So, the assets of the money, in this case now, I don't think the brick and mortar of the building of Wells Fargo or their desks in there but now you're talking financial assets or their high velocity trading apps. Those things need to be recovered almost instantaneously. And that's what you need to do when you're looking at datasets, is figure out what's critical to the business to keep it up and rolling, what's the next most critical. And you do it in basically the way you would tear anything. What's the most important thing, what's the next most important thing. It doesn't matter how you approach your job, how you used to approach school, what are the classes I have to get an A and what classes can I not get an A and depending on what your major was, all that sort of stuff, you're setting priorities, right? And the dataset, since data is the most critical asset of any company, whether it's a Global Fortune 500 or whether it's Herzog Cigar Store, all of those assets, that data is the most valuable. So you've got to make sure, recover what you need as rapidly as you need it. But you can't recover all of it. You just, there's just no way to do that. So that's why you really ranked the importance of the data to use sameware, with malware and ransomware. If you have a malware or ransomware attack, certain data you need to recover as soon as you can. So if there, for example, as a, in fact there was one Jeff, here in Silicon Valley as well. You've probably read about the University of California San Francisco, ended up having to pay over a million dollars of ransom because some of the data related to COVID research University of California, San Francisco, it was the health care center for the University of California in Northern California. They are working on COVID and guess what? The stuff was held for ransom. They had no choice, but to pay them. And they really did pay, this is around end of June, of this year. So, okay, you don't really want to do that. >> Jeff: Right >> So you need to look at everything from malware and ransomware, the importance of the data. And that's how you figure this stuff out, whether be in a container environment, a traditional environment or virtualized environment. And that's why data protection is so important. And with this launch, not only are we doing the data protection we've been doing for years, but now taking it to the heart of the new wave, which is the wave of containers. >> Yeah, let me add just quickly on that Eric. So think about those different cases you talked about. You're probably going to want for your mission critically. You're going to want snapshots of that data that can be recovered near instantaneously. And then, for some of your data, you might decide you want to store it out in cloud. And with Spectrum Protect, we just announced our ability to now store data out in Google cloud. In addition to, we already supported AWS Azure IBM Cloud, in various on-prem object stores. So we already provided that capability. And then we're in this announcement talking about LTL-9. And you got to also be smart about which data do you need to keep, according to regulation for long periods of time, or is it just important to archive? You're not going to beat the economics nor the safety of storing data out on tape. But like Eric said, if all of your data is out on tape and you have an event, you're not going to be able to restore it quickly enough at least the mission critical things. And so those are the things that need to be in snapshot. And that's one of the main things we're announcing here for Kubernetes environments is the ability to quickly snapshot application aware backups, of your mission critical data in your Kubernetes environments. It can very quickly to be recovered. >> That's good. So I'll give you the last word then we're going to sign off, we are out of time, but I do want to get this in it's 2020, if I didn't ask the COVID question, I would be in big trouble. So, you know, you've all seen the memes and the jokes about really COVID being an accelerant to digital transformation, not necessarily change, but certainly a huge accelerant. I mean, you guys have a, I'm sure a product roadmap that's baked pretty far and advanced, but I wonder if you can speak to, you know, from your perspective, as COVID has accelerated digital transformation you guys are so foundational to executing that, you know, kind of what is it done in terms of what you're seeing with your customers, you know, kind of the demand and how you're seeing this kind of validation as to an accelerant to move to these better types of architectures? Let's start with you Sam. >> Yeah, you know I, and I think i said this, but I mean the strategy really hasn't changed for the enterprises, but of course it is accelerating it. And I see storage teams more quickly getting into trouble, trying to solve some of these challenges. So we're working closely with them. They're looking for more automation. They have less people in the data center on-premises. They're looking to do more automation simplify the management of the environment. We're doing a lot around Ansible to help them with that. We're accelerating our roadmaps around that sort of integration and automation. They're looking for better visibility into their environments. So we've made a lot of investments around our storage insights SaaS platform, that allows them to get complete visibility into their data center and not just in their data center. We also give them visibility to the stores they're deploying in the cloud. So we're making it easier for them to monitor and manage and automate their storage infrastructure. And then of course, if you look at everything we're doing in this announcement, it's about enabling our software and our storage infrastructure to integrate directly into these new Kubernetes, initiatives. That way as this digital transformation accelerates and application developers are demanding more and more Kubernetes capabilities. They're able to deliver the same SLAs and the same level of security and the same level of governance, that their customers expect from them, but in this new world. So that's what we're doing. If you look at our announcement, you'll see that across, across the sets of capabilities that we're delivering here. >> Eric, we'll give you the last word, and then we're going to go to Eric Cigar Shop, as soon as this is over. (laughs) >> So it's clearly all about storage made simple, in a Kubernetes environment, in a container environment, whether it's block storage, file storage, whether it be object storage and IBM's goal is to offer ever increasing sophisticated services for the enterprise at the same time, make it easier and easier to use and to consume. If you go back to the old days, the storage admins manage X amount of gigabytes, maybe terabytes. Now the same admin is managing 10 petabytes of data. So the data explosion is real across all environments, container environments, even old bare-metal. And of course the not quite so new anymore virtualized environments. The admins need to manage that more and more easily and automated point and click. Use AI based automated tiering. For example, we have with our Easy Tier technology, that automatically moves data when it's hot to the fastest tier. And when it's not as hot, it's cool, it pushes down to a slower tier, but it's all automated. You point and you click. Let's take our migration capabilities. We built it into our software. I buy a new array, I need to migrate the data. You point, you click, and we automatic transparent migration in the background on the fly without taking the servers or the storage down. And we always favor the application workload. So if the application workload is heavy at certain times a day, we slow the migration. At night for sake of argument, If it's a company that is not truly 24 by seven, you know, heavily 24 by seven, and at night, it slows down, we accelerate the migration. All about automation. We've done it with Ansible, here in this launch, we've done it with additional integration with other platforms. So our Spectrum Scale for example, can use the OpenShift management framework to configure and to grow our Spectrum Scale or elastic storage system clusters. We've done it, in this case with our Spectrum Protect Plus, as you saw integration into the multicloud manager. So for us, it's storage made simple, incredibly new features all the time, but at the same time we do that, make sure that it's easier and easier to use. And in some cases like with Ansible, not even the real storage people, but God forbid, that DevOps guy messes with a storage and loses that data, wow. So by, if you're using something like Ansible and that Ansible framework, we make sure that essentially the DevOps guy, the test guy, the analytics guy, basically doesn't lose the data and screw up the storage. And that's a big, big issue. So all about storage made simple, in the right way with incredible enterprise features that essentially we make easy and easy to use. We're trying to make everything essentially like your iPhone, that easy to use. That's the goal. And with a lot less storage admins in the world then there has been an incredible storage growth every single year. You'd better make it easy for the same person to manage all that storage. 'Cause it's not shrinking. It is, someone who's sitting at 50 petabytes today, is 150 petabytes the next year and five years from now, they'll be sitting on an exabyte of production data, and they're not going to hire tons of admins. It's going to be the same two or four people that were doing the work. Now they got to manage an exabyte, which is why this storage made simplest is such a strong effort for us with integration, with the Open, with the Kubernetes frameworks or done with OpenShift, heck, even what we used to do in the old days with vCenter Ops from VMware, VASA, VAAI, all those old VMware tools, we made sure tight integration, easy to use, easy to manage, but sophisticated features to go with that. Simplicity is really about how you manage storage. It's not about making your storage dumb. People want smarter and smarter storage. Do you make it smarter, but you make it just easy to use at the same time. >> Right. >> Well, great summary. And I don't think I could do a better job. So I think we'll just leave it right there. So congratulations to both of you and the teams for these announcement after a whole lot of hard work and sweat went in, over the last little while and continued success. And thanks for the, check in, always great to see you. >> Thank you. We love being on theCUBE as always. >> All right, thanks again. All right, he's Eric, he was Sam, I'm I'm Jeff, you're watching theCUBE. We'll see you next time, thanks for watching. (upbeat music)

Published Date : Nov 2 2020

SUMMARY :

leaders all around the world. coming to you from our Great, thanks very Sam, great to see you as well. on what you see from your point of view the teams that have to that you guys have to deal with. and complete the digital transformation. So tell us what you guys are up to at IBM that you could connect to an existing And the other piece you talked and I'm coming to you to have a partner you know, and ability to deploy anywhere. of what you guys are behind and make it easier to use, And I could see the same for and how you prioritize that the company has to pay, So you need to look at and you have an event, to executing that, you know, of security and the same Eric, we'll give you the last word, And of course the not quite so new anymore So congratulations to both of you We love being on theCUBE as always. We'll see you next time,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JeffPERSON

0.99+

Eric HerzogPERSON

0.99+

IBMORGANIZATION

0.99+

Sam WernerPERSON

0.99+

SamPERSON

0.99+

twoQUANTITY

0.99+

EricPERSON

0.99+

Silicon ValleyLOCATION

0.99+

Jeff FrickPERSON

0.99+

Wells FargoORGANIZATION

0.99+

October 2020DATE

0.99+

Wells FargoORGANIZATION

0.99+

Red HatORGANIZATION

0.99+

BostonLOCATION

0.99+

50 petabytesQUANTITY

0.99+

10 petabytesQUANTITY

0.99+

North CarolinaLOCATION

0.99+

Palo AltoLOCATION

0.99+

last yearDATE

0.99+

150 petabytesQUANTITY

0.99+

CaliforniaLOCATION

0.99+

oneQUANTITY

0.99+

University of CaliforniaORGANIZATION

0.99+

2020DATE

0.99+

a year agoDATE

0.99+

both casesQUANTITY

0.99+

24QUANTITY

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

CUBEORGANIZATION

0.99+

AWSORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

next yearDATE

0.99+

threeQUANTITY

0.99+

bothQUANTITY

0.99+

second exampleQUANTITY

0.99+

Eric Cigar ShopORGANIZATION

0.99+

Herzog Cigar StoreORGANIZATION

0.99+

OpenShiftTITLE

0.99+

todayDATE

0.99+

DevOpsTITLE

0.98+

over 500 different arraysQUANTITY

0.98+

end of JuneDATE

0.98+

four peopleQUANTITY

0.98+

vCenter OpsTITLE

0.98+

Skyla Loomis, IBM | AnsibleFest 2020


 

>> (upbeat music) [Narrator] From around the globe, it's theCUBE with digital coverage of AnsibleFest 2020, brought to you by Red Hat. >> Hello welcome back to theCUBE virtual coverage of AnsibleFest 2020 Virtual. We're not face to face this year. I'm John Furrier, your host. We're bringing it together remotely. We're in the Palo Alto Studios with theCUBE and we're going remote for our guests this year. And I hope you can come together online enjoy the content. Of course, go check out the events site on Demand Live. And certainly I have a lot of great content. I've got a great guest Skyla Loomis Vice president, for the Z Application Platform at IBM. Also known as IBM Z talking Mainframe. Skyla, thanks for coming on theCUBE Appreciate it. >> Thank you for having me. So, you know, I've talked many conversations about the Mainframe of being relevant and valuable in context to cloud and cloud native because if it's got a workload you've got containers and all this good stuff, you can still run anything on anything these days. By integrating it in with all this great glue layer, lack of a better word or oversimplifying it, you know, things going on. So it's really kind of cool. Plus Walter Bentley in my previous interview was talking about the success of Ansible, and IBM working together on a really killer implementation. So I want to get into that, but before that let's get into IBM Z. How did you start working with IBM Z? What's your role there? >> Yeah, so I actually just got started with Z about four years ago. I spent most of my career actually on the distributed platform, largely with data and analytics, the analytics area databases and both On-premise and Public Cloud. But I always considered myself a friend to Z. So in many of the areas that I'd worked on, we'd, I had offerings where we'd enabled it to work with COS or Linux on Z. And then I had this opportunity come up where I was able to take on the role of leading some of our really core runtimes and databases on the Z platform, IMS and z/TPF. And then recently just expanded my scope to take on CICS and a number of our other offerings related to those kind of in this whole application platform space. And I was really excited because just of how important these runtimes and this platform is to the world,really. You know, our power is two thirds of our fortune 100 clients across banking and insurance. And it's you know, some of the most powerful transaction platforms in the world. You know doing hundreds of billions of transactions a day. And you know, just something that's really exciting to be a part of and everything that it does for us. >> It's funny how distributed systems and distributed computing really enable more longevity of everything. And now with cloud, you've got new capabilities. So it's super excited. We're seeing that a big theme at AnsibleFest this idea of connecting, making things easier you know, talk about distributed computing. The cloud is one big distribute computer. So everything's kind of playing together. You have a panel discussion at AnsibleFest Virtual. Could you talk about what your topic is and share, what was some of the content in there? Content being, content as in your presentation? Not content. (laughs) >> Absolutely. Yeah, so I had the opportunity to co-host a panel with a couple of our clients. So we had Phil Allison from Black Knight and Pat Lane from Allstate and they were really joining us and talking about their experience now starting to use Ansible to manage to z/OS. So we just actually launched some content collections and helping to enable and accelerate, client's use of using Ansible to manage to z/OS back in March of this year. And we've just seen tremendous client uptake in this. And these are a couple of clients who've been working with us and, you know, getting started on the journey of now using Ansible with Z they're both you know, have it in the enterprise already working with Ansible on other platforms. And, you know, we got to talk with them about how they're bringing it into Z. What use cases they're looking at, the type of culture change, that it drives for their teams as they embark on this journey and you know where they see it going for them in the future. >> You know, this is one of the hot items this year. I know that events virtual so has a lot of content flowing around and sessions, but collections is the top story. A lot of people talking collections, collections collections, you know, integration and partnering. It hits so many things but specifically, I like this use case because you're talking about real business value. And I want to ask you specifically when you were in that use case with Ansible and Z. People are excited, it seems like it's working well. Can you talk about what problems that it solves? I mean, what was some of the drivers behind it? What were some of the results? Could you give some insight into, you know, was it a pain point? Was it an enabler? Can you just share why that was getting people are getting excited about this? >> Yeah well, certainly automation on Z, is not new, you know there's decades worth of, of automation on the platform but it's all often proprietary, you know, or bundled up like individual teams or individual people on teams have specific assets, right. That they've built and it's not shared. And it's certainly not consistent with the rest of the enterprise. And, you know, more and more, you're kind of talking about hybrid cloud. You know, we're seeing that, you know an application is not isolated to a single platform anymore right. It really expands. And so being able to leverage this common open platform to be able to manage Z in the same way that you manage the entire rest of your enterprise, whether that's Linux or Windows or network or storage or anything right. You know you can now actually bring this all together into a common automation plane in control plane to be able to manage to all of this. It's also really great from a skills perspective. So, it enables us to really be able to leverage. You know Python on the platform and that's whole ecosystem of Ansible skills that are out there and be able to now use that to work with Z. >> So it's essentially a modern abstraction layer of agility and people to work on it. (laughs) >> Yeah >> You know it's not the joke, Hey, where's that COBOL programmer. I mean, this is a serious skill gap issues though. This is what we're talking about here. You don't have to replace the, kill the old to bring in the new, this is an example of integration where it's classic abstraction layer and evolution. Is that, am I getting that right? >> Absolutely. I mean I think that Ansible's power as an orchestrator is part of why, you know, it's been so successful here because it's not trying to rip and replace and tell you that you have to rewrite anything that you already have. You know, it is that glue sort of like you used that term earlier right? It's that glue that can span you know, whether you've got rec whether you've got ACL, whether you're using z/OSMF you know, or any other kind of custom automation on the platform, you know, it works with everything and it can start to provide that transparency into it as well, and move to that, like infrastructure as code type of culture. So you can bring it into source control. You can have visibility to it as part of the Ansible automation platform and tower and those capabilities. And so you, it really becomes a part of the whole enterprise and enables you to codify a lot of that knowledge. That, you know, exists again in pockets or in individuals and make it much more accessible to anybody new who's coming to the platform. >> That's a great point, great insight.& It's worth calling out. I'm going to make a note of that and make a highlight from that insight. That was awesome. I got to ask about this notion of client uptake. You know, when you have z/OS and Ansible kind of come in together, what are the clients area? When do they get excited? When do they know that they've got to do? And what are some of the client reactions? Are they're like, wake up one day and say, "Hey, yeah I actually put Ansible and z/OS together". You know peanut butter and chocolate is (mumbles) >> Honestly >> You know, it was just one of those things where it's not obvious, right? Or is it? >> Actually I have been surprised myself at how like resoundingly positive and immediate the reactions have been, you know we have something, one of our general managers runs a general managers advisory council and at some of our top clients on the platform and you know we meet with them regularly to talk about, you know, the future direction that we're going. And we first brought this idea of Ansible managing to Z there. And literally unanimously everybody was like yes, give it to us now. (laughs) It was pretty incredible, you know? And so it's you know, we've really just seen amazing uptake. We've had over 5,000 downloads of our core collection on galaxy. And again that's just since mid to late March when we first launched. So we're really seeing tremendous excitement with it. >> You know, I want to want to talk about some of the new announcements, but you brought that up. I wanted to kind of tie into it. It is addictive when you think modernization, people success is addictive. This is another theme coming out of AnsibleFest this year is that when the sharing, the new content you know, coders content is the theme. I got to ask you because you mentioned earlier about the business value and how the clients are kind of gravitating towards it. They want it.It is addictive, contagious. In the ivory towers in the big, you know, front office, the business. It's like, we've got to make everything as a service. Right. You know, you hear that right. You know, and say, okay, okay, boss You know, Skyla, just go do it. Okay. Okay. It's so easy. You can just do it tomorrow, but to make everything as a service, you got to have the automation, right. So, you know, to bridge that gap has everything is a service whether it's mainframe. I mean okay. Mainframe is no problem. If you want to talk about observability and microservices and DevOps, eventually everything's going to be a service. You got to have the automation. Could you share your, commentary on how you view that? Because again, it's a business objective everything is a service, then you got to make it technical then you got to make it work and so on. So what's your thoughts on that? >> Absolutely. I mean, agility is a huge theme that we've been focusing on. We've been delivering a lot of capabilities around a cloud native development experience for folks working on COBOL, right. Because absolutely you know, there's a lot of languages coming to the platform. Java is incredibly powerful and it actually runs better on Z than it runs on any other platform out there. And so, you know, we're seeing a lot of clients you know, starting to, modernize and continue to evolve their applications because the platform itself is incredibly modern, right? I mean we come out with new releases, we're leading the industry in a number of areas around resiliency, in our security and all of our, you know, the face of encryption and number of things that come out with, but, you know the applications themselves are what you know, has not always kept pace with the rate of change in the industry. And so, you know, we're really trying to help enable our clients to make that leap and continue to evolve their applications in an important way, and the automation and the tools that go around it become very important. So, you know, one of the things that we're enabling is the self service, provisioning experience, right. So clients can, you know, from Open + Shift, be able to you know, say, "Hey, give me an IMS and z/OS connect stack or a kicks into DB2 stack." And that is all under the covers is going to be powered by Ansible automation. So that really, you know, you can get your system programmers and your talent out of having to do these manual tasks, right. Enable the development community. So they can use things like VS Code and Jenkins and GET Lab, and you'll have this automated CICB pipeline. And again, Ansible under the covers can be there helping to provision those test environments. You know, move the data, you know, along with the application, changes through the pipeline and really just help to support that so that, our clients can do what they need to do. >> You guys got the collections in the hub there, so automation hub, I got to ask you where do you see the future of the automating within z/OS going forward? >> Yeah, so I think, you know one of the areas that we'd like to see go is head more towards this declarative state so that you can you know, have this declarative configuration defined for your Z environment and then have Ansible really with the data and potency right. Be able to, go out and ensure that the environment is always there, and meeting those requirements. You know that's partly a culture change as well which goes along with it, but that's a key area. And then also just, you know, along with that becoming more proactive overall part of, you know, AI ops right. That's happening. And I think Ansible on the automation that we support can become you know, an integral piece of supporting that more intelligent and proactive operational direction that, you know, we're all going. >> Awesome Skyla. Great to talk to you. And so insightful, appreciate it. One final question. I want to ask you a personal question because I've been doing a lot of interviews around skill gaps and cybersecurity, and there's a lot of jobs, more job openings and there are a lot of people. And people are with COVID working at home. People are looking to get new skilled up positions, new opportunities. Again cybersecurity and spaces and event we did and want to, and for us its huge, huge openings. But for people watching who are, you know, resetting getting through this COVID want to come out on the other side there's a lot of online learning tools out there. What skill sets do you think? Cause you brought up this point about modernization and bringing new people and people as a big part of this event and the role of the people in community. What areas do you think people could really double down on? If I wanted to learn a skill. Or an area of coding and business policy or integration services, solution architects, there's a lot of different personas, but what skills can I learn? What's your advice to people out there? >> Yeah sure. I mean on the Z platform overall and skills related to Z, COBOL, right. There's, you know, like two billion lines of COBOL out there in the world. And it's certainly not going away and there's a huge need for skills. And you know, if you've got experience from other platforms, I think bringing that in, right. And really being able to kind of then bridge the two things together right. For the folks that you're working for and the enterprise we're working with you know, we actually have a bunch of education out there. You got to master the mainframe program and even a competition that goes on that's happening now, for folks who are interested in getting started at any stage, whether you're a student or later in your career, but you know learning, you know, learn a lot of those platforms you're going to be able to then have a career for life. >> Yeah. And the scale on the data, this is so much going on. It's super exciting. Thanks for sharing that. Appreciate it. Want to get that plug in there. And of course, IBM, if you learn COBOL you'll have a job forever. I mean, the mainframe's not going away. >> Absolutely. >> Skyla, thank you so much for coming on theCUBE Vice President, for the Z Application Platform and IBM, thanks for coming. Appreciate it. >> Thanks for having me. >> I'm John Furrier your host of theCUBE here for AnsibleFest 2020 Virtual. Thanks for watching. (upbeat music)

Published Date : Oct 2 2020

SUMMARY :

brought to you by Red Hat. And I hope you can come together online So, you know, I've And it's you know, some you know, talk about with us and, you know, getting started And I want to ask you in the same way that you of agility and people to work on it. kill the old to bring in on the platform, you know, You know, when you have z/OS and Ansible And so it's you know, we've I got to ask you because You know, move the data, you know, so that you can you know, But for people watching who are, you know, And you know, if you've got experience And of course, IBM, if you learn COBOL Skyla, thank you so much for coming I'm John Furrier your host of theCUBE

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

John FurrierPERSON

0.99+

Phil AllisonPERSON

0.99+

Red HatORGANIZATION

0.99+

AnsibleFestORGANIZATION

0.99+

Walter BentleyPERSON

0.99+

Skyla LoomisPERSON

0.99+

JavaTITLE

0.99+

PythonTITLE

0.99+

tomorrowDATE

0.99+

LinuxTITLE

0.99+

two thingsQUANTITY

0.99+

WindowsTITLE

0.99+

Pat LanePERSON

0.99+

this yearDATE

0.99+

SkylaPERSON

0.99+

AnsibleORGANIZATION

0.98+

bothQUANTITY

0.98+

midDATE

0.98+

100 clientsQUANTITY

0.98+

oneQUANTITY

0.98+

One final questionQUANTITY

0.98+

over 5,000 downloadsQUANTITY

0.97+

ZTITLE

0.97+

two billion linesQUANTITY

0.97+

March of this yearDATE

0.95+

Z.TITLE

0.95+

VS CodeTITLE

0.95+

COBOLTITLE

0.93+

z/OSTITLE

0.92+

single platformQUANTITY

0.91+

hundreds of billions of transactions a dayQUANTITY

0.9+

firstQUANTITY

0.9+

AllstateORGANIZATION

0.88+

Palo Alto StudiosLOCATION

0.88+

Z Application PlatformTITLE

0.86+

four years agoDATE

0.84+

COVIDEVENT

0.81+

late MarchDATE

0.81+

aboutDATE

0.8+

VicePERSON

0.79+

JenkinsTITLE

0.78+

Vice PresidentPERSON

0.77+

AnsibleFest 2020EVENT

0.77+

IBM Z.TITLE

0.72+

two thirdsQUANTITY

0.72+

one big distribute computerQUANTITY

0.72+

one dayQUANTITY

0.71+

z/OSMFTITLE

0.69+

Z.ORGANIZATION

0.69+

Black KnightTITLE

0.64+

ACLTITLE

0.64+

CICSORGANIZATION

0.63+

IMSTITLE

0.63+

Eric Herzog, IBM | VMworld 2020


 

>> Announcer: From around the globe, it's theCUBE. With digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners. >> Welcome back, I'm Stu Miniman. This is theCUBE's coverage of VMworld 2020 of course, happening virtually. And there are certain people that we talk to every year at theCUBE, and this guest, I believe, has been on theCUBE at VMworld more than any others. It's actually not Pat Gelsinger, Eric Herzog. He is the chief marketing officer and vice president of global storage channels at IBM. Eric, Mr. Zoginstor, welcome back to theCUBE, nice to see you. >> Thank you very much, Stu. IBM always enjoys hanging with you, John, and Dave. And again, glad to be here, although not in person this time at VMworld 2020 virtual. Thanks again for having IBM. >> Alright, so, you know, some things are the same, others, very different. Of course, Eric, IBM, a long, long partner of VMware's. Why don't you set up for us a little bit, you know, 2020, the major engagements, what's new with IBM and VMware? >> So, a couple of things, first of all, we have made our Spectrum Virtualize software, software defined block storage work in virtual machines, both in AWS and IBM Cloud. So we started with IBM Cloud and then earlier this year with AWS. So now we have two different cloud platforms where our Spectrum Virtualize software sits in a VM at the cloud provider. The other thing we've done, of course, is V7 support. In fact, I've done several VMUGs. And in fact, my session at VMworld is going to talk about both our support for V7 but also what we're doing with containers, CSI, Kubernetes overall, and how we can support that in a virtual VMware environment, and also we're doing with traditional ESX and VMware configurations as well. And of course, out to the cloud, as I just talked about. >> Yeah, that discussion of hybrid cloud, Eric, is one that we've been hearing from IBM for a long time. And VMware has had that message, but their cloud solutions have really matured. They've got a whole group going deep on cloud native. The Amazon solutions have been something that they've been partnering, making sure that, you know, data protection, it can span between, you know, the traditional data center environment where VMware is so dominant, and the public clouds. You're giving a session on some of those hybrid cloud solutions, so share with us a little bit, you know, where do the visions completely agree? What's some of the differences between what IBM is doing and maybe what people are hearing from VMware? >> Well, first of all, our solutions don't always require VMware to be installed. So for example, if you're doing it in a container environment, for example, with Red Hat OpenShift, that works slightly different. Not that you can't run Red Hat products inside of a virtual machine, which you can, but in this case, I'm talking Red Hat native. We also of course do VMware native and support what VMware has announced with their Kubernetes based solutions that they've been talking about since VMworld last year, obviously when Pat made some big announcements onstage about what they were doing in the container space. So we've been following that along as well. So from that perspective, we have agreement on a virtual machine perspective and of course, what VMware is doing with the container space. But then also a slightly different one when we're doing Red Hat OpenShift as a native configuration, without having a virtual machine involved in that configuration. So those are both the commonalities and the differences that we're doing with VMware in a hybrid cloud configuration. >> Yeah. Eric, you and I both have some of those scars from making sure that storage works in a virtual environment. It took us about a decade to get things to really work at the VM level. Containers, it's been about five years, it feels like we've made faster progress to make sure that we can have stateful environments, we can tie up with storage, but give us a little bit of a look back as to what we've learned and how we've made sure that containerized, Kubernetes environments, you know, work well with storage for customers today. >> Well, I think there's a couple of things. First of all, I think all the storage vendors learn from VMware. And then the expansion of virtual environments beyond VMware to other virtual environments as well. So I think all the storage vendors, including IBM learned through that process, okay, when the next thing comes, which of course in this case happens to be containers, both in a VMware environment, but in an open environment with the Kubernetes management framework, that you need to be able to support it. So for example, we have done several different things. We support persistent volumes in file block and object store. And we started with that almost three years ago on the block side, then we added the file side and now the object storage side. We also can back up data that's in those containers, which is an important feature, right? I am sitting there and I've got data now and persistent volume, but I got to back it up as well. So we've announced support for container based backup either with Red Hat OpenShift or in a generic Kubernetes environment, because we're realistic at IBM. We know that you have to exist in the software infrastructure milieu, and that includes VMware and competitors of VMware. It includes Red Hat OpenShift, but also competitors to Red Hat. And we've made sure that we support whatever the end user needs. So if they're going with Red Hat, great. If they're going with a generic container environment, great. If they're going to use VMware's container solutions, great. And on the virtualization engines, the same thing. We started with VMware, but also have added other virtualization engines. So you think the storage community as a whole and IBM in particular has learned, we need to be ready day one. And like I said, three years ago, we already had persistent volume support for block store. It's still the dominant storage and we had that three years ago. So for us, that would be really, I guess, two years from what you've talked about when containers started to take off. And within two years we had something going that was working at the end user level. Our sales team could sell our business partners. As you know, many of the business partners are really rallying around containers, whether it be Red Hat or in what I'll call a more generic environment as well. They're seeing the forest through the trees. I do think when you look at it from an end user perspective, though, you're going to see all three. So, particularly in the Global Fortune 1000, you're going to see Red Hat environments, generic Kubernetes environments, VMware environments, just like you often see in some instances, heterogeneous virtualization environments, and you're still going to see bare metal. So I think it's going to vary by application workload and use case. And I think all, I'd say midsize enterprise up, let's say, $5 billion company and up, probably will have at least two, if not all three of those environments, container, virtual machine, and bare metal. So we need to make sure that at IBM we support all those environments to keep those customers happy. >> Yeah, well, Eric, I think anybody, everybody in the industry knows, IBM can span those environments, you know, support through generations. And very much knows that everything in IT tends to be additive. You mentioned customers, Eric, you talk to a lot of customers. So bring us inside, give us a couple examples if you would, how are they dealing with this transition? For years we've been talking about, you know, enabling developers, having them be tied more tightly with what the enterprise is doing. So what are you seeing from some of your customers today? >> Well, I think the key thing is they'd like to use data reuse. So, in this case, think of a backup, a snap or replica dataset, which is real world data, and being able to use that and reuse that. And now the storage guys want to make sure they know who's, if you will, checked it out. We do that with our Spectrum Copy Data Management. You also have, of course, integration with the Ansible framework, which IBM supports, in fact, we'll be announcing some additional support for more features in Ansible coming at the end of October. We'll be doing a large launch, very heavily on containers. Containers and primary storage, containers in hybrid cloud environments, containers in big data and AI environments, and containers in the modern data protection and cyber resiliency space as well. So we'll be talking about some additional support in this case about Ansible as well. So you want to make sure, one of the key things, I think, if you're a storage guy, if I'm the VP of infrastructure, or I'm the CIO, even if I'm not a storage person, in fact, if you think about it, I'm almost 70 now. I have never, ever, ever, ever met a CIO who used to be a storage guy, ever. Whether I, I've been with big companies, I was at EMC, I was at Seagate Maxtor, I've been at IBM actually twice. I've also done seven startups, as you guys know at theCUBE. I have never, ever met a CIO who used to be a storage person. Ever, in all those years. So, what appeals to them is, how do I let the dev guys and the test guys use that storage? At the same time, they're smart enough to know that the software guys and the test guys could actually screw up the storage, lose the data, or if they don't lose the data, cost them hundreds of thousands to millions of dollars because they did something wrong and they have to reconfigure all the storage solutions. So you want to make sure that the CIO is comfortable, that the dev and the test teams can use that storage properly. It's a part of what Ansible's about. You want to make sure that you've got tight integration. So for example, we announced a container native version of our Spectrum Discover software, which gives you comprehensive metadata, cataloging and indexing. Not only for IBM's scale-out file, Spectrum Scale, not only for IBM object storage, IBM cloud object storage, but also for Amazon S3 and also for NetApp filers and also for EMC Isilon. And it's a container native. So you want to make sure in that case, we have an API. So the AI software guys, or the big data software guys could interface with that API to Spectrum Discover, let them do all the work. And we're talking about a piece of software that can traverse billions of objects in two seconds, billions of them. And is ideal to use in solutions that are hundreds of petabytes, up into multiple exabytes. So it's a great way that by having that API where the CIO is confident that the software guys can use the API, not mess up the storage because you know, the storage guys and the data scientists can configure Spectrum Discover and then save it as templates and run an AI workload every Monday, and then run a big data workload every Tuesday, and then Wednesday run a different AI workload and Thursday run a different big data. And so once they've set that up, everything is automated. And CIOs love automation, and they really are sensitive. Although they're all software guys, they are sensitive to software guys messing up the storage 'cause it could cost them money, right? So that's their concern. We make it easy. >> Absolutely, Eric, you know, it'd be lovely to say that storage is just invisible, I don't need to think about it, but when something goes wrong, you need those experts to be able to dig in. You spent some time talking about automation, so critically important. How about the management layer? You know, you think back, for years it was, vCenter would be the place that everything can plug in. You could have more generalists using it. The HCI waves were people kind of getting away from being storage specialists. Today VMware has, of course vCenter's their main estate, but they have Tanzu. On the IBM and Red Hat side, you know, this year you announced the Advanced Cluster Management. What's that management landscape look like? How does the storage get away from managing some of the bits and bytes and, you know, just embrace more of that automation that you talked about? >> So in the case of IBM, we make sure we can support both. We need to appeal to the storage nerd, the storage geek if you will. The same time to a more generalist environment, whether it be an infrastructure manager, whether it be some of the software guys. So for example, we support, obviously vCenter. We're going to be supporting all of the elements that are going to happen in a container environment that VMware is doing. We have hot integration and big time integration with Red Hat's management framework, both with Ansible, but also in the container space as well. We're announcing some things that are coming again at the end of October in the container space about how we interface with the Red Hat management schema. And so you don't always have to have the storage expert manage the storage. You can have the Red Hat administrator, or in some cases, the DevOps guys do it. So we're making sure that we can cover both sides of the fence. Some companies, this just my personal belief, that as containers become commonplace while the software guys are going to want to still control it, there eventually will be a Red Hat/container admin, just like all the big companies today have VMware admins. They all do. Or virtualization admins that cover VMware and VMware's competitors such as Hyper-V. They have specialized admins to run that. And you would argue, VMware is very easy to use, why aren't the software guys playing with it? 'Cause guess what? Those VMs are sitting on servers containing both apps and data. And if the software guy comes in to do something, messes it up, so what have of the big entities done? They've created basically a virtualization admin layer. I think that over time, either the virtualization admins become virtualization/container admins, or if it's a big enough for both estates, there'll be container admins at the Global Fortune 500, and they'll also be virtualization admins. And then the software guys, the devOps guys will interface with that. There will always be a level of management framework. Which is why we integrate, for example, with vCenter, what we're doing with Red Hat, what we do with generic Kubernetes, to make sure that we can integrate there. So we'll make sure that we cover all areas because a number of our customers are very large, but some of our customers are very small. In fact, we have a company that's in the software development space for autonomous driving. They have over a hundred petabytes of IBM Spectrum Scale in a container environment. So that's a small company that's gone all containers, at the same time, we have a bunch of course, Global Fortune 1000s where IBM plays exceedingly well that have our products. And they've got some stuff sitting in VMware, some such sitting in generic Kubernetes, some stuff sitting in Red Hat OpenShift and some stuff still in bare metal. And in some cases they don't want their software people to touch it, in other cases, these big accounts, they want their software people empowered. So we're going to make sure we could support both and both management frameworks. Traditional storage management framework with each one of our products and also management frameworks for virtualization, which we've already been doing. And now management frame first with container. We'll make sure we can cover all three of those bases 'cause that's what the big entities will want. And then in the smaller names, you'll have to see who wins out. I mean, they may still use three in a small company, you really don't know, so you want to make sure you've got everything covered. And it's very easy for us to do this integration because of things we've already historically done, particularly with the virtualization environment. So yes, the interstices of the integration are different, but we know here's kind of the process to do the interconnectivity between a storage management framework and a generic management framework, in, originally of course, vCenter, and now doing it for the container world as well. So at least we've learned best practices and now we're just tweaking those best practices in the difference between a container world and a virtualization world. >> Eric, VMworld is one of the biggest times of the year, where we all get together. I know how busy you are going to the show, meeting with customers, meeting with partners, you know, walking the hallways. You're one of the people that traveled more than I did pre-COVID. You know, you're always at the partner shows and meeting with people. Give us a little insight as to how you're making sure that, partners and customers, those conversations are still happening. We understand everything over video can be a little bit challenging, but, what are you seeing here in 2020? How's everybody doing? >> Well, so, a couple of things. First of all, I already did two partner meetings today. (laughs) And I have an end user meeting, two end user meetings tomorrow. So what we've done at IBM is make sure we do a couple things. One, short and to the point, okay? We have automated tools to actually show, drawing, just like the infamous walk up to the whiteboard in a face to face meeting, we've got that. We've also now tried to make sure everybody is being overly inundated with WebEx. And by the way, there's already a lot of WebEx anyway. I can think of meeting I had with a telco, one of the Fortune 300, and this was actually right before Thanksgiving. I was in their office in San Jose, but they had guys in Texas and guys in the East Coast all on. So we're still over WebEx, but it also was a two and a half hour meeting, actually almost a three hour meeting. And both myself and our Flash CTO went up to the whiteboard, which you could then see over WebEx 'cause they had a camera showing up onto the whiteboard. So now you have to take that and use integrated tools. One, but since people are now, I would argue, over WebEx. There is a different feel to doing the WebEx than when you're doing it face to face. We have to fly somewhere, or they have to fly somewhere. We have to even drive somewhere, so in between meetings, if you're going to do four customer calls, Stu, as you know, I travel all over the world. So I was in Sweden actually right before COVID. And in one day, the day after we had a launch, we launched our new Flash System products in February on the 11th, on February 12th, I was still in Stockholm and I had two partner meetings and two end user meetings. But the sales guy was driving me around. So in between the meetings, you'd be in the car for 20 minutes or half an hour. So it connects different when you can do WebEx after WebEx after WebEx with basically no break. So you have to be sensitive to that when you're talking to your partners, sensitive of that when you're talking to the customers sensitive when you're talking to the analysts, such as you guys, sensitive when you're talking to the press and all your various constituents. So we've been doing that at IBM, really, since the COVID thing got started, is coming up with some best practices so we don't overtax the end users and overtax our channel partners. >> Yeah, Eric, the joke I had on that is we're all following the Bill Belichick model now, no days off, just meeting, meeting, meeting every day, you can stack them up, right? You used to enjoy those downtimes in between where you could catch up on a call, do some things. I had to carve out some time to make sure that stack of books that normally I would read in the airports or on flights, everything, you know. I do enjoy reading a book every now and again, so. Final thing, I guess, Eric. Here at VMworld 2020, you know, give us final takeaways that you want your customers to have when it comes to IBM and VMware. >> So a couple of things, A, we were tightly integrated and have been tightly integrated for what they've been doing in their traditional virtualization environment. As they move to containers we'll be tightly integrated with them as well, as well as other container platforms, not just from IBM with Red Hat, but again, generic Kubernetes environments with open source container configurations that don't use IBM Red Hat and don't use VMware. So we want to make sure that we span that. In traditional VMware environments, like with Version 7 that came out, we make sure we support it. In fact, VMware just announced support for NVMe over Fibre Channel. Well, we've been shipping NVMe over Fibre Channel for just under two years now. It'll be almost two years, well, it will be two years in October. So we're sitting here in September, it's almost been two years since we've been shipping that. But they haven't supported it, so now of course we actually, as part of our launch, I pre say something, as part of our launch, the last week of October at IBM's TechU it'll be on October 27th, you can join for free. You don't need to attend TechU, we'll have a free registration page. So just follow Zoginstor or look at my LinkedIns 'cause I'll be posting shortly when we have the link, but we'll be talking about things that we're doing around V7, with support for VMware's announcement of NVMe over Fibre Channel, even though we've had it for two years coming next month. But they're announcing support, so we're doing that as well. So all of those sort of checkbox items, we'll continue to do as they push forward into the container world. IBM will be there right with them as well because we know it's a very large world and we need to support everybody. We support VMware. We supported their competitors in the virtualization space 'cause some customers have, in fact, some customers have both. They've got VMware and maybe one other of the virtualization elements. Usually VMware is the dominant of course, but if they've got even a little bit of it, we need to make sure our storage works with it. We're going to do the same thing in the container world. So we will continue to push forward with VMware. It's a tight relationship, not just with IBM Storage, but with the server group, clearly with the cloud team. So we need to make sure that IBM as a company stays very close to VMware, as well as, obviously, what we're doing with Red Hat. And IBM Storage makes sure we will do both. I like to say that IBM Storage is a Switzerland of the storage industry. We work with everyone. We work with all these infrastructure players from the software world. And even with our competitors, our Spectrum Virtualized software that comes on our Flash Systems Array supports over 550 different storage arrays that are not IBM's. Delivering enterprise-class data services, such as snapshot, replication data, at rest encryption, migration, all those features, but you can buy the software and use it with our competitors' storage array. So at IBM we've made a practice of making sure that we're very inclusive with our software business across the whole company and in storage in particular with things like Spectrum Virtualize, with what we've done with our backup products, of course we backup everybody's stuff, not just ours. We're making sure we do the same thing in the virtualization environment. Particularly with VMware and where they're going into the container world and what we're doing with our own, obviously sister division, Red Hat, but even in a generic Kubernetes environment. Everyone's not going to buy Red Hat or VMware. There are people going to do Kubernetes industry standard, they're going to use that, if you will, open source container environment with Kubernetes on top and not use VMware and not use Red Hat. We're going to make sure if they do it, what I'll call generically, if they use Red Hat, if they use VMware or some combo, we will support all of it and that's very important for us at VMworld to make sure everyone is aware that while we may own Red Hat, we have a very strong, powerful connection to VMware and going to continue to do that in the future as well. >> Eric Herzog, thanks so much for joining us. Always a pleasure catching up with you. >> Thank you very much. We love being with theCUBE, you guys do great work at every show and one of these days I'll see you again and we'll have a beer. In person. >> Absolutely. So, definitely, Dave Vellante and John Furrier send their best, I'm Stu Miniman, and thank you as always for watching theCUBE. (relaxed electronic music)

Published Date : Sep 29 2020

SUMMARY :

brought to you by VMware He is the chief marketing officer And again, glad to be here, you know, 2020, the major engagements, So we started with IBM Cloud so share with us a little bit, you know, and the differences that we're doing to make sure that we can and now the object storage side. So what are you seeing from and containers in the On the IBM and Red Hat side, you know, So in the case of IBM, we and meeting with people. and guys in the East Coast all on. in the airports or on and maybe one other of the Always a pleasure catching up with you. We love being with theCUBE, and thank you as always

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
EricPERSON

0.99+

Pat GelsingerPERSON

0.99+

IBMORGANIZATION

0.99+

Eric HerzogPERSON

0.99+

JohnPERSON

0.99+

ZoginstorPERSON

0.99+

TexasLOCATION

0.99+

DavePERSON

0.99+

StockholmLOCATION

0.99+

SwedenLOCATION

0.99+

20 minutesQUANTITY

0.99+

Dave VellantePERSON

0.99+

$5 billionQUANTITY

0.99+

San JoseLOCATION

0.99+

Stu MinimanPERSON

0.99+

FebruaryDATE

0.99+

SeptemberDATE

0.99+

billionsQUANTITY

0.99+

2020DATE

0.99+

October 27thDATE

0.99+

AWSORGANIZATION

0.99+

last yearDATE

0.99+

John FurrierPERSON

0.99+

VMworldORGANIZATION

0.99+

two secondsQUANTITY

0.99+

half an hourQUANTITY

0.99+

VMwareORGANIZATION

0.99+

ThursdayDATE

0.99+

WednesdayDATE

0.99+

Red HatTITLE

0.99+

bothQUANTITY

0.99+

February 12thDATE

0.99+

Red Hat OpenShiftTITLE

0.99+

Red HatORGANIZATION

0.99+

two yearsQUANTITY

0.99+

twoQUANTITY

0.99+

end of OctoberDATE

0.99+

twiceQUANTITY

0.99+

two and a half hourQUANTITY

0.99+

tomorrowDATE

0.99+

OctoberDATE

0.99+

SwitzerlandLOCATION

0.99+

hundreds of petabytesQUANTITY

0.99+

hundreds of thousandsQUANTITY

0.99+

StuPERSON

0.99+

PatPERSON

0.99+

Seagate MaxtorORGANIZATION

0.99+

telcoORGANIZATION

0.99+

three years agoDATE

0.99+

Sam Werner, IBM | VMworld 2020


 

(upbeat music) >> Narrator: From around the globe, it's theCUBE with digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners. >> Welcome back. I'm Stu Miniman. And this is theCUBE's coverage of VMworld 2020. Hard to believe our 11th year at the show, obviously the first time we're doing these virtually. Happy to welcome back to the program one of our CUBE alumni, so regular on the program, Sam Werner. He is the Vice President of Product at IBM. Sam, thanks so much for joining us. >> Hey, Stu, thanks for having me. It's great to be with you again and great to be at VMworld virtually. Different experience this year but still just as exciting as always. >> Yeah, well, obviously a long history between IBM and VMware. I go back in my memory to like 2002. Most people hadn't even heard of virtualization. I was working for a certain storage company, and IBM and HP and Dell were all banging on our door saying, "You really need to support this stuff, this is really important." Obviously, a lot has changed in the subsequent years. VMware's high level message talks a lot about cloud, they've got a lot of big partnerships, including IBM, of course on the cloud side as well as the system side. So why don't you bring us in your team, the relationship with VMware these days. >> Yeah. Thanks. And that's a great intro. We do have a very long relationship and history with VMware. And the thing I love about the VMware community is I'm a storage person. People in the VMworld really understand the importance of storage and having a strategy around storage for how it's deployed. Simplifying the management, automating things and probably most importantly, bringing some of the security aspects especially in today's world. So, we've got really, really strong integration with our flash system family, making it very easy to deploy and ensure you've got end-to-end data protection, encryption and everything you need to secure your mission critical applications in your VMware environment. And we spent... IBM is a leader in data protection software. And we've made large investments in our integration with VMware to ensure our customers are able to secure their data and ensure that they have backups that they can easily restore. And we've tried to make it simple enough that the VM administrator can actually do it on their own. >> Yeah. Sam, I mentioned one of the big messages we hear from VMware, of course, is that you can take that VMware stack and put it lots of places. Of course, they have heavy data center environments, but can live in Amazon with VMware and AWS. I mentioned the IBM Cloud partnership, all the other clouds and from a data protection standpoint, really, they've made it so that their partners can kind of come along with that story. So, what are you seeing from your standpoint obviously, I expect the IBM Cloud is a piece of it. But are you also... Your data protection, does that play across the full spectrum of what VMware is doing? >> Absolutely. So I mean, if you want to backup your VMware environment on AWS, you can use Spectrum Protect Plus, you can do it for on-prem, you can do it in IBM Cloud. It's interesting, because the data protection software is now being used in a much broader use cases. We've moved to a world where you take snapshots of your data, which allows you to do instantaneous recovery. It allows you to offload for longer term backups and archives or disaster recovery. But it also allows you to do things like data migration, open up new analytics, make data available for analytics and other environments. So we're seeing our customers who are using spectrum protect suite on-premises, actually then leverage it in different cloud environments, both for DR in the cloud and for things like dev test or analytics. So I think that connection, both leveraging the underlying VMware capabilities, but having a very strong application running on top that can help you with the orchestration gives you the ability to really take advantage of a hybrid multi-cloud environment. >> Yeah. And Sam, something that really goes side by side, if we're talking about data protection, big conversations we've been having with customers last few years has been things like governance, dealing with GDPR and CCCP from California, as well as the cyber resiliency, ransomware and everything like that. So how does that fit into? Give us the update on your end when it comes to those pieces? >> It's a great question because, as storage administrators, I think they struggle quite a bit with a lot of different priorities that are at odds with each other. There's this big push for AI, and big push for driving great insights from the institutional knowledge of an enterprise and driving new value to customers. And enterprises are obviously hiring data scientists and building out these neural networks. The problem is, at odds with that strategy of making data available, you have GDPR requirements, and you also have growing cyber threats out there. We've even seen an increase within this COVID world. If you think that criminals back off when there's a global pandemic, the answer is no, they do not. So there's this increased threat and increased regulation. So you really need a strategy for how you're going to manage that data. And actually, that's where something like a Spectrum Protect Plus can come in and allows you to take snapshots, build a catalog of your data, and do some analysis on the different types of data you want to make available for these different use cases. And actually bring that data into an environment where it's safe and secure. And you can also bring the copy back later. Early on, people would make copies and move it everywhere, you lose track of that data. You don't really have a single source of truth anymore. So it's really important to have an intelligent, catalogued approach to doing this. >> Wonderful. Well, Sam, one of the other big themes we see at the show, obviously, is VMware's has a big push into that cloud native discussion, Kubernetes, containerization. I've spoken with your team plenty of times at theCUBE con shows, so help connect us. There's still a little bit of two different worlds. VMs and containers, yes, they're coming together. But it's infrastructure versus app developers and oftentimes there's the technology pieces and then of course, as we always know, those organizational challenges can really slow things down if we don't plan properly. >> Yeah, I mean, it's a really good point. And in fact, VMware took years and years to get to where it is today. But now, it gives you a lot of the core capabilities you need to do to do data protection. VADP wasn't built overnight. When you look at Kubernetes, and where it is today, it's still in pretty early stages of that. We have CSI drivers, the container storage interface, they allow you to do snapshots. If you go to various storage vendors, they're in kind of different phases in their work on it. It's a little bit of the Wild West, I would say right now. Early on in Kubernetes environments or container environments, they were used for stateless applications, as we all know, now that we're moving more mission critical workloads and moving towards stateful applications, data protection becomes critical. And in fact, from our customers, it's one of the biggest challenges they say they're encountering in their digital transformation, as they move to a hybrid multi-cloud container world. So what we're doing with Spectrum Protect Plus is we're integrating directly into the CSI drivers, and providing customers the capability to do application and container where snapshots of their data again, building this catalog and information about the data and being able to make it not only available for other use cases, but also in the event you have to recover. If there's a ransomware attack, if you lose a file, if you know, something, anything malicious happens or a disaster, you can actually get back to the data you need quickly, which is obviously just as important in a Kubernetes environment as in a VMware environment. >> Yeah, Sam, it's so good to hear some of the progress here. You and I, we lived through that, fixing a storage for virtual environment, and really took about a decade to go from just, "Okay, well, I can backup everything," to, "Wait, I can really have that VM granularity." But we're about five years into containerization and storage. You talked about the CSI plugins, you talked about what we can do there. So it looks like we've learned from the past, we can accelerate a bit what we're doing so that we can have that full stack solution in these modern environments. >> Yeah, I mean, obviously, we're taking the learnings that we got from those environments. And we have a lot of customers who are leveraging their VMware environment and building on top of that, and we also have others that are looking at moving towards bare metal, in some cases. So we need to provide a lot of the same level of automation and integration that we have in VMware environments. So we're able to leverage all the learnings we have from all those years and all the challenges we had to make storage much easier to manage and deploy in these environments. So I think it'll be a much shorter learning curve this time around. >> Yeah, absolutely. It's been great to see the communities rally, so much maturation. All right, Sam, what else should people know about these solutions? We're not going to be all jammed together in either San Francisco or Las Vegas, but there's always great conversations at the show. Lots of customers, lots of learning. So what do you want people to take away from VMworld when it comes to IBM? >> Well, I think obviously, the announcements the VMware is making this week are very exciting. And I think you'll see that our storage platforms continue to come along with VMware and provide, I would say the most secure and performance storage options for VMware environments with end-to-end encryption, our ability to do snapshots and cataloguing of the snapshot for quick recovery, our ability to move into hybrid multi-cloud environments, I think gives a very flexible storage infrastructure for your VMware world. And also, as you move beyond, as you adopt containers, we have very good integration with OpenShift and any type of Kubernetes framework. So we're able to support you today with VMware and whether you're continuing to move forward with VMware and Kubernetes is on top of it, or moving forward to a hybrid multi-cloud version on bare metal, we can support all those environments with very secure storage infrastructure. The one other thing that I think people need to keep in mind is the concept of air gapping. And having copies outside of their storage infrastructure. We're actually able to bring you tape storage, as an extension to your environment. Tape is the true air gap, we actually can pull a cartridge out and put it on a shelf, and I can assure you, nobody is going to be able to change that data. So in the event, something really happens, we can recover from tape. We can give you the ability to copy the data to the cloud in a logical air gap. You can consider that a separate network. So to some extent, it is an air gap and you can retrieve the data back. And we can give you the ability to do snapshots in place, which would be your quickest recovery path. So we can give you the ability to do all three of those things within our storage products. Giving the ultimate secure environment and many options for recovery in this (mumbles) vicious world of IT, I guess. >> Yeah. Well, Sam, what's old is new again, we know that everything in IT is always additive. I remember a couple of years ago, we were joking with we had that "flip turn" of taking flash and tape. And a few years back, if you looked underneath a lot of the cloud solutions like some of those deep archives, there often was (mumbles) there. So, Sam, thank you so much. Great to catch up with you, so many pieces. Hope you and the team have lots of good conversations at VMworld. >> Thank you, Stu. It's great to be here with you again. >> Stay tuned, lots more coverage from VMworld 2020, the global digital online experience. I'm Stu Miniman and as always, thank you for watching theCUBE. (upbeat music)

Published Date : Sep 29 2020

SUMMARY :

brought to you by VMware He is the Vice President It's great to be with you again of course on the cloud side and everything you need to secure your of course, is that you both for DR in the cloud and everything like that. and allows you to take snapshots, and then of course, as we always know, but also in the event you have to recover. You and I, we lived through that, and all the challenges we had So what do you want people to take away So we can give you the ability to do lot of the cloud solutions It's great to be here with you again. I'm Stu Miniman and as always,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Sam WernerPERSON

0.99+

IBMORGANIZATION

0.99+

DellORGANIZATION

0.99+

HPORGANIZATION

0.99+

VMwareORGANIZATION

0.99+

SamPERSON

0.99+

AWSORGANIZATION

0.99+

San FranciscoLOCATION

0.99+

AmazonORGANIZATION

0.99+

Las VegasLOCATION

0.99+

StuPERSON

0.99+

Stu MinimanPERSON

0.99+

11th yearQUANTITY

0.99+

CaliforniaLOCATION

0.99+

2002DATE

0.99+

OpenShiftTITLE

0.99+

bothQUANTITY

0.98+

VMworld 2020EVENT

0.98+

KubernetesTITLE

0.98+

CUBEORGANIZATION

0.98+

todayDATE

0.98+

oneQUANTITY

0.97+

VMwareTITLE

0.97+

theCUBEORGANIZATION

0.97+

about five yearsQUANTITY

0.96+

GDPRTITLE

0.96+

VMworldORGANIZATION

0.95+

first timeQUANTITY

0.95+

single sourceQUANTITY

0.94+

threeQUANTITY

0.94+

Spectrum Protect PlusCOMMERCIAL_ITEM

0.94+

this weekDATE

0.93+

this yearDATE

0.91+

yearsQUANTITY

0.91+

two different worldsQUANTITY

0.88+

couple of years agoDATE

0.87+

last few yearsDATE

0.82+

few years backDATE

0.8+

pandemicEVENT

0.77+

IBM CloudTITLE

0.75+

Spectrum ProtectTITLE

0.74+

globalEVENT

0.73+

Vice PresidentPERSON

0.64+

COVIDEVENT

0.63+

about a decadeQUANTITY

0.59+

Denis Kennelly, IBM | VMworld 2020


 

>> Narrator: From around the globe, it's the Cube with digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners. >> Hi everybody welcome back, this is the Cube's coverage of VMworld 2020, of course,it's remote coverage virtual VMworld 2020, Dennis Kennelly is here. He's the newly minted, General Manager of IBM storage, Dennis, thanks so much for spending some time with us, and congratulations. >> Thank you, Dave. Great to be here and great to talk. >> Yeah, so you're 30 days in, you know, so you're an expert by now, but of course, long time IBMer and you've touched a lot of different basis at IBM. So that's very exciting, but your background is in engineering and products, which I think is significant. And I want to talk about that a little bit, but you've got expertise in Cloud, hybrid Cloud. You ran the security division for quite a bit of time. Actually spent some time in data management as well. So, why do you feel as though this is a great opportunity for you and of course for IBM, given your background? >> Yeah Dave, I think as you say, I'm a technologist, I'm a product guy for many, many years, almost 30 years in the business. I came to IBM as a lot of people through an acquisition, of a small company in the networking space. But since then I've had, you know, two or three careers in IBM where I worked in security, I worked in hybrid Cloud, and actually way, way back, I worked for EMC, in the storage business. >> Yeah Right now, you know, as you look at hybrid Cloud, we're in this hybrid multicloud world, I think, and again, that ties into what VMware is also talking about, I think we're the two vendors in the Mac that really pushing and focused on that strategy. And, you know, the reality is of Clouds, if you look at Cloud today where the world is, I mean, even though we have 10, 15 years, you know, 15 years into the Cloud business is 15 years since the first hyperscaler was launched. Reality is about 20, 25% of what I would call enterprise workload have actually moved on to the Cloud. And there are many reasons for that, be it security, compliance, data, privacy, et cetera, transformation, a lot of other people challenges, et cetera. But now we are actually right in the cusp of adapting that enterprise workload. The storage has a critical role to play in that, especially in the hybrid multicloud world, and we're making sure that storage is a key enabler on that genre. And that's why I think it's a critical time right now to be in storage and to help in that journey. >> And I want to come back to talk a little bit about it, but one of the things that I am excited about, in terms of your background, you've got a strong product background, and for years I had indicated that IBM sort of for a while, lost its formula in storage, you'd do all this R&D and it never hit the market, and then under your predecessor, I think IBM has done a much, much better job and you see it in your now, the last couple of quarters have been really strong for you guys. Of course, you've got the mainframe attached, which is the gift that keeps on giving, but how are you looking at your business? Again, I know you're only 30 days in, but there's been some tailwinds lately, you guys have seemed to do pretty well relative to the market. >> Yeah, my predecessor has done a fantastic job, I mean, if you look at our core storage business, as you said Dave, like our mainframe that's all was our flagship. I mean, you know, we continue to innovate there, particularly around the mainframe, things like, you know, copy services, et cetera, where we're driving a lot of innovation, and continue to lead there. But I think the more interesting, and the really exciting part is what we've done in an our what I would call our open system storage, our flash line up, where the team had got to a single core base, and a single hardware platform where we can scan right up and down the stack. And really innovating and driving very quickly there, is a critical part of what I'm driving right now and accelerate the work that has been done today. Then I think, you know, beyond, you know, the core storage platforms, if you start to look at some of the other areas like cyber resiliency, data protection, and really driving innovation there, but also leveraging other parts of IBM. I mean, we have a very strong base in security. I'm working very closely with our security teams, because I know from my days in security, you know, data protection, data recovery, real challenges for the CISO, I'm bringing those technologies and packaging those technologists so that they can help in those challenges critical for me. And last but not least, I mean, you know, as you look at things like getting an AI and I'm bringing AI to the enterprise. One of the big challenges is being able to identify where all the data is and to get an access to the data. And again, storage is a critical role to play there in terms of discovery services, et cetera, which again is a key innovation. So I think it comes down to those three things. making sure... Obviously you need a very strong product line-up, which I think we are very well equipped right now, and we have, based on the work the team have done over the last number of year. But then applying that to some of the critical problems around cyber resiliency, data protection, and also leveraging and enabling AI in the enterprise. >> So let's stay on cyber for a minute, It's an area obviously, you know, a lot about and we used to think, okay, what's the relationship between storage and cyber, and it was maybe it was encryption, you know, data at motion or data at rest, and now the lines between data protection, and cyber are really getting blurred. I mean, it's become a... Especially with COVID, it's become, >> Mhm >> A fundamental part of business resiliency, So how are you thinking about storage, and the intersection of cyber? >> Yeah, I mean, I think, you know, when, I had the, my security hat on, I mean, reality and security is, you know, the World is, you know, how you deal with a breach because at the end of the day, pretty much there is to be a security event. It's not a case of if, it's a case of when it happens, and you know, really how you respond to that, and that was where a lot of our focus was in terms of how you respond to those events, how you recover quickly, et cetera. Now, when you come across into storage, I mean, lately in the world we live in today, where at the end of the day, when there's a cyber attack, I mean, what is it that the nefarious actor is after, they are pretty much after your data assets. And, you know, things like ransomware now there's various different techniques. But how quickly your crew can respond or recover from those is really important. And that's where storage has a critical role to play. And a lot of what we are doing in the innovations, of course, things like base encryption and encryption everywhere, they are table stakes as far as IBM is concerned, we've had that for many years, within our mainframe and in our open systems, but now really thinking about how you actually recover very quickly when an event happens, and that's really where we see a lot of innovation, and where we want to talk to both sides of the house, both the storage I've been, but also on the CISO who have frankly, a big influence in terms of where investment dollars are put today and making sure that they have the capability in place to actually recover quickly when there's an attack. >> Well, as you well know for years it was, you know, security was the problem of the, you know, the Sec-Ops team, you know, >> Yes >> Not my Swim lane, but that has really changed. I mean, security has become a board level issue. Everybody's got to be involved. We're seeing more CISOs reporting into the CIO. We're also seeing CISOs have a seat at the table, they're reporting, you know, at quarterly board meetings, and so, we see every part of the IT stack, really focused on security, and even the lines of business as well. What do you say? >> Yeah, exactly, I mean, I think the CISO role has evolved over the last number of years, I mean, I think if I think back, you know, maybe five, 10 years ago, the CISO role was very much what I would call a compliance type role. So in other words, making sure we all the checks and balances in place that, you know, at the right time. putting the fast pace, changing world with Cloud and transformation, digital transformation, the CISO has to be an active part of that. We used to use the expression that the CISO was the doctor know, in other words, how to stop, you know, innovation, or how to stop things changing, That's, you know, yesterday's news, today, the CISO has to be much more pro-active, helping technology, helping transformation, and that's why you're seeing that, they have a seat at the top table right now, because they are critical, to all decisions that are made. The fact is that, you know, massive transformation is happening in every enterprise, but you're got to do that in a secure and safe manner, and the CISO is absolutely critical to that, and is influencing a lot of fine decisions around that as well. And by that we see that as a critical part of our strategy that we make sure that we have offerings and capabilities that addresses that need. >> Love to come back to the, the Cloud discussion, the hybrid Cloud, and multi-Cloud, you mentioned that early on, you guys obviously have a big play there with Red Hat and an open shift we've seen in our data that is becoming real multicloud and there used to be, you know, a lot of vendor talk, but now it's becoming a fundamental strategy. So you were saying it, you know, as a smaller portion of workloads, you know, are in the Cloud, it's all the, all the hard stuff has stayed on Prem. What's the motivation for your customers to move to a Cloud, or a hybrid Cloud strategy? What are they trying to achieve as an outcome? >> Well, I think when everything, you know, you got to stop at a business level, right? I mean, fundamentally what enterprises are doing is, especially in this cold world, everything is becoming increasingly digital going online, et cetera. So that transformation is accelerating that digital transformation, the rate and pace of that is accelerated. Now you actually stop to think about that and say, what does that mean in terms of your existing enterprise? In many cases, you know, especially for incumbents, right? They have existing systems that have existing data repositories, et cetera. So how do they leverage that and transform those to meet these new needs? And, and then of course back to the cyber concerns, right, you have security data privacy concerns, et cetera. So you have all these multiple variables going on, in our world, you know, and if you look at what has happened over the last, as I said, 15 plus years, you know, everybody said, you know, everything is moving to the public, game over, we're done. That hasn't actually happened. We really are in a multicloud world. When we talk about multicloud, that means, you know, you have the what we refer to as a traditional hyperscalers, but also the SAS properties, et cetera, that we see in every enterprise. And also you have to have a on-premise capability, but it's different than what it was traditionally, it has to have Cloud like economics. And what has been very good about the Cloud, a tremendous innovation is the elastic scaling, et cetera, on the economics that has come with the Cloud. But you have to bring that back on-premise. You can't just have one operating model in the Cloud and have something else on-premise, your infrastructure has to be flexible at scale and across border environments. And that is the true definition of what we call a hybrid multicloud. And one with critical technologies, will give you that consistency across that, and one of the reasons why, you know, we named the strategic pattern Red Hat, is containers, because from a number of years back, we could see that was the part of the technology that enabled a lot of these hybrid multicloud capabilities. IBM talked about hybrid Cloud long before, it was a popular thing to talk about a number of years back. But we could see that, you know, to enable that to happen, the critical technology was containers, and that because of both, combination of containers and Linux and hence the acquisition of Red Hat, and now we are actually leveraging that to actually drive footprint across the Hybrid Cloud environment, and everything we're doing is integrated into that container technology including storage. >> Yeah, well, of course we're here at VMworld again, virtually, but the big trends we're hearing from, from VMware and the ecosystem this week, they're, pounding on networking hybrid multicloud, as we've just talked about, you mentioned containers and Kubernetes, we're hearing a lot about security, which we just addressed the AI, ML, thinking about the points of commonality, you guys are big partners with VMworld. VMware have been for, for many, many years, a lot of open shift runs on VMware,We know that. a lot of your business critical, and mission critical workloads. So what are those points of commonality, and maybe what are some of the points of divergence in what you guys are doing? as part of >> Yeah, I mean, >> VMware tremendous partner of ours, I mean, a lot of VMware workload, as customers move to the cloud, moves to the IBM Cloud. We're probably their premier choice right now in terms of VMware workload. Also, I think in terms of, you know, I think if you look at VMware today, I think they also see a hybrid multicloud strategy, and I think there's the VMware, I would say a strategy has evolved over time. Clearly they have a huge installed base of virtual machines, which a lot of our container technology at Red Hat runs on top off. But VMware has also evolved into a container approach as well, with a lot of the announcements they've made. So I think we're on a very similar strategy when it comes to my own area on storage, in terms of how we integrate storage into that container world, there's a lot of commonality in how we approach that. I mean, developing CSI drivers, et cetera, into the container world, I think we're both doing that and doing that together. In areas, obviously we will compete and very much compete. I talked about that product lineup and obviously BMR, and obviously that relationship with Dell and others, is got to be areas where we will compete in the storage. But in terms of where we really will collaborate, I think is a lot around the hybrid multicloud strategy, and building an open ecosystem that everybody can play on. And they'll, you know, where we sit on them or they sit on us. I think you're going to see an open ecosystem across us in this hybrid multicloud World. >> Well, it seems as though from a storage standpoint, that you've got no choice, but to be open, you have to give clients as much optionality as possible. You can't say, okay, we're going to be all IBM Red Hat, you've been, you've got so many other opportunities for, term expansion. I wonder if you could talk about that, and maybe express your philosophy, just in terms of openness, and it's important in terms of competing in storage. >> I think that's been fundamental to storage since the very beginning of the storage industry. And of course, we absolutely, we have to be very open in terms of who we integrate with. And we go everywhere from like optical containers, to virtual machines to any system, all the ways for something as traditional as tape. I mean, tape, many have said, tape is dead. Tape is far from dead, even in the, hyperscaler world, where we're seeing a lot of the hyperscalers right now, are actually using tape technology and integrating tape into their environment. So there's an example, where you might not have thought about us, you know, it's something that we do, we do that in a very open fashion and continue to do that. Likewise, when it comes to security, when it comes to things like data and AI, you know, our philosophy is don't take another copy of the data, be able to access the data so that you can build your AI models, et cetera on top of that. we may have a lot to happen with some of our capabilities around spectrum scale, and we will integrate with backend arise from EMC, Hitachi, and others actually enable that to happen. So we're very open ecosystem, want to bring unique value, and if I'm making sure we can integrate both up and down the stack. >> Yeah. Well, I mean, you guys, of course, for those who have been around the storage industry, as much as I have the San volume controller, a hub was kind of the early days of storage virtualization, I think IBM was clearly one of the leaders there, and you've kind of taken that concept to data. We've seen that with Cloud packs, and so, you know, one IBM executive, you know, said to me one time, you know, we, learned our lesson many, many years ago about the importance of openness, and then you got the religion there. So I think it's pretty, >> pretty fundamental. >> I mean, >> Isn't it? >> It's pretty fundamental, I guess, we learned hard lesson many years ago, and I think, you know, when you talk about openness and something like Red Hat, I think we're definitely, putting our money where our mouth is in terms of being an open company, I'm really enabling something like Red Hat, and continue that ecosystem as you know, Red Hat is independent, was run independent of IBM, so that we want to drive that open ecosystem around Red Hat, and that is pretty fundamental to a lot of IBM, a lot of our platforms and our capability, I mean, you know, back for many years, we talked about the sound volume controller, but even if you go back far enough in history, which I can do in the storage World, and the storage API world, IBM was one of the leaders I'm building an open API around storage and storage access as well back then. So it's fundamental to the company has always been, and continues to be, I mean, we were one of the major contributors to things like Linux, That's not well known, but that, that is the truth, and, you know, things like that, what we have done over many, many years. >> Yeah, undoubtedly, I mean, I go back to Steve mills, epic decision to invest a billion dollars in Linux back in the day, and we've seen those billion dollar bets pay off in terms of flash and other areas. Dennis, what's your style going to be? I mean, again, I'm excited that you've got an engineering background, you're a product person at the end of the day, it's all about innovation, and getting that R&D out to market. What should we expect from your leadership style? >> I think you kind of said it there, I mean, I'm an engineer at heart, I really want to deliver value to our clients. You know, we have the big R&D spend in our storage unit, and I want to show value for that spend on IBM has given me a responsibility to deliver on. So to begin to deliver massive innovation and productivity from our engineering team. I mean, that's fundamentally what I do. So starting from day one, understanding our portfolio top to bottom, what are our strengths in the market, Where are our weaknesses, where we need to address some of the gaps, but also listening to our clients, which is very important to me and making sure that they see the innovation, the quality of the deliverables and that, you know, as a client or as a customer of IBM, you can be guaranteed that IBM would deliver and continues to deliver on innovation on a road map on storage. And that's really fundamental to my philosophy. I'm making sure that we can establish leadership, and continues to establish leadership, in the storage industry. So that we are a trusted partner, and a valued partner in your transformation journey. So that when you make investments with us, as a technology provider that we deliver on a roadmap and a vision that actually needs your needs going forward. I mean, that's fundamental to what, you know, my management style is about and making sure I have the right people that I can put in front of our clients and make sure they can deliver that value. >> I mean, I think that's critical, Dennis, and again, I keep hitting on your engineering background, because yes, while you have a big R&D budget, IBM probably spend $6 billion a year in R&D, you're fighting for that budget, with a lot of other divisions at IBM, so staying close to the customer is critical because you've got to place those bets. And I have firmly believed that with a strong technical background and product background, and staying close to the customer, you're going to have, you know, some big wins and more wins than losses, and you're going to be able to more efficiently deploy that capital in the form of R&D, and then quickly get it out into products. I see that as crucial today in terms of the innovation equation. >> Yeah. I mean, my philosophy, you know, fundamentally, you know, a lot of times, and I've been in engineering a long time, it's not about the size of the budget, be the dollar, be a $10, be it $100. It's how efficient we are with that dollar, and how innovative we are with that dollar. And sometimes, you know, you look at IBM and people look at a big company, maybe it doesn't move as quickly. I can guarantee you that, you know, that's fundamental that, you know, I run a startup within a small company, within a large company. I like to think of it that way and how we can innovate and move very quickly. And that's, you know, fundamental to my philosophy in terms of how I think, it's not about, okay, how can I get more budget to do exits? How can I be more efficient that I can drive more value? And then, you know, maybe then I get more budget, but you know, you got to think about detail more rather than saying, I don't want to have more inefficiency, I wanted to have more innovation, more creativity, entering new markets, looking at new capabilities, and being able to just create great new opportunities for IBM storage. >> Well, Dennis, again, congratulations on the new appointment, we look forward to at some point in the future of being able to meet face to face, but thanks so much for coming on the Cube and our coverage of VMworld. >> Thank you, Dave, and thanks for your time today, I appreciated the conversation. Thank you. >> All right, You're very welcome, and thank you for watching everybody, This is Dave Vellante for the Cube, again, wall to wall coverage of VMworld 2020, We'll be right back right after this short break. (soft music)

Published Date : Sep 29 2020

SUMMARY :

brought to you by VMware He's the newly minted, General Great to be here and great to talk. for you and of course for But since then I've had, you know, And, you know, the reality is of Clouds, and you see it in your now, I mean, you know, we and now the lines between data protection, and you know, really and even the lines of business as well. and balances in place that, you know, of workloads, you know, and one of the reasons why, you know, in what you guys are doing? Also, I think in terms of, you know, I wonder if you could talk about that, and others actually enable that to happen. said to me one time, you know, and continue that ecosystem as you know, and getting that R&D out to market. to what, you know, you know, some big wins And sometimes, you know, of being able to meet face to face, I appreciated the conversation. you for watching everybody,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DennisPERSON

0.99+

HitachiORGANIZATION

0.99+

IBMORGANIZATION

0.99+

DavePERSON

0.99+

Dave VellantePERSON

0.99+

Denis KennellyPERSON

0.99+

Dennis KennellyPERSON

0.99+

$10QUANTITY

0.99+

30 daysQUANTITY

0.99+

$100QUANTITY

0.99+

10QUANTITY

0.99+

EMCORGANIZATION

0.99+

DellORGANIZATION

0.99+

twoQUANTITY

0.99+

VMworldORGANIZATION

0.99+

15 yearsQUANTITY

0.99+

OneQUANTITY

0.99+

VMwareORGANIZATION

0.99+

todayDATE

0.99+

Red HatTITLE

0.99+

15 plus yearsQUANTITY

0.99+

two vendorsQUANTITY

0.99+

bothQUANTITY

0.99+

LinuxTITLE

0.99+

both sidesQUANTITY

0.99+

firstQUANTITY

0.98+

Steve millsPERSON

0.98+

yesterdayDATE

0.98+

oneQUANTITY

0.97+

singleQUANTITY

0.96+

this weekDATE

0.96+

billion dollarsQUANTITY

0.96+

MacCOMMERCIAL_ITEM

0.96+

almost 30 yearsQUANTITY

0.96+

three thingsQUANTITY

0.96+

10 years agoDATE

0.95+

Red HatORGANIZATION

0.95+

billion dollarQUANTITY

0.94+

about 20, 25%QUANTITY

0.94+

VMworld 2020EVENT

0.92+

VMworld 2020COMMERCIAL_ITEM

0.91+

$6 billion a yearQUANTITY

0.91+

Scott Buckles, IBM | Actifio Data Driven 2020


 

>> Narrator: From around the globe. It's theCUBE, with digital coverage of Actifio Data Driven 2020, brought to you by Actifio. >> Welcome back. I'm Stuart Miniman and this is theCUBE's coverage of Actifio Data Driven 2020. We wish everybody could join us in Boston, but instead we're doing it online this year, of course, and really excited. We're going to be digging into the value of data, how DataOps, data scientists are leveraging data. And joining me on the program, Scott Buckles, he's the North American Business Executive for database data science and DataOps with IBM, Scott, welcome to theCUBE. >> Thanks Stuart, thanks for having me, great to see you. >> Start with the Actifio-IBM partnership. Anyone that knows that Actifio knows that the IBM partnership is really the oldest one that they've had, either it's hardware through software, those joint solutions go together. So tell us about the partnership here in 2020. >> Sure. So it's been a fabulous partnership. In the DataOps world where we are looking to help, all of our customers gain efficiency and effectiveness in their data pipeline and getting value out of their data, Actifio really compliments a lot of the solutions that we have very well. So the folks from everybody from the up top, all the way through the engineering team, is a great team to work with. We're very, very fortunate to have them. How many or any specific examples or anonymized examples that you can share about joint (indistinct). >> I'm going to stay safe and go on the anonymized side. But we've had a lot of great wins, several significantly large wins, where we've had clients that have been struggling with their different data pipelines. And I say data pipeline, I mean getting value from understanding their data, to developing models and and doing the testing on that, and we can get into this in a minute, but those folks have really needed a solution where Actifio has stepped in and provided that solution. To do that at several of the largest banks in the world, including one that was a very recent merger down in the Southeast, where we were able to bring in the Actifio solution and address our, the customer's needs around how they were testing and how they were trying to really move through that testing cycle, because it was a very iterative process, a very sequential process, and they just weren't doing it fast enough, and Actifio stepped in and helped us deliver that in a much more effective way, in a much more efficient way, especially when you into a bank or two banks rather that are merging and have a lot of work to convert systems into one another and converge data, not an easy task. And that was one of the best wins that we've had in the recent months. And again, going back to the partnership, it was an awesome, awesome opportunity to work with them. >> Well, Scott, as I teed up for the beginning of the conversation, you've got data science and DataOps, help us understand how this isn't just a storage solution, when you're talking about BDP. How does DevOps fit into this? Talk a little bit about some of the constituents inside your customers that are engaging with the solution. >> Yeah. So we call it DataOps, and DataOps is both a methodology, which is really trying to combine the best of the way that we've transformed how we develop applications with DevOps and Agile Development. So going back 20 years ago, everything was a waterfall approach, everything was very slow , and then you had to wait a long time to figure out whether you had success or failure in the application that you had developed and whether it was the right application. And with the advent of DevOps and continuous delivery, the advent of things like Agile Development methodologies, DataOps is really converging that and applying that to our data pipelines. So when we look at the opportunity ahead of us, with the world exploding with data, we see it all the time. And it's not just structured data anymore, it's unstructured data, it's how do we take advantage of all the data that we have so that we can make that impact to our business. But oftentimes we are seeing where it's still a very slow process. Data scientists are struggling or business analysts are struggling to get the data in the right form so that they can create a model, and then they're having to go through a long process of trying to figure out whether that model that they've created in Python or R is an effective model. So DataOps is all about driving more efficiency, more speed to that process, and doing it in a much more effective manner. And we've had a lot of good success, and so it's part methodology, which is really cool, and applying that to certain use cases within the, in the data science world, and then it's also a part of how do we build our solutions within IBM, so that we are aligning with that methodology and taking advantage of it. So that we have the AI machine learning capabilities built in to increase that speed which is required by our customers. Because data science is great, AI is great, but you still have to have good data underneath and you have to do it at speed. Well, yeah, Scott, definitely a theme that I heard loud and clear read. IBM think this year, we do a lot of interviews with theCUBE there, it was helping with the tools, helping with the processes, and as you said, helping customers move fast. A big piece of IBM strategy there are the Cloud Paks. My understanding you've got an update with regards to BDP and Cloud Pak. So to tell us what the new releases here for the show. >> Yeah. So in our (indistinct) release that's coming up, we will be to launch BDP directly from Cloud Pak, so that you can take advantage of the Activio capabilities, which we call virtual data pipeline, straight from within Cloud Pak. So it's a native integration, and that's the first of many things to come with how we are tying those two capabilities and those two solutions more closely together. So we're excited about it and we're looking forward to getting it in our customer's hands. >> All right. And that's the Cloud Pak for Data, if I have that correct, right? >> That's called Cloud Pak for data, correct, sorry, yes. Absolutely, I should have been more clear. >> No, it's all right. It's, it's definitely, we've been watching that, those different solutions that IBM is building out with the Cloud Paks, and of course data, as we said, it's so important. Bring us inside a little bit, if you could, the customers. What are the use cases, those problems that you're helping your customers solve with these solution? >> Sure. So there's three primary use cases. One is about accelerating the development process. Getting into how do you take data from its raw form, which may or may not be usable, in a lot of cases it's not, and getting it to a business ready state, so that your data scientists, your business, your data models can take advantage of it, about speed. The second is about reducing storage costs. As data has exponentially grown so has storage costs. We've been in the test data management world for a number of years now. And our ability to help customers reduce that storage footprint is also tied to actually the acceleration piece, but helping them reduce that cost is a big part of it. And then the third part is about mitigating risk. With the amount of data security challenges that we've seen, customers are continuously looking for ways to mitigate their exposure to somebody manipulating data, accessing production data and manipulating production data, especially sensitive data. And by virtualizing that data, we really almost fully mitigate that risk of them being able to do that. Somebody either unintentionally or intentionally altering that data and exposing a client. >> Scott, I know IBM is speaking at the Data Driven event. I read through some of the pieces that they're talking about. It looks like really what you talk about accelerating customer outcomes, helping them be more productive, if you could, what, what are some of key measurements, KPIs that your customers have when they successfully deploy the solution? >> So when it comes to speed, it's really about, we're looking at about how are we reducing the time of that project, right? Are we able to have a material impact on the amount of time that we see clients get through a testing cycle, right? Are we taking them from months to days, are we taking them from weeks to hours? Having that type of material impact. The other piece on storage costs is certainly looking at what is the future growth? You're not necessarily going to reduce storage costs, but are you reducing the growth or the speed at which your storage costs are growing. And then the third piece is really looking at how are we minimizing the vulnerabilities that we have. And when you go through an audit, internally or externally around your data, understanding that the number of exposures and helping find a material impact there, those vulnerabilities are reduced. >> Scott, last question I have for you. You talk about making data scientists more efficient and the like, what are you seeing organizationally, have teams come together or are they planning together, who has the enablement to be able to leverage some of the more modern technologies out there? >> Well, that's a great question. And it varies. I think the organizations that we see that have the most impact are the ones that are most open to bringing their data science as close to the business as possible. The ones that are integrating their data organizations, either the CDO organization or wherever that may set it. Even if you don't have a CDO, that data organization and who owned those data scientists, and folding them and integrating them into the business so that they're an integral part of it, rather than a standalone organization. I think the ones that sort of weave them into the fabric of the business are the ones that get the most benefit and we've seen have the most success thus far. >> Well, Scott, absolutely. We know how important data is and getting full value out of those data scientists, critical initiative for customers. Thanks so much for joining us. Great to get the updates. >> Oh, thank you for having me. Greatly appreciated. >> Stay tuned for more coverage from Activio Data Driven 2020. I'm Stuart Miniman, and thank you for watching theCUBE. (upbeat music)

Published Date : Sep 16 2020

SUMMARY :

Narrator: From around the globe. And joining me on the thanks for having me, great to see you. is really the oldest one that they've had, the solutions that we have very well. To do that at several of the beginning of the conversation, in the application that you had developed and that's the first of And that's the Cloud Pak for Data, Absolutely, I should have been more clear. What are the use cases, and getting it to a business ready state, at the Data Driven event. on the amount of time that we see leverage some of the more are the ones that are most open to and getting full value out of Oh, thank you for having me. I'm Stuart Miniman, and thank

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ScottPERSON

0.99+

StuartPERSON

0.99+

IBMORGANIZATION

0.99+

BostonLOCATION

0.99+

Scott BucklesPERSON

0.99+

Stuart MinimanPERSON

0.99+

2020DATE

0.99+

third pieceQUANTITY

0.99+

ActifioORGANIZATION

0.99+

two banksQUANTITY

0.99+

OneQUANTITY

0.99+

Cloud PakTITLE

0.99+

two solutionsQUANTITY

0.99+

PythonTITLE

0.99+

DevOpsTITLE

0.99+

third partQUANTITY

0.99+

secondQUANTITY

0.99+

firstQUANTITY

0.99+

Actifio Data Driven 2020TITLE

0.98+

oneQUANTITY

0.98+

theCUBEORGANIZATION

0.98+

two capabilitiesQUANTITY

0.98+

Cloud PaksTITLE

0.97+

20 years agoDATE

0.97+

this yearDATE

0.96+

three primary use casesQUANTITY

0.96+

bothQUANTITY

0.95+

DataOpsORGANIZATION

0.95+

DataOpsTITLE

0.94+

SoutheastLOCATION

0.94+

AgileTITLE

0.94+

Agile DevelopmentTITLE

0.92+

RTITLE

0.88+

North AmericanPERSON

0.78+

Activio Data Driven 2020TITLE

0.74+

CloudCOMMERCIAL_ITEM

0.74+

BDPTITLE

0.7+

Data DrivenEVENT

0.67+

BDPORGANIZATION

0.53+

PaksTITLE

0.52+

minuteQUANTITY

0.52+

Inderpal Bhandari, IBM | MIT CDOIQ 2020


 

>>from around the globe If the cube with digital coverage of M I t. Chief data officer and Information quality symposium brought to you by Silicon Angle Media >>Hello, everyone. This is Day Volonte and welcome back to our continuing coverage of the M I t. Chief Data Officer CDO I Q event Interpol Bhandari is here. He's a leading voice in the CDO community and a longtime Cubillan Interpol. Great to see you. Thanks for coming on for this. Especially >>program. My pleasure. >>So when you you and I first met, you laid out what I thought was, you know, one of the most cogent frameworks to understand what a CDO is job was where the priority should be. And one of those was really understanding how, how, how data contributes to the monetization of station aligning with lines of business, a number of other things. And that was several years ago. A lot of change since then. You know, we've been doing this conference since probably twenty thirteen and back then, you know, Hadoop was coming on strong. A lot of CEOs didn't want to go near the technology that's beginning to change. CDOs and cto Zehr becoming much more aligned at the hip. The reporting organizations have changed. But I love your perspective on what you've observed as changing in the CDO roll over the last half decade or so. >>Well, did you know that I became chief data officer in two thousand six? December two thousand and six And I have done this job four times four major overnight have created of the organization from scratch each time. Now, in December of two thousand six, when I became chief data officer, there were only four. Chief Data Officer, uh, boom and I was the first in health care, and there were three, three others, you know, one of the Internet one and credit guns one and banking. And I think I'm the only one actually left standing still doing this job. That's a good thing or a bad thing. But like, you know, it certainly has allowed me to love the craft and then also scripted down to the level that, you know, I actually do think of it purely as a craft. That is. I know, going into a mutual what I'm gonna do. They were on the central second. No, the interesting things that have unfolded. Obviously, the professions taken off There are literally thousands off chief data officers now, and there are plenty off changes. I think the main change, but the job is it's, I think, a little less daunting in terms off convincing the senior leadership that it's need it because I think the awareness at the CEO level is much, much, much better than what it waas in two thousand six. Across the world. Now, having said that, I think it is still only awareness and don't think that there's really a deep understanding of those levels. And so there's a lot off infusion, which is why you will. You kind of think this is my period. But you saw all these professions take off with C titles, right? Chief Data officer, chief analytics officer, chief digital officer and chief technology officer. See, I off course is being there for a long time. And but I think these newer see positions. They're all very, very related, and they all kind of went to the same need which had to do with enterprise transformation, digital transformation, that enterprises chief digital officer, that's another and and people were all trying to essentially feel the elephants and they could only see part of it at the senior levels, and they came up with which have a role you know, seemed most meaningful to them. But really, all of us are trying to do the same job, which is to accelerate digital transformation in the enterprise. Your comment about you kind of see that the seat eels and sea deals now, uh, partnering up much more than in the past, and I think that's in available the major driving force full. That is, in my view, anyway. It's is artificial intelligence as people try to infuse artificial intelligence. Well, then it's very technical field. Still, it's not something that you know you can just hand over to somebody who has the business jobs, but not the deep technical chops to pull that off. And so, in the case off chief data officers that do have the technical jobs, you'll see them also pretty much heading up the I effort in total and you know, as I do for the IBM case, will be building the Data and AI Enablement internal platform for for IBM. But I think in other cases you you've got Chief date officers who are coming in from a different angle. You know, they built Marghera but the CTO now, because they have to. Otherwise you cannot get a I infused into the organization. >>So there were a lot of other priorities, obviously certainly digital transformation. We've been talking about it for years, but still in many organisations, there was a sense of, well, not on my watch, maybe a sense of complacency or maybe just other priorities. Cove. It obviously has changed that now one hundred percent of the companies that we talked to are really putting this digital transformation on the front burner. So how has that changed the role of CDO? Has it just been interpolate an acceleration of that reality, or has it also somewhat altered the swim lanes? >>I think I think it's It's It's Bolt actually, so I have a way of looking at this in my mind, the CDO role. But if you look at it from a business perspective, they're looking for three things. The CEO is looking for three things from the CDO. One is you know this person is going to help with the revenue off the company by enabling the production of new products, new products of resulting in new revenue and so forth. That's kind of one aspect of the monetization. Another aspect is the CEO is going to help with the efficiency within the organization by making data a lot more accessible, as well as enabling insights that reduce into and cycle time for major processes. And so that's another way that they have monitor. And the last one is a risk reduction that they're going to reduce the risk, you know, as regulations. And as you have cybersecurity exposure on incidents that you know just keep keep accelerating as well. You're gonna have to also step in and help with that. So every CDO, the way their senior leadership looks at them is some mix off three. And in some cases, one has given more importance than the other, and so far, but that's how they are essentially looking at it now. I think what digital transformation has done is it's managed to accelerate, accelerate all three off these outcomes because you need to attend to all three as you move forward. But I think that the individual balance that's struck for individuals reveals really depends on their ah, their company, their situation, who their peers are, who is actually leading the transformation and so >>forth, you know, in the value pie. A lot of the early activity around CDO sort of emanated from the quality portions of the organization. It was sort of a compliance waited roll, not necessarily when you started your own journey here. Obviously been focused on monetization how data contributes to that. But But you saw that generally, organizations, even if they didn't have a CDO, they had this sort of back office alliance thing that has totally changed the the in the value equation. It's really much more about insights, as you mentioned. So one of the big changes we've seen in the organization is that data pipeline you mentioned and and cycle time. And I'd like to dig into that a little bit because you and I have talked about this. This is one of the ways that a chief data officer and the related organizations can add the most value reduction in that cycle time. That's really where the business value comes from. So I wonder if we could talk about that a little bit and how that the constituents in the stakeholders in that in that life cycle across that data pipeline have changed. >>That's a very good question. Very insightful questions. So if you look at ah, company like idea, you know, my role in totally within IBM is to enable Ibn itself to become an AI enterprise. So infuse a on into all our major business processes. You know, things like our supply chain lead to cash well, process, you know, our finance processes like accounts receivable and procurement that soulful every major process that you can think off is using Watson mouth. So that's the That's the That's the vision that's essentially what we've implemented. And that's how we are using that now as a showcase for clients and customers. One of the things that be realized is the data and Ai enablement spots off business. You know, the work that I do also has processes. Now that's the pipeline you refer to. You know, we're setting up the data pipeline. We're setting up the machine learning pipeline, deep learning blank like we're always setting up these pipelines, And so now you have the opportunity to actually turn the so called EI ladder on its head because the Islander has to do with a first You collected data, then you curated. You make sure that it's high quality, etcetera, etcetera, fit for EI. And then eventually you get to applying, you know, ai and then infusing it into business processes. And so far, But once you recognize that the very first the earliest creases of work with the data those themselves are essentially processes. You can infuse AI into those processes, and that's what's made the cycle time reduction. And although things that I'm talking about possible because it just makes it much, much easier for somebody to then implement ai within a lot enterprise, I mean, AI requires specialized knowledge. There are pieces of a I like deep learning, but there are, you know, typically a company's gonna have, like a handful of people who even understand what that is, how to apply it. You know how models drift when they need to be refreshed, etcetera, etcetera, and so that's difficult. You can't possibly expect every business process, every business area to have that expertise, and so you've then got to rely on some core group which is going to enable them to do so. But that group can't do it manually because I get otherwise. That doesn't scale again. So then you come down to these pipelines and you've got to actually infuse AI into these data and ai enablement processes so that it becomes much, much easier to scale across another. >>Some of the CEOs, maybe they don't have the reporting structure that you do, or or maybe it's more of a far flung organization. Not that IBM is not far flung, but they may not have the ability to sort of inject AI. Maybe they can advocate for it. Do you see that as a challenge for some CEOs? And how do they so to get through that, what's what's the way in which they should be working with their constituents across the organization to successfully infuse ai? >>Yeah, that's it's. In fact, you get a very good point. I mean, when I joined IBM, one of the first observations I made and I in fact made it to a senior leadership, is that I didn't think that from a business standpoint, people really understood what a I met. So when we talked about a cognitive enterprise on the I enterprise a zaydi em. You know, our clients don't really understand what that meant, which is why it became really important to enable IBM itself to be any I enterprise. You know that. That's my data strategy. Your you kind of alluded to the fact that I have this approach. There are these five steps, while the very first step is to come up with the data strategy that enables a business strategy that the company's on. And in my case, it was, Hey, I'm going to enable the company because it wants to become a cloud and cognitive company. I'm going to enable that. And so we essentially are data strategy became one off making IBM. It's something I enterprise, but the reason for doing that the reason why that was so important was because then we could use it as a showcase for clients and customers. And so But I'm talking with our clients and customers. That's my role. I'm really the only role I'm playing is what I call an experiential selling there. I'm saying, Forget about you know, the fact that we're selling this particular product or that particular product that you got GPU servers. We've got you know what's an open scale or whatever? It doesn't really matter. Why don't you come and see what we've done internally at scale? And then we'll also lay out for you all the different pain points that we have to work through using our products so that you can kind of make the same case when you when you when you apply it internally and same common with regard to the benefit, you know the cycle, time reduction, some of the cycle time reductions that we've seen in my process is itself, you know, like this. Think about metadata business metadata generating that is so difficult. And it's again, something that's critical if you want to scale your data because you know you can't really have a good catalogue of data if you don't have good business, meditate. Eso. Anybody looking at what's in your catalog won't understand what it is. They won't be able to use it etcetera. And so we've essentially automated business metadata generation using AI and the cycle time reduction that was like ninety five percent, you know, haven't actually argue. It's more than that, because in the past, most people would not. For many many data sets, the pragmatic approach would be. Don't even bother with the business matter data. Then it becomes just put somewhere in the are, you know, data architecture somewhere in your data leg or whatever, you have data warehouse, and then it becomes the data swamp because nobody understands it now with regard to our experience applying AI, infusing it across all our major business processes are average cycle time reduction is seventy percent, so just a tremendous amount of gains are there. But to your point, unless you're able to point to some application at scale within the enterprise, you know that's meaningful for the enterprise, Which is kind of what the what the role I play in terms of bringing it forward to our clients and customers. It's harder to argue. I'll make a case or investment into A I would then be enterprise without actually being able to point to those types of use cases that have been scaled where you can demonstrate the value. So that's extremely important part of the equation. To make sure that that happens on a regular basis with our clients and customers, I will say that you know your point is vomited a lot off. Our clients and customers come back and say, Tell me when they're having a conversation. I was having a conversation just last week with major major financial service of all nations, and I got the same point saying, If you're coming out of regulation, how do I convince my leadership about the value of a I and you know, I basically responded. He asked me about the scale use cases You can show that. But perhaps the biggest point that you can make as a CDO after the senior readership is can we afford to be left up? That is the I think the biggest, you know, point that the leadership has to appreciate. Can you afford to be left up? >>I want to come back to this notion of seventy percent on average, the cycle time reduction. That's astounding. And I want to make sure people understand the potential impacts. And, I would say suspected many CEOs, if not most understand sort of system thinking. It's obviously something that you're big on but often times within organisations. You might see them trying to optimize one little portion of the data lifecycle and you know having. Okay, hey, celebrate that success. But unless you can take that systems view and reduce that overall cycle time, that's really where the business value is. And I guess my we're real question around. This is Every organization has some kind of Northstar, many about profit, and you can increase revenue are cut costs, and you can do that with data. It might be saving lives, but ultimately to drive this data culture, you've got to get people thinking about getting insights that help you with that North Star, that mission of the company, but then taking a systems view and that's seventy percent cycle time reduction is just the enormous business value that that drives, I think, sometimes gets lost on people. And these air telephone numbers in the business case aren't >>yes, No, absolutely. It's, you know, there's just a tremendous amount of potential on, and it's it's not an easy, easy thing to do by any means. So we've been always very transparent about the Dave. As you know, we put forward this this blueprint right, the cognitive enterprise blueprint, how you get to it, and I kind of have these four major pillars for the blueprint. There's obviously does this data and you're getting the data ready for the consummation that you want to do but also things like training data sets. How do you kind of run hundreds of thousands of experiments on a regular basis, which kind of review to the other pillar, which is techology? But then the last two pillars are business process, change and the culture organizational culture, you know, managing organizational considerations, that culture. If you don't keep all four in lockstep, the transformation is usually not successful at an end to end level, then it becomes much more what you pointed out, which is you have kind of point solutions and the role, you know, the CEO role doesn't make the kind of strategic impact that otherwise it could do so and this also comes back to some of the only appointee of you to do. If you think about how do you keep those four pillars and lock sync? It means you've gotta have the data leader. You also gotta have the technology, and in some cases they might be the same people. Hey, just for the moment, sake of argument, let's say they're all different people and many, many times. They are so the data leader of the technology of you and the operations leaders because the other ones own the business processes as well as the organizational years. You know, they've got it all worked together to make it an effective conservation. And so the organization structure that you talked about that in some cases my peers may not have that. You know, that's that. That is true. If the if the senior leadership is not thinking overall digital transformation, it's going to be difficult for them to them go out that >>you've also seen that culturally, historically, when it comes to data and analytics, a lot of times that the lines of business you know their their first response is to attack the quality of the data because the data may not support their agenda. So there's this idea of a data culture on, and I want to ask you how self serve fits into that. I mean, to the degree that the business feels as though they actually have some kind of ownership in the data, and it's largely, you know, their responsibility as opposed to a lot of the finger pointing that has historically gone on. Whether it's been decision support or enterprise data, warehousing or even, you know, Data Lakes. They've sort of failed toe live up to that. That promise, particularly from a cultural standpoint, it and so I wonder, How have you guys done in that regard? How did you get there? Many Any other observations you could make in that regard? >>Yeah. So, you know, I think culture is probably the hardest nut to crack all of those four pillars that I back up and you've got You've got to address that, Uh, not, you know, not just stop down, but also bottom up as well. As you know, period. Appear I'll give you some some examples based on our experience, that idea. So the way my organization is set up is there is a obviously a technology on the other. People who are doing all the data engineering were kind of laying out the foundational technical elements or the transformation. You know, the the AI enabled one be planning networks, and so so that are those people. And then there is another senior leader who reports directly to me, and his organization is all around adoptions. He's responsible for essentially taking what's available in the technology and then working with the business areas to move forward and make this make and infuse. A. I do the processes that the business and he is looking. It's done in a bottom upwards, deliberately set up, designed it to be bottom up. So what I mean by that is the team on my side is fully empowered to move forward. Why did they find a like minded team on the other side and go ahead and do it? They don't have to come back for funding they don't have, You know, they just go ahead and do it. They're basically empowered to do that. And that particular set up enabled enabled us in a couple of years to have one hundred thousand internal users on our Central data and AI enabled platform. And when I mean hundred thousand users, I mean users who were using it on a monthly basis. We company, you know, So if you haven't used it in a month, we won't come. So there it's over one hundred thousand, even very rapidly to that. That's kind of the enterprise wide storm. That's kind of the bottom up direction. The top down direction Waas the strategic element that I talked with you about what I said, Hey, be our data strategy is going to be to create, make IBM itself into any I enterprise and then use that as a showcase for plants and customers That kind of and be reiterated back. And I worked the senior leadership on that view all the time talking to customers, the central and our senior leaders. And so that's kind of the air cover to do this, you know, that mix gives you, gives you that possibility. I think from a peer to peer standpoint, but you get to these lot scale and to end processes, and that there, a couple of ways I worked that one way is we've kind of looked at our enterprise data and said, Okay, therefore, major pillars off data that we want to go after data, tomato plants, data about our offerings, data about financial data, that s and then our work full student and then within that there are obviously some pillars, like some sales data that comes in and, you know, been workforce. You could have contractors. Was his employees a center But I think for the moment, about these four major pillars off data. And so let me map that to end to end large business processes within the company. You know, the really large ones, like Enterprise Performance Management, into a or lead to cash generation into and risk insides across our full supply chain and to and things like that. And we've kind of tied these four major data pillars to those major into and processes Well, well, yes, that there's a mechanism they're obviously in terms off facilitating, and to some extent one might argue, even forcing some interaction between teams that are the way they talk. But it also brings me and my peers much closer together when you set it up that way. And that means, you know, people from the HR side people from the operation side, the data side technology side, all coming together to really move things forward. So all three tracks being hit very, very hard to move the culture fall. >>Am I also correct that you have, uh, chief data officers that reporting to you whether it's a matrix or direct within the division's? Is that right? >>Yeah, so? So I mean, you know, for in terms off our structure, as you know, way our global company, we're also far flung company. We have many different products in business units and so forth. And so, uh, one of the things that I realized early on waas we are going to need data officers, each of those business units and the business units. There's obviously the enterprise objective. And, you know, you could think of the enterprise objectives in terms of some examples based on what I said in the past, which is so enterprise objective would be We've gotta have a data foundation by essentially making data along these four pillars. I talked about clients offerings, etcetera, you know, very accessible self service. You have mentioned south, so thank you. This is where the South seven speaks. Comes it right. So you can you can get at that data quickly and appropriately, right? You want to make sure that the access control, all that stuff is designed out and you're able to change your policies and you'd swap manual. But, you know, those things got implemented very rapidly and quickly. And so you've got you've got that piece off off the off the puzzle due to go after. And then I think the other aspect off off. This is, though, when you recognize that every business unit also has its own objectives and they are looking at some of those things somewhat differently. So I'll give you an example. We've got data any our product units. Now, those CEOs right there, concern is going to be a lot more around the products themselves And how were monetizing those box and so they're not per se concerned with, You know, how you reduce the enter and cycle time off IBM in total supply chain so that this is my point. So they but they're gonna have substantial considerations and objectives that they want to accomplish. And so I recognize that early on, and we came up with this notion off a data officer council and I helped staff the council s. So this is why that's the Matrix to reporting that we talked about. But I selected some of the key Blair's that we have in those units, and I also made sure they were funded by the unit. So they report into the units because their paycheck is actually determined. Pilot unit and which makes them than aligned with the objectives off the unit, but also obviously part of my central approach so that I can disseminate it out to the organization. It comes in very, very handy when you are trying to do things across the company as well. So when we you know GDP our way, we have to get the company ready for Judy PR, I would say that this mechanism became a key key aspect of what enabled us to move forward and do it rapidly. Trouble them >>be because you had the structure that perhaps the lines of business weren't. Maybe is concerned about GDP are, but you had to be concerned with it overall. And this allowed you to sort of hiding their importance, >>right? Because think of in the case of Jeannie PR, they have to be a company wide policy and implementation, right? And if he did not have that structure already in place, it would have made it that much harder. Do you get that uniformity and consistency across the company, right, You know, So you will have to in the weapon that structure, but we already have it because way said Hey, this is around for data. We're gonna have these types of considerations that they are. And so we have this thing regular. You know, this man network that meat meets regularly every month, actually, and you know, when things like GDP are much more frequently than that, >>right? So that makes sense. We're out of time. But I wonder if we could just close if you could address the M I t CDO audience that probably this is the largest audience, Believe or not, now that it's that's virtual definitely expanded the audience, but it's still a very elite group. And the reason why I was so pleased that you agreed to do this is because you've got one of the more complex organizations out there and you've succeeded. And, ah, a lot of the hard, hard work. So what? What message would you leave the M I t CDO audience Interpol? >>So I would say that you know, it's it's this particular professional. Receiving a profession is, uh, if I have to pick one trait of let me pick two traits, I think what is your A change agent? So you have to be really comfortable with change things are going to change, the organization is going to look to you to make those changes. And so that's what aspect off your job, you know, may or may not be part of me immediately. But the those particular set of skills and characteristics and something that you know, one has to, uh one has to develop or time, And I think the other thing I would say is it's a continuous looming jaw. So you continue sexism and things keep changing around you and changing rapidly. And, you know, if you just even think just in terms off the subject areas, I mean this Syria today you've got to understand technology. Obviously, you've gotta understand data you've got to understand in a I and data science. You've got to understand cybersecurity. You've gotta understand the regulatory framework, and you've got to keep all that in mind, and you've got to distill it down to certain trends. That's that's happening, right? I mean, so this is an example of that is that there's a trend towards more regulation around privacy and also in terms off individual ownership of data, which is very different from what's before the that's kind of weather. Bucket's going and so you've got to be on top off all those things. And so the you know, the characteristic of being a continual learner, I think is a is a key aspect off this job. One other thing I would add. And this is All Star Coleman nineteen, you know, prik over nineteen in terms of those four pillars that we talked about, you know, which had to do with the data technology, business process and organization and culture. From a CDO perspective, the data and technology will obviously from consent, I would say most covert nineteen most the civil unrest. And so far, you know, the other two aspects are going to be critical as we move forward. And so the people aspect of the job has never bean, you know, more important down it's today, right? That's something that I find myself regularly doing the stalking at all levels of the organization, one on a one, which is something that we never really did before. But now we find time to do it so obviously is doable. I don't think it's just it's a change that's here to stay, and it ships >>well to your to your point about change if you were in your comfort zone before twenty twenty two things years certainly taking you out of it into Parliament. All right, thanks so much for coming back in. The Cuban addressing the M I t CDO audience really appreciate it. >>Thank you for having me. That my pleasant >>You're very welcome. And thank you for watching everybody. This is Dave a lot. They will be right back after this short >>break. You're watching the queue.

Published Date : Sep 3 2020

SUMMARY :

to you by Silicon Angle Media Great to see you. So when you you and I first met, you laid out what I thought was, you know, one of the most cogent frameworks and they came up with which have a role you know, seemed most meaningful to them. So how has that changed the role of CDO? And the last one is a risk reduction that they're going to reduce the risk, you know, So one of the big changes we've seen in the organization is that data pipeline you mentioned and and Now that's the pipeline you refer that you do, or or maybe it's more of a far flung organization. That is the I think the biggest, you know, and you know having. and the role, you know, the CEO role doesn't make the kind of strategic impact and it's largely, you know, their responsibility as opposed to a lot of the finger pointing that has historically gone And that means, you know, people from the HR side people from the operation side, So I mean, you know, for in terms off our structure, as you know, And this allowed you to sort of hiding their importance, and consistency across the company, right, You know, So you will have to in the weapon that structure, And the reason why I was so pleased that you agreed to do this is because you've got one And so the you know, the characteristic of being a two things years certainly taking you out of it into Parliament. Thank you for having me. And thank you for watching everybody. You're watching the queue.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

seventy percentQUANTITY

0.99+

DecemberDATE

0.99+

Inderpal BhandariPERSON

0.99+

seventy percentQUANTITY

0.99+

threeQUANTITY

0.99+

first stepQUANTITY

0.99+

five stepsQUANTITY

0.99+

ninety five percentQUANTITY

0.99+

two thousandQUANTITY

0.99+

Silicon Angle MediaORGANIZATION

0.99+

hundred thousand usersQUANTITY

0.99+

last weekDATE

0.99+

DavePERSON

0.99+

thousandsQUANTITY

0.99+

one hundred thousandQUANTITY

0.99+

OneQUANTITY

0.99+

one hundred percentQUANTITY

0.99+

fourQUANTITY

0.99+

firstQUANTITY

0.99+

oneQUANTITY

0.99+

two traitsQUANTITY

0.98+

eachQUANTITY

0.98+

NorthstarORGANIZATION

0.98+

two aspectsQUANTITY

0.98+

todayDATE

0.98+

four pillarsQUANTITY

0.97+

first responseQUANTITY

0.97+

North StarORGANIZATION

0.97+

SyriaLOCATION

0.97+

three thingsQUANTITY

0.97+

secondQUANTITY

0.96+

over one hundred thousandQUANTITY

0.95+

several years agoDATE

0.95+

one traitQUANTITY

0.94+

sixQUANTITY

0.93+

yearsQUANTITY

0.93+

nineteenQUANTITY

0.93+

one wayQUANTITY

0.93+

four major pillarsQUANTITY

0.92+

last half decadeDATE

0.92+

IbnORGANIZATION

0.92+

InterpolPERSON

0.91+

BhandariPERSON

0.91+

first observationsQUANTITY

0.91+

each timeQUANTITY

0.9+

MITORGANIZATION

0.9+

hundreds of thousands of experimentsQUANTITY

0.89+

CDOTITLE

0.89+

two pillarsQUANTITY

0.87+

a monthQUANTITY

0.86+

one aspectQUANTITY

0.86+

twenty thirteenDATE

0.85+

JeanniePERSON

0.84+

two thingsQUANTITY

0.83+

four pillarsQUANTITY

0.82+

2020DATE

0.8+

Sam Werner, IBM & Brent Compton, Red Hat | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>>from around the globe. It's the Cube with coverage of Coop Con and Cloud, Native Con Europe 2020 Virtual brought to You by Red Hat, The Cloud Native Computing Foundation and its Ecosystem Partners. >>And welcome back to the Cube's coverage of Cube Con Cloud, Native Con Europe 20 twenties Virtual event. I'm Stew Minimum and and happy to Welcome back to the program, two of our Cube alumni. We're gonna be talking about storage in this kubernetes and container world. First of all, we have Sam Warner. He is the vice president of storage, offering management at IBM, and joining him is Brent Compton, senior director of storage and data architecture at Red Hat and Brent. Thank you for joining us, and we get to really dig in. It's the combined IBM and red hat activity in this space, of course, both companies very active in the space of the acquisition, and so we're excited to hear about what's going going. Ford. Sam. Maybe if we could start with you as the tee up, you know, Both Red Hat and IBM have had their conferences this year. We've heard quite a bit about how you know, Red Hat the solutions they've offered. The open source activity is really a foundational layer for much of what IBM is doing when it comes to storage, you know, What does that mean today? >>First of all, I'm really excited to be virtually at Cube Con this year, and I'm also really excited to be with my colleague Brent from Red Hat. This is, I think, the first time that IBM storage and Red Hat Storage have been able to get together and really articulate what we're doing to help our customers in the context of kubernetes and and also with open shift, the things we're doing there. So I think you'll find, ah, you know, as we talked today, that there's a lot of work we're doing to bring together the core capabilities of IBM storage that been helping enterprises with there core applications for years alongside, Ah, the incredible open source capabilities being developed, you know, by red Hat and how we can bring those together to help customers, uh, continue moving forward with their initiatives around kubernetes and rebuilding their applications to be develop once, deploy anywhere, which runs into quite a few challenges for storage. So, Brennan, I'm excited to talk about all the great things we're doing. Excited about getting to share it with everybody else. A cube con? >>Yes. So of course, containers When they first came out well, for stateless environments and we knew that, you know, we've seen this before. You know, those of us that live through that wave of virtualization, you kind of have a first generation solution. You know what application, What environment and be used. But if you know, as we've seen the huge explosion of containers and kubernetes, there's gonna be a maturation of the stack. Storage is a critical component of that. So maybe upfront if you could bring us up to speed you're steeped in, you know, a long history in this space. You know, the challenges that you're hearing from customers. Uhm And where are we today in 2020 for this? >>Thanks to do the most basic caps out there, I think are just traditional. I'm databases. APS that have databases like a post press, a longstanding APS out there that have databases like DB two so traditional APs that are moving towards a more agile environment. That's where we've seen in fact, our collaboration with IBM and particularly the DB two team. And that's where we've seen is they've gone to a micro services container based architecture we've seen pull from the market place. Say, you know, in addition to inventing new Cloud native APS, we want our tried true and tested perhaps I mean such as DB two, such as MQ. We want those to have the benefits of a red hat, open shift, agile environment. And that's where the collaboration between our group and Sam's group comes in together is providing the storage and data services for those state labs. >>Great, Sam, you know I IBM. You've been working with the storage administrator for a long time. What challenges are they facing when we go to the new architectures is it's still the same people it might There be a different part of the organization where you need to start in delivering these solutions. >>It's a really, really good question, and it's interesting cause I do spend a lot of time with storage administrators and the people who are operating the I T infrastructure. And what you'll find is that the decision maker isn't the i t operations or storage operations. People These decisions about implementing kubernetes and moving applications to these new environments are actually being driven by the business lines, which is, I guess, not so different from any other major technology shift. And the storage administrators now are struggling to keep up. So the business lines would like to accelerate development. They want to move to a developed, once deploy anywhere model, and so they start moving down the path of kubernetes. In order to do that, they start, you know, leveraging middleware components that are containerized and easy to deploy. And then they're turning to the I T infrastructure teams and asking them to be able to support it. And when you talk to the storage administrators, they're trying to figure out how to do some of the basic things that are absolutely core to what they do, which is protecting the data in the event of a disaster or some kind of a cyber attack, being able to recover the data, being able to keep the data safe, ensuring governance and privacy of the data. These things are difficult in any environment, but now you're moving to a completely new world and the storage administrators have ah tough challenge out of them. And I think that's where IBM and Red Hat can really come together with all of our experience and are very broad portfolio with incredibly enterprise hardened storage capabilities to help them move from their more traditional infrastructure to a kubernetes environment. >>Maybe if you could bring us up to date when we look back, it, like open stack of red hat, had a few projects from an open source standpoint to help bolster the open source or storage world in the container world. We saw some of those get boarded over. There's new projects. There's been a little bit of argument as to the various different ways to do storage. And of course, we know storage has never been a single solution. There's lots of different ways to do things, but, you know, where are we with the options out there? What's that? What's what's the recommendation from Red Hat and IBM as to how we should look at that? >>I wanna Bridget question to Sam's earlier comments about the challenges facing the storage admin. So if we start with the word agility, I mean, what is agility mean for it in the data world. We're conscious for agility from an application development standpoint. But if you use the term, of course, we've been used to the term Dev ops. But if we use the term data ops, what does that mean? What does that mean to you in the past? For decades, when a developer or someone deploying production wanted to create new storage or data, resource is typically typically filed a ticket and waited. So in the agile world of open shift in kubernetes, it's everything is self service and on demand or what? What kind of constraints and demands that place on the storage and data infrastructure. So now I'll come back to your questions. Do so yes. At the time, that red hat was, um, very heavily into open stack, Red Hat acquired SEF well acquired think tank and and a majority of the SEF developers who are most active in the community. And now so and that became the de facto software defying storage for open stack. But actually for the last time that we spoke at Coop Con and the Rook project has become very popular there in the CN CF as away effectively to make software defined storage systems like SEF. Simple so effectively. The power of SEF, made simple by rook inside of the open shift operator frame where people want that power that SEF brings. But they want the simplicity of self service on demand. And that's kind of the diffusion. The coming together of traditional software defined storage with agility in a kubernetes world. So rook SEF, open shift container storage. >>Wonderful. And I wonder if we could take that a little bit further. A lot of the discussion these days and I hear it every time I talk to IBM and Red Hat is customers air using hybrid clouds. So obviously that has to have an impact on storage. You know, moving data is not easy. There's a little bit of nuance there. So, you know, how do we go from what you were just talking about into a hybrid environ? >>I guess I'll take that one to start and Brent, please feel free to chime in on it. So, um, first of all, from an IBM perspective, you really have to start at a little bit higher level and at the middleware layer. So IBM is bringing together all of our capabilities everything from analytics and AI. So application, development and, uh, in all of our middleware on and packaging them up in something that we call cloud packs, which are pre built. Catalogs have containerized capabilities that can be easily deployed. Ah, in any open shift environment, which allows customers to build applications that could be deployed both on premises and then within public cloud. So in a hybrid multi cloud environment, of course, when you build that sort of environment, you need a storage and data layer, which allows you to move those applications around freely. And that's where the IBM storage suite for cloud packs was. And we've actually taken the core capabilities of the IBM storage software to find storage portfolio. Um, which give you everything you need for high performance block storage, scale out, um, file storage and object storage. And then we've combined that with the capabilities, uh, that we were just discussing from Red Hat, which including a CS on SEF, which allow you, ah, customer to create a common, agile and automated storage environment both on premises and the cloud giving consistent deployment and the ability to orchestrate the data to where it's needed >>I'll just add on to that. I mean that, as Sam noted and is probably most of you are aware. Hybrid Cloud is at the heart of the IBM acquisition of Red Hat with red hat open shift. The stated intent of red hat open shift is to be to become the default operating environment for the hybrid cloud, so effectively bring your own cloud wherever you run. So that that is at the very heart of the synergy between our companies and made manifest by the very large portfolios of software, which would be at which have been, um, moved to many of which to run in containers and embodied inside of IBM cloud packs. So IBM cloud packs backed by red hat open shift on wherever you're running on premises and in a public cloud. And no, with this storage suite for cloud packs that Sam referred to also having a deterministic experience. That's one of the things as we work, for instance, deeply with the IBM DB two team. One of the things that was critical for them, as they couldn't have they couldn't have their customers when they run on AWS have a completely different experience than when they ran on premises, say, on VM, where our on premises on bare metal critical to the DB two team t give their customers deterministic behavior wherever they can. >>Right? So, Sam, I I think any of our audience that it followed this space have heard Red House story about open shift in how it lives across multiple cloud environments. I'm not sure that everybody is familiar with how much of IBM storage solutions today are really this software driven. So ah, And therefore, you know, if I think about IBM, it's like, okay, and by storage or yes, it can live in the IBM Cloud. But from what I'm hearing from Brent in you and from what I know from previous discussion, this is independent and can live in multiple clouds, leveraging this underlying technology and can leverage the capabilities from those public cloud offers. That right, Sam? >>Yeah, that's right. And you know, we have the most comprehensive portfolio of software defined storage in the industry. Maybe to some, it's ah, it's a well kept secret, but those that use it No, the breadth of the portfolio. We have everything from the highest performing scale out file System Teoh Object store that can scale into the exabytes. We have our block storage as well, which runs within the public clouds and can extend back to your private cloud environment. When we talk to customers about deploying storage for hybrid multi cloud in a container environment, we give them a lot of houses to get there. We give them the ability to leverage their existing san infrastructure through the CS I drivers container storage interface. So our whole, uh, you know, physical on Prem infrastructure supports CS I today and then all the software that runs on our arrays also supports running on top of the public clouds, giving customers then the ability to extend that existing san infrastructure into a cloud environment. And now, with storage suite for cloud packs a sprint described earlier, we give you the ability to build a really agile infrastructure, leveraging the capabilities from Red Hat to give you a fully extensible environment and a common way of managing and deploying both on Prem and in the cloud. So we give you a journey with our portfolio to get from your existing infrastructure. Today, you don't have to throw it out it started with that and build out an environment that goes both on Prem and in the cloud. >>Yeah, Brent, I'm glad that you started with database, cause it's not something that I think most people would think about. You know, in a kubernetes environment, you Do you have any customer examples you might be able to give? Maybe Anonymous? Of course. Just talking about how those mission critical applications can fit into the new modern architect. The >>big banks. I mean, just full stop the big banks. But what I'd add to that So that's kind of frequently they start because applications based on structured data remain at the heart of a lot of enterprises. But I would say workload, category number two, our is all things machine Learning Analytics ai and we're seeing an explosion of adoption within the open shift. And, of course, cloud pack. IBM Cloud private for data, is a key market participant in that machine learning analytic space. So an explosion of the usage of of open shift for those types of workloads I was gonna touch just briefly on an example, going back to our kind of data data pipeline and how it started with databases, but it just it explodes. For instance, data pipeline automation, where you have data coming into your APS that are kubernetes based that our open shift based well, maybe we'll end up inside of Watson Studio inside of IBM ah, cloud pack for data. But along the way, there are a variety of transformations that need to occur. Let's say that you're a big bank. You need Teoh effectively as it comes in. You need to be able to run a CRC to ensure to a test that when when you modify the data, for instance, in a real time processing pipeline that when you pass it on to the next stage that you can guarantee well that you can attest that there's been no tampering of the data. So that's an illustration where it began, very with the basics of basic applications running with structured data with databases. Where we're seeing the state of the industry today is tremendous use of these kubernetes and open shift based architectures for machine learning. Analytics made more simple by data pay data pipeline automation through things like open shift container storage through things like open shift server lis or you have scale double functions and what not? So yeah, it began there. But boy, I tell you what. It's exploded since then. >>Yeah, great to hear not only traditional applications, but as you said so, so much interest. And the need for those new analytics use cases s so it's absolutely that's where it's going. Someone. One other piece of the storage story, of course, is not just that we have state full usage, but talk about data protection, if you could, on how you know things that I think of traditionally my backup restore and like, how does that fit into the whole discussion we've been having? >>You know, when you talk to customers, it's one of the biggest challenges they have honestly. And moving to containers is how do I get the same level of data protection that I use today? Ah, the environments are in many cases, more complex from a data and storage perspective. You want Teoh be able to take application consistent copies of your data that could be recovered quickly, Uh, and in some cases even reused. You can reuse the copies, for they have task for application migration. There's there's lots of or for actually AI or analytics. There's lots of use cases for the data, but a lot of the tools and AP eyes are still still very new in this space. IBM has made, uh, prior, uh, doing data protection for containers. Ah, top priority for our spectrum protect suite. And we provide the capabilities to do application aware snapshots of your storage environment so that a kubernetes developer can actually build in the resiliency they need. As they build applications in a storage administrator can get a pane of glass Ah, and visibility into all of the data and ensure that it's all being protected appropriately and provide things like S L A. So I think it's about, you know, the fact that the early days of communities tended to be stateless. Now that people are moving some of the more mission critical workloads, the data protection becomes just just critical as anything else you do in the environment. So the tools have to catch up. So that's a top priority of ours. And we provide a lot of those capabilities today and you'll see if you watch what we do with our spectrum. Protect suite will continue to provide the capabilities that our customers need to move their mission. Critical applications to a kubernetes environment. >>Alright And Brent? One other question. Looking forward a little bit. We've been talking for the last couple of years about how server lists can plug into this. Ah, higher kubernetes ecosystem. The K Native project is one that I, IBM and Red Hat has been involved with. So for open shift and server lis with I'm sure you're leveraging k native. What is the update? That >>the update is effectively adoption inside of a lot of cases like the big banks, but also other in the talk, uh, the largest companies in other industries as well. So if you take the words event driven architecture, many of them are coming to us with that's kind of top of mind of them is the need to say, you know, I need to ensure that when data first hits my environment, I can't wait. I can't wait for a scheduled batch job to come along and process that data and maybe run an inference. I mean, the classic cases you're ingesting a chest X ray, and you need to immediately run that against an inference model to determine if the patient has pneumonia or code 19 and then kick off another serverless function to anonymous data. Just send back in to retrain your model. So the need. And so you mentioned serverless. And of course, people say, Well, I could I could handle that just by really smart batch jobs, but kind of one of the other parts of server less that sometimes people forget but smart companies are aware of is that server lists is inherently scalable, so zero to end scalability. So as data is coming in, hitting your Kafka bus, hitting your object store, hitting your database and that if you picked up the the community project to be easy, Um, where something hits your relational database and I can automatically trigger an event onto the Kafka bus so that your entire our architecture becomes event >>driven. All right. Well, Sam, let me give you the funding. Let me let you have the final word. Excuse me on the IBM in this space and what you want them to have his takeaways from Cube con 2020 Europe. >>I'm actually gonna talk to I think, the storage administrators, if that's OK, because if you're not involved right now in the kubernetes projects that are happening within your enterprise, uh, they are happening and there will be new challenges. You've got a lot of investments you've made in your existing storage infrastructure. We had IBM and Red Hat can help you take advantage of the value of your existing infrastructure. Uh, the capabilities, the resiliency, the security of built into it with the years. And we can help you move forward into a hybrid, multi cloud environment built on containers. We've got the experience and the capabilities between Red Hat and IBM to help you be successful because it's still a lot of challenges there. But But our experience can help you implement that with the greatest success. Appreciate it. >>Alright, Sam and Brent, Thank you so much for joining. It's been excellent to be able to watch the maturation in this space of the last couple of years. >>Thank you. >>Alright, we'll be back with lots more coverage from Cube Con Cloud, native con Europe 2020 the virtual event. I'm stew Minimum And thank you for watching the Cube. Yeah, yeah, yeah, yeah

Published Date : Aug 18 2020

SUMMARY :

It's the Cube with coverage of Coop Con Maybe if we could start with you as the tee up, you know, Both Red Hat and IBM have the context of kubernetes and and also with open shift, and we knew that, you know, we've seen this before. Say, you know, in addition to inventing it's still the same people it might There be a different part of the organization where you need to start In order to do that, they start, you know, leveraging middleware components help bolster the open source or storage world in the container world. What kind of constraints and demands that place on the storage and data infrastructure. A lot of the discussion these deployment and the ability to orchestrate the data to where it's needed So that that is at the very heart of the synergy between our companies and But from what I'm hearing from Brent in you and from what I leveraging the capabilities from Red Hat to give you a fully extensible environment Yeah, Brent, I'm glad that you started with database, cause it's not something that So an explosion of the usage of of open shift for those types Yeah, great to hear not only traditional applications, but as you said so, so much interest. but a lot of the tools and AP eyes are still still very new in this space. for the last couple of years about how server lists can plug into this. of them is the need to say, you know, I need to ensure that when in this space and what you want them to have his takeaways from Cube con 2020 Europe. Hat and IBM to help you be successful because it's still a lot Alright, Sam and Brent, Thank you so much for joining. 2020 the virtual event.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

Sam WarnerPERSON

0.99+

BrentPERSON

0.99+

BrennanPERSON

0.99+

SamPERSON

0.99+

Red HatORGANIZATION

0.99+

twoQUANTITY

0.99+

AWSORGANIZATION

0.99+

Sam WernerPERSON

0.99+

OneQUANTITY

0.99+

2020DATE

0.99+

Red HatORGANIZATION

0.99+

Brent ComptonPERSON

0.99+

CubeORGANIZATION

0.99+

todayDATE

0.99+

oneQUANTITY

0.99+

red hatORGANIZATION

0.99+

BothQUANTITY

0.98+

TodayDATE

0.98+

Coop ConORGANIZATION

0.98+

both companiesQUANTITY

0.98+

first generationQUANTITY

0.98+

this yearDATE

0.98+

KubeConEVENT

0.98+

this yearDATE

0.97+

red hatTITLE

0.97+

firstQUANTITY

0.96+

bothQUANTITY

0.96+

KafkaTITLE

0.96+

BridgetPERSON

0.96+

FirstQUANTITY

0.96+

single solutionQUANTITY

0.96+

SEFTITLE

0.95+

red HatORGANIZATION

0.95+

Stew MinimumPERSON

0.95+

CS ITITLE

0.94+

Simon Kofkin-Hansen, IBM | VeeamON 2020


 

>> From around the globe, it's theCUBE with digital coverage of VeeamON 2020 brought to you by Veeam. >> Welcome back, I'm Stu Miniman, and this is theCUBE's coverage of VeeamON 2020 online. Of course, instead of all gathering together in Las Vegas, we were getting to talk to participants of the community where they are around the globe. Happy to welcome to the program, first time guest on the program, he's part of the opening keynote I'm sure most of you saw, Simon Kofkin-Hansen, chief technology officer for VMware Solutions inside of IBM. Simon, thanks so much for joining us. >> Thank you Stu, it's a pleasure to be here. >> All right, so you know, obviously we know IBM quite well. We at theCUBE at you know, the virtual events, both RedHat Summit and IBM Think not too long in the past there. Talking a lot about you know, the open hybrid cloud many of the messages that I hear from Veeam remind me of what I heard at their environments you know, it, multicloud environment, we need flexibility in what we're doing, we, you know, need to of course you know, data is such an important piece of what's going on. Maybe before we get into it too much, give us a little bit about you know, your role there, where you fit into that whole discussion of what IBM is with Cloud. >> So Stu, yeah, I'm the chief technology officer of IBM, of Veeam solutions on the IBM cloud. Primarily involved and helped create the partnership that exists between IBM and VMware today. Basically, I'm providing automated solutions for our clients. Automated, secure solutions for our clients around the VMware and the IBM Cloud infrastructure space. >> Yeah, well, Simon, it's interesting stuff, you've got some good history there, maybe you might remind our audience you know, I remember at VMWorld, before there was a big partnership, that VMware made with a certain public cloud provider that gets talked about a lot, IBM was the first and if I saw you know, correctly, I'd love for you to be able to provide the data behind it. There are more VMware customers on the IBM Cloud than any other cloud is what I believe is the data I saw, I think. So bring us in little bit more, explain that relationship. >> So yes, we were, as IBM, beginning of all of this, I mean VMware and IBM have had a long relationship. And in fact, IBM manages over 850,000 predominantly VMware workloads on-prems, and have done for the last 10+ years. But in the latest iteration of this partnership, we brought together our automation and our codified experience from dealing with these, our client accounts around the world and brought that expertise along with VMware's product side to align this automated stdc stack on cloud platforms. And first to market with that automated stdc stack called VMware Cloud Foundation. First to market out and we've had a great ongoing relationship since then. It's really resonated with many of our clients and our enterprise clients out there. >> All right well Simon, one of the most important pieces of that, you know, VMware stdc message is that I have VMware, I know how, I manage that environment, and it's got a really robust ecosystem, so, of course Veeam started exclusively in the VMware environments, now lives across many environments, but you know the comment I've made on some of these interviews for VeeamON is, wherever the VMware solution and VMware Cloud goes, Veeam could just go along for the ride, really, if it were. There's obviously some integration work and testing, but help dig into a little bit, what that means for you know, solutions like Veeam tying into what VMware is doing, and what VMware is doing in the IBM Cloud. >> Well particularly at the beginning of this relationship, part of this partnership with VMware was its rich partner ecosystem. And I was given the remit and had the luxury to choose the best of the best products that's out there. Which wasn't necessarily IBM's products in this particular space. Obviously we chose Veeam for backup. I mean Veeam's reputation out there's the backup, it's known as the market leader for the backup of its actual workloads. So it was very important for us to embrace that ecosystem. And it's been a great partnership from the very, very beginning. Getting the backup products out into our platform and as we've done more recently, bringing in the new enhancements like Veeam Cloud Connect to deal with data replication and more use cases around migration and the movement of data in a hybrid cloud sense. And Veeam has been right there with us every step of the way. >> Yeah, so Simon, you're a CTO, so bring us in a little bit architecturally because when I think about hybrid cloud or even you know having to move my data between you know different data centers, you know there are, you know, the physics challenges, and you know sometimes I can, you know, get closer, I can (microphone cuts out) through there, and then there's the financial considerations. So give us to how we have to think about that, what is data movement in 2020, you know, what considerations do we have to have here, and how does IBM maybe differentiate a little bit from some others? >> So I'll answer your first question, I'll answer some of the last questions first. What does data movement in 2020 look like? Well, to be perfectly honest, Stu, we never imagined what would happen this year, but data mobility and the movement of data in a hybrid scenario has never been more acute or prevalent because of the stage that the world is currently in and the conditions that we're living in today. Being able to use familiar based tooling that represents what is used in an on-premises state, over in the cloud, enabling Veeam, or people who have existing investments in Veeam, to use that tooling for multiple different use cases. Not just backup, but that actual data replication functionality has become ever more prevalent in these cases. I was saying similar messages back in 2019 and 2018 and as long as back in 2010. I feel as though, I look at that, it's been almost a decade now, talking about the need or the capabilities of hybrid cloud and this movement of data. But I've absolutely seen an absolute increase in it over the last few years and particularly in 2020 in this current situation. The major difference from an IMB perspective is I would say, is our openness, and our, how we're dealing with the openness in the community, and our commitment to open source. Our flexibility, our security, and the way we actually deal with the enterprise. And one of the major differentiations is the security to the core. Actually building up the security, looking at the secure elements, making sure their data is safe from tampering, it's encrypted both in transit and at rest. And these are many of the factors that our enterprise clients actually demand of us and particularly when we look at the regulated industries with their heavy focus on the financial services sector. And Veeam, with its capabilities and its ability to both do the backup and migration functionality, sort of clients are expecting a two-for-one deal, in these days when they're trying to cut costs, and get out of their own data centers in an effort to cut their costs. >> Excellent. Well, Simon, you know you laid out really the imperative for enterprises, you know today and how they're dealing with that, bring us in as to what differentiates the IBM-Veeam relationship versus just IBM is open and flexible, so there are a lot of options. You know what particularly is there about Veeam that makes that relationship special? >> Well, I think it all down to the partnership and the deep willingness to work together. The research that we're doing in the products, yeah? Looking at ways that we can take Veeam beyond the VMware space and into bare metals and containers. But maintaining that level of security and flexibility that clients demand. I mean, many clients, if they've invested in a particular technology to do their backups, back up and DR, because of the heavy data requirements are still one of the most important if not the most important use case that many cloud users or many of our clients actually go for. So having that partnership with Veeam, in not only dealing with the traditional base, which is the VMware backups, but really pushing the boundaries and looking how we can extend that into migrations, into containers, and bare metal, by still keeping that level of security and flexibility. It's a difficult balance. Sometimes to make it more secure, you have to make things less flexible. And vise-versa, having things more flexible, they become less secure. So being willing to work us and actually define that difficult balance, and still provide the level of the user experience and the level of functionality that our clients demand, and keeping both client sets happy, both IBM and Veeam. It's challenging at times, but I guess it's what makes the job interesting and exciting. >> Yeah Simon, I'm actually glad you mentioned containers as one of the you know, modernization efforts going on there. Of course from Veeam's standpoint, when vSphere 7 rolls out, that they are being supported in you know one of the first work in that. I'd love to hear your viewpoint, what you're hearing from customers, how you expect, as a VMware partner for cloud, that movement of VMs and containers and how they're going together. What should we be looking for as that kind of matures and progresses? >> So I would absolutely watch this space. Particularly as we move into this. Containers and VMs living very much side-by-side. With VMware's announcements around Project Pacific and tanzu, it's very interesting. It's certainly a furor around the market. And we as IBM are very closely working with them with our acquisition last year of RedHat and its containerization platform. All while maintaining our ability in the OpenShift community around Kubernetes. So Stu, obviously I'm privy to a lot more information which I really can't really say and dig into too much detail around this particular angle but just to say that, watch this space. There's a lot going to happen. You're going to see a lot of announcements in the back half of 2020 and in the first few halves of 2021, particularly around the carburetions between containers and VMs and seeing how the different offerings from the different companies shape-- (mic cuts out) interesting times ahead. >> Yeah, absolutely. Simon, maybe you're right, don't want to get you in trouble as looking too much into the future, but maybe bring us into, I'm sure you're having lots of conversations with customers, what's their mindset, you talked about, you know, there's bare metals, virtualization, containers, you know application modernization, I've always said the long haul of the dent in any transformation and modernization (mic stutters) doing, so you know, 'cause some of the challenges and opportunities that you're hearing from customers that you and your partner are helping to solve? >> So some of the challenges around this containerization is containerization (mic stutters) is taking a lot longer and its taking a lot more time than we originally anticipated or expected. So the realization is actually hitting that VMware is going to be around for a while. I mean, the idea that people are thinking that they're just going to transform their applications, or all their VMs over a six or 12-month period, is just not reality. So we're living in this hybrid platform way, where you have VMware, you have virtual machines, and containers coexisting. Certain parts of the application, namely the, if I take the three-tier web app as an example, consisting of a http server, an application server, and a database. When you containerize that, or modernize that, it's very easy to modernize the http server, which turns into the ingress/egress servers on the container. It's very easy to modernize the application server, which is fairly static and you can just put a container. But as we know, Stu, data is sticky. So what many enterprises the data migration, or the way that the database is transformed, is the thing that takes the longest. So we're seeing out there in the enterprises people who are running their apps both with the ingress/egress service, the application server container containerized, but the database still living on a virtual machine, for a extended period of time. And until that made the final jump or chone their data service, they make that move. I do see this being, I personally, I honestly don't believe in my lifetime VMs will actually disappear. Because we're seeing that in some cases it's actually too costly for organizations to actually transform their applications or there's no real business case. It works perfectly well with the existing process. There's no need to modernize. But they're looking at ways and what parts of the architecture can be modernized, and containers are definitely the future for all the attributes that we know and love. But there is going to be this hybrid world. So having tools and partners like Veeam, who are willing to cross the ecosphere of the different platforms, is critical for our clients today and critical for partnerships that we have. Like the one we have with Veeam. >> All right well Simon, it goes back to one of those IT maxims, you know, is IT always additive. We almost never really get rid of anything, we just keep adding to it and changing it and as you said, data is that critical component and I think you highlighted nicely how you know, Veeam fits in you know, very much for that story. So Simon, thank you so much for joining us, pleasure having you on the program, glad to have you in theCUBE alumni ranks at this point. >> Thank you Stu, and thank you, it was a pleasure. Take care. >> All right stay tuned for lots more coverage from VeeamON 2020 online, I'm Stu Miniman, and thanks for watching theCUBE. (calm music)

Published Date : Jun 17 2020

SUMMARY :

From around the globe, it's theCUBE of the community where Thank you Stu, it's many of the messages around the VMware and the IBM is the data I saw, I think. and have done for the last 10+ years. of the most important pieces and the movement of data and you know sometimes I can, you know, and the way we actually the imperative for enterprises, and still provide the level as one of the you know, and in the first few halves I've always said the long haul of the dent and containers are definitely the future and as you said, data is Thank you Stu, and thank I'm Stu Miniman, and thanks

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

SimonPERSON

0.99+

2019DATE

0.99+

2020DATE

0.99+

2010DATE

0.99+

VMwareORGANIZATION

0.99+

2018DATE

0.99+

VeeamORGANIZATION

0.99+

Las VegasLOCATION

0.99+

first questionQUANTITY

0.99+

StuPERSON

0.99+

VMware Cloud FoundationORGANIZATION

0.99+

bothQUANTITY

0.99+

Simon Kofkin-HansenPERSON

0.99+

three-tierQUANTITY

0.99+

12-monthQUANTITY

0.99+

VMware SolutionsORGANIZATION

0.99+

VMWorldORGANIZATION

0.99+

firstQUANTITY

0.99+

Stu MinimanPERSON

0.99+

oneQUANTITY

0.99+

last yearDATE

0.98+

this yearDATE

0.98+

over 850,000QUANTITY

0.98+

FirstQUANTITY

0.98+

first timeQUANTITY

0.98+

vSphere 7TITLE

0.98+

2021DATE

0.97+

RedHat SummitEVENT

0.97+

todayDATE

0.96+

VMware CloudTITLE

0.94+

theCUBEORGANIZATION

0.93+

Michelle Peluso, IBM | IBM Think 2020 Afterthoughts


 

>> Narrator: From theCUBE's studios in Palo Alto in Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi and welcome to a special CUBE Conversation, I'm Stu Miniman and happy to welcome back to the program, Michelle Peluso. She is the Senior Vice President of Digital Sales as well as the Chief Marketing Officer for IBM. Michelle, thanks so much for joining us. >> Hey Stu, great to see you again. Boy we had fun at Think, thank you so much for your help. >> Yeah, well Michelle, I'm really excited to, you know, get a little bit of the inside what happened from your end. Got to talk to you, you know, at the show, instead of 20,000 people, you know, dealing with San Francisco and Moscone and everything there. You had, if I read right, 100,000 people at least registered for the digital event, you know, bring us inside a little bit the control center, what was it like being part of that event, your team, of course, all distributed, and you know, anything surprise you during that event, >> Well it was nerve wracking. (laughing) Look, what an exciting thing, and kudos to the team for so much innovation. I mean, we had in 60 days to build a platform. Of course, using IBM technology, lots of media, the IBM Cloud, integrate some third parties, build a reporting suite. We make all of the content because in this world, of course, there are different things front and center on our clients minds, and not only that, but we had to film it all in remote locations in peoples homes, and make it all work, and so the team did an extraordinary job, and on the really positive side, you mentioned we had over 100,000 clients and business partners register, but it was still even more than three times any audience we've ever had come to our physical events at Think. So it was really extraordinary, and now of course, we're following up. We have a treasure trove of information about what clients are interested in, and what our business partners are interested in. We have a great opportunity to leverage the on demand content to continue the conversation. >> It's great. It's really interesting to time shift things instead of okay I'm going to dedicate however many days to do the event. Now, I love that mix of you can watch it live, you can watch it on demand, you can follow up. You know, how are you any trends that you're seeing as to where people are going, or how you're making sure that there are people to support and engage, not just say, you know, hey, here's a lot of content, you know, go watch our breakouts, go watch the cube stuff. >> Yeah, yeah. Well this is a huge thing, right? So both in terms of what we actually had to say, we really took our time to say, we interviewed clients, we look at search, you know, what's happening, what are our clients searching for, and PS data. So our big seven conversations, things like supply chain resiliency, things like engaging customers virtually, things like virtual work and return to work. We knew that those were really pertinent conversations, and now we have, you know, a couple things happening. One, all of our sellers are reaching out to people. Their clients, their business partners to talk about what they liked, what they didn't like, where they had to go deep in that conversation to progress, that conversation. For those that maybe registered and didn't attend, we're sending them on demand sessions based on what they said they were interested in, so they can consume at their own pace, and for many, we know that there are real opportunities that have emerged. So real business opportunity if they want IBM's help with, and there, of course, we're accelerating the conversations with those clients. >> Yeah, Michelle, your team actually sent over a few questions that some of the audience gave, and one of them talked about that there is, you know, no shortage of data out there. But what they put in the question is often there's not enough people that can curate or help you sort through. So you know, I think with the digital experience, right? How are you helping people curate the information? How are you making sure that people get from, you know, the data down that path towards you know, knowledge and you know, turn data into results eventually. >> Sure, well you have to ask good questions, you know? There's got to be great data standards, and governance, and you have to ask good questions, and that's really the simple thing. And you know, for us, we can ask some very simple questions. What are the signals we have on some clients that tend to think that they're interested in going deeper? You know, the clients where, you know, we had maybe 20, 30, 40 attendees. We had some clients attend over 1,000 sessions, and you know, really, maybe they're majoring on AI, or maybe they majored on cloud, and so how do we pair up our our sellers, our client execs with those clients to talk about taking that conversation to the next phase, right? To the next opportunity. Maybe doing demos, maybe doing a virtual garage, et cetera. Secondly, we had a lot of clients actually sign up for things like virtual garages, throughout Think there were these calls to action, and so we had many clients say, "Hey, I want to start "a virtual garage. I'll take advantage of that to our "free consulting." So for them, we know that we've got to go down a very specific path very quickly. And then there are other clients where the data said you know, there's a late, maybe a little bit of interest, but we have to nurture that they're not ready for the next step. So I think it always starts with just asking great questions. We're a very data driven organization in IBM marketing. We're really passionate about what we can learn. And, you know, beyond, of course, the data and things like Think we're passionate about things like Net Promoter Score. We get a million data points every year from our clients about how they're feeling about IBM. So all this enriches our ability to make sense of this world for our clients. >> Yeah, so Michelle, what one of the things I found really interesting is we've had online events for quite a long time now. You know, we've worked with IBM on that hybrid model, in physical and online events before, but there's a real thirst for you know, what are best practices now? What can you learn? So, you know, when your peers are reaching out for you, and saying, "Hey, Michelle, you did this." Other than not trying to do it all in from you know, from start to finish in six weeks, what other tips would you give, or lessons learned that you have? >> Well, I think, first of all, the platform makes a huge decision, right? We really have to have a flawless technical experience. And so we were very lucky to have Watson Media and hosting on the IBM Cloud. But we integrated some really good third party tooling before you know, analytics, real time analytics, and things like chat, et cetera. Secondly, I think you really have to think about how to make this engaging for the audience. It can't feel like a streaming event. And so for us that meant things like chat of course, then things like moderated live expert sessions mean things like going off platforms, Reddit and hosting sessions on Reddit, things like one on one client executive briefing room. So the second part is really about engaging the audience, and making sure it doesn't just feel like streaming third, shorter is better. You know, people's attention spans are small and no one can sit for five or six hours in front of a computer and consume. So we really cut down and tightened up our key messages. That I think was critical. I think the mix of live and on demand was really powerful and something to think about, but the last thing I would say is that how you progress and follow up on that interest, we all know how to do it in the event. You know, you sit down with your client, and you just watch today in sessions, you have a beer, you're probably watching some 80's band play, and you're talking about what you like, what you think what's exciting to you. What are your challenges? In a digital world that's harder for our client reps and our sellers, and so really thinking of the onset, and how do we make sure we create the space for those conversations after the event is critical. >> Great. Well, Michelle, so where do you and the IBM team take all those learnings? You know, engagement absolutely critical as you talked? What What should we expect to be seeing from IBM through the rest of 2020 when it comes to digital apps? >> I think we'll do things really differently from here on out. I mean, I think that, you know, of course we'll go back to live physical experiences at some point when it's safe for all of us. It is in certain parts of the world already, but we have a series of Think summits coming up all around the world, that idea that you can really engage bigger audiences, we can give them time to make the most of this. They don't have to spend money flying somewhere to really go deep. That's exciting to me. I think we've learned so much. So stay tuned for the Think regional summits happening all around the world, and and I hope we continue to innovate and bring the best of physical and digital into a new brand of experiences and events. >> Yeah, it's really fascinating stuff, Michelle, right? Not only do you get to reach a global audience, but you have the opportunity to personalize things a little bit more. >> Yeah. >> So, thank you so much for joining us. Definitely... >> It's always great to see you. >> Hope to see more and more on the summit's going forward. >> Terrific, always great to see you, and always thank you for your partnership. >> All right. Thank you for watching. I'm Stu Miniman, and as always, thank you for watching theCUBE. (calming music)

Published Date : Jun 3 2020

SUMMARY :

leaders all around the world, I'm Stu Miniman and happy to Hey Stu, great to see you again. and you know, anything and kudos to the team and engage, not just say, you know, hey, and now we have, you know, that path towards you know, You know, the clients where, you know, and saying, "Hey, Michelle, you did this." and you just watch today in so where do you and the IBM I mean, I think that, you know, but you have the opportunity So, thank you so much for joining us. to see you. and more on the summit's going forward. and always thank you for your partnership. thank you for watching theCUBE.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

Michelle PelusoPERSON

0.99+

MichellePERSON

0.99+

Palo AltoLOCATION

0.99+

fiveQUANTITY

0.99+

StuPERSON

0.99+

20QUANTITY

0.99+

100,000 peopleQUANTITY

0.99+

30QUANTITY

0.99+

second partQUANTITY

0.99+

oneQUANTITY

0.99+

60 daysQUANTITY

0.99+

20,000 peopleQUANTITY

0.99+

six hoursQUANTITY

0.99+

Watson MediaORGANIZATION

0.99+

over 100,000 clientsQUANTITY

0.99+

todayDATE

0.99+

six weeksQUANTITY

0.98+

2020DATE

0.98+

bothQUANTITY

0.98+

80DATE

0.98+

BostonLOCATION

0.98+

over 1,000 sessionsQUANTITY

0.98+

OneQUANTITY

0.96+

SecondlyQUANTITY

0.96+

thirdQUANTITY

0.96+

RedditORGANIZATION

0.95+

ThinkORGANIZATION

0.93+

San FranciscoLOCATION

0.91+

CUBE ConversationEVENT

0.89+

40 attendeesQUANTITY

0.88+

Think regionalEVENT

0.85+

more than three timesQUANTITY

0.82+

a million data pointsQUANTITY

0.78+

MosconeLOCATION

0.73+

firstQUANTITY

0.71+

themQUANTITY

0.68+

seven conversationsQUANTITY

0.66+

theCUBEORGANIZATION

0.63+

Think 2020EVENT

0.59+

everyQUANTITY

0.55+

CloudTITLE

0.52+

ThinkEVENT

0.51+

IBM DataOps in Action Panel | IBM DataOps 2020


 

from the cube studios in Palo Alto in Boston connecting with thought leaders all around the world this is a cube conversation hi buddy welcome to this special noob digital event where we're focusing in on data ops data ops in Acton with generous support from friends at IBM let me set up the situation here there's a real problem going on in the industry and that's that people are not getting the most out of their data data is plentiful but insights perhaps aren't what's the reason for that well it's really a pretty complicated situation for a lot of organizations there's data silos there's challenges with skill sets and lack of skills there's tons of tools out there sort of a tools brief the data pipeline is not automated the business lines oftentimes don't feel as though they own the data so that creates some real concerns around data quality and a lot of finger-point quality the opportunity here is to really operationalize the data pipeline and infuse AI into that equation and really attack their cost-cutting and revenue generation opportunities that are there in front of you think about this virtually every application this decade is going to be infused with AI if it's not it's not going to be competitive and so we have organized a panel of great practitioners to really dig in to these issues first I want to introduce Victoria Stassi with who's an industry expert in a top at Northwestern you two'll very great to see you again thanks for coming on excellent nice to see you as well and Caitlin Alfre is the director of AI a vai accelerator and also part of the peak data officers organization at IBM who has actually eaten some of it his own practice what a creep let me say it that way Caitlin great to see you again and Steve Lewis good to see you again see vice president director of management associated a bank and Thompson thanks for coming on thanks Dave make speaker alright guys so you heard my authority with in terms of operationalizing getting the most insight hey data is wonderful insights aren't but getting insight in real time is critical in this decade each of you is a sense as to where you are on that journey or Victoria your taste because you're brand new to Northwestern Mutual but you have a lot of deep expertise in in health care and manufacturing financial services but where you see just the general industry climate and we'll talk about the journeys that you are on both personally and professionally so it's all fair sure I think right now right again just me going is you need to have speech insight right so as I experienced going through many organizations are all facing the same challenges today and a lot of those pounds is hard where do my to live is my data trust meaning has a bank curated has been Clinton's visit qualified has a big a lot of that is ready what we see often happen is businesses right they know their KPIs they know their business metrics but they can't find where that data Linda Barragan asked there's abundant data disparity all over the place but it is replicated because it's not well managed it's a lot of what governance in the platform of pools that governance to speak right offer fact it organizations pay is just that piece of it I can tell you where data is I can tell you what's trusted that when you can quickly access information and bring back answers to business questions that is one answer not many answers leaving the business to question what's the right path right which is the correct answer which which way do I go at the executive level that's the biggest challenge where we want the industry to go moving forward right is one breaking that down along that information to be published quickly and to an emailing data virtualization a lot of what you see today is most businesses right it takes time to build out large warehouses at an enterprise level we need to pivot quicker so a lot of what businesses are doing is we're leaning them towards taking advantage of data virtualization allowing them to connect to these data sources right to bring that information back quickly so they don't have to replicate that information across different systems or different applications right and then to be able to provide that those answers back quickly also allowing for seamless access to from the analysts that are running running full speed right try and find the answers as quickly as they find great okay and I want to get into that sort of how news Steve let me go to you one of the things that we talked about earlier was just infusing this this mindset of a data cult and thinking about data as a service so talk a little bit about how you got started what was the starting NICUs through that sure I think the biggest thing for us there is to change that mindset from data being just for reporting or things that have happened in the past to do some insights on us and some data that already existed well we've tried to shift the mentality there is to start to use data and use that into our actual applications so that we're providing those insight in real time through the applications as they're consumed helping with customer experience helping with our personalization and an optimization of our application the way we've started down that path or kind of the journey that we're still on was to get the foundation laid birch so part of that has been making sure we have access to all that data whether it's through virtualization like vic talked about or whether it's through having more of the the data selected in a data like that that where we have all of that foundational data available as opposed to waiting for people to ask for it that's been the biggest culture shift for us is having that availability of data to be ready to be able to provide those insights as opposed to having to make the businesses or the application or asked for that day Oh Kailyn when I first met into pulp andari the idea wobble he paid up there yeah I was asking him okay where does a what's the role of that at CBO and and he mentioned a number of things but two of the things that stood out is you got to understand how data affect the monetization of your company that doesn't mean you know selling the data what role does it play and help cut cost or ink revenue or productivity or no customer service etc the other thing he said was you've got a align with the lines of piss a little sounded good and this is several years ago and IBM took it upon itself Greek its own champagne I was gonna say you know dogfooding whatever but it's not easy just flip a switch and an infuse a I and automate the data pipeline you guys had to go you know some real of pain to get there and you did you were early on you took some arrows and now you're helping your customers better on thin debt but talk about some of the use cases that where you guys have applied this obviously the biggest organization you know one of the biggest in the world the real challenge is they're sure I'm happy today you know we've been on this journey for about four years now so we stood up our first book to get office 2016 and you're right it was all about getting what data strategy offered and executed internally and we want to be very transparent because as you've mentioned you know a lot of challenges possible think differently about the value and so as we wrote that data strategy at that time about coming to enterprise and then we quickly of pivoted to see the real opportunity and value of infusing AI across all of our needs were close to your question on a couple of specific use cases I'd say you know we invested that time getting that platform built and implemented and then we were able to take advantage of that one particular example that I've been really excited about I have a practitioner on my team who's a supply chain expert and a couple of years ago he started building out supply chain solution so that we can better mitigate our risk in the event of a natural disaster like the earthquake hurricane anywhere around the world and be cuz we invest at the time and getting the date of pipelines right getting that all of that were created and cleaned and the quality of it we were able to recently in recent weeks add the really critical Kovach 19 data and deliver that out to our employees internally for their preparation purposes make that available to our nonprofit partners and now we're starting to see our first customers take advantage too with the health and well-being of their employees mine so that's you know an example I think where and I'm seeing a lot of you know my clients I work with they invest in the data and AI readiness and then they're able to take advantage of all of that work work very quickly in an agile fashion just spin up those out well I think one of the keys there who Kaelin is that you know we can talk about that in a covet 19 contact but it's that's gonna carry through that that notion of of business resiliency is it's gonna live on you know in this post pivot world isn't it absolutely I think for all of us the importance of investing in the business continuity and resiliency type work so that we know what to do in the event of either natural disaster or something beyond you know it'll be grounded in that and I think it'll only become more important for us to be able to act quickly and so the investment in those platforms and approach that we're taking and you know I see many of us taking will really be grounded in that resiliency so Vic and Steve I want to dig into this a little bit because you know we use this concept of data op we're stealing from DevOps and there are similarities but there are also differences now let's talk about the data pipeline if you think about the data pipeline as a sort of quasi linear process where you're investing data and you might be using you know tools but whether it's Kafka or you know we have a favorite who will you have and then you're transforming that that data and then you got a you know discovery you got to do some some exploration you got to figure out your metadata catalog and then you're trying to analyze that data to get some insights and then you ultimately you want to operationalize it so you know and and you could come up with your own data pipeline but generally that sort of concept is is I think well accepted there's different roles and unlike DevOps where it might be the same developer who's actually implementing security policies picking it the operations in in data ops there might be different roles and fact very often are there's data science there's may be an IT role there's data engineering there's analysts etc so Vic I wonder if you could you could talk about the challenges in in managing and automating that data pipeline applying data ops and how practitioners can overcome them yeah I would say a perfect example would be a client that I was just recently working for where we actually took a team and we built up a team using agile methodologies that framework right we're rapidly ingesting data and then proving out data's fit for purpose right so often now we talk a lot about big data and that is really where a lot of industries are going they're trying to add an enrichment to their own data sources so what they're doing is they're purchasing these third-party data sets so in doing so right you make that initial purchase but what many companies are doing today is they have no real way to vet that so they'll purchase the information they aren't going to vet it upfront they're going to bring it into an environment there it's going to take them time to understand if the data is of quality or not and by the time they do typically the sales gone and done and they're not going to ask for anything back but we were able to do it the most recent claim was use an instructure data source right bring that and ingest that with modelers using this agile team right and within two weeks we were able to bring the data in from the third-party vendor what we considered rapid prototyping right be able to profile the data understand if the data is of quality or not and then quickly figure out that you know what the data's not so in doing that we were able to then contact the vendor back tell them you know it sorry the data set up to snuff we'd like our money back we're not gonna go forward with it that's enabling businesses to be smarter with what they're doing with 30 new purchases today as many businesses right now um as much as they want to rely on their own data right they actually want to rely on cross the data from third-party sources and that's really what data Ops is allowing us to do it's allowing us to think at a broader a higher level right what to bring the information what structures can we store them in that they don't necessarily have to be modeled because a modeler is great right but if we have to take time to model all the information before we even know we want to use it that's gonna slow the process now and that's slowing the business down the business is looking for us to speed up all of our processes a lot of what we heard in the past raised that IP tends to slow us down and that's where we're trying to change that perception in the industry is no we're actually here to speed you up we have all the tools and technologies to do so and they're only getting better I would say also on data scientists right that's another piece of the pie for us if we can bring the information in and we can quickly catalog it in a metadata and burn it bring in the information in the backend data data assets right and then supply that information back to scientists gone are the days where scientists are going and asking for connections to all these different data sources waiting days for access requests to be approved just to find out that once they figure out how it with them the relationship diagram right the design looks like in that back-end database how to get to it write the code to get to it and then figure out this is not the information I need that Sally next to me right fold me the wrong information that's where the catalog comes in that's where due to absent data governance having that catalog that metadata management platform available to you they can go into a catalog without having to request access to anything quickly and within five minutes they can see the structures what if the tables look like what did the fields look like are these are these the metrics I need to bring back answers to the business that's data apps it's allowing us to speed up all of that information you know taking stuff that took months now down two weeks down two days down two hours so Steve I wonder if you could pick up on that and just help us understand what data means you we talked about earlier in our previous conversation I mentioned it upfront is this notion of you know the demand for for data access is it was through the roof and and you've gone from that to sort of more of a self-service environment where it's not IT owning the data it's really the businesses owning the data but what what is what is all this data op stuff meaning in your world sure I think it's very similar it's it's how do we enable and get access to that clicker showing the right controls showing the right processes and and building that scalability and agility and into all of it so that we're we're doing this at scale it's much more rapidly available we can discover new data separately determine if it's right or or more importantly if it's wrong similar to what what Vic described it's it's how do we enable the business to make those right decisions on whether or not they're going down the right path whether they're not the catalog is a big part of that we've also introduced a lot of frameworks around scale so just the ability to rapidly ingest data and make that available has been a key for us we've also focused on a prototyping environment so that sandbox mentality of how do we rapidly stand those up for users and and still provide some controls but have provide that ability for people to do that that exploration what we're finding is that by providing the platform and and the foundational layers that were we're getting the use cases to sort of evolve and come out of that as opposed to having the use cases prior to then go build things from we're shifting the mentality within the organization to say we don't know what we need yet let's let's start to explore that's kind of that data scientist mentality and culture it more of a way of thinking as opposed to you know an actual project or implement well I think that that cultural aspect is important of course Caitlin you guys are an AI company or at least that you know part of what you do but you know you've you for four decades maybe centuries you've been organized around different things by factoring plant but sales channel or whatever it is but-but-but-but how has the chief data officer organization within IBM been able to transform itself and and really infuse a data culture across the entire company one of the approaches you know we've taken and we talk about sort of the blueprint to drive AI transformation so that we can achieve and deliver these really high value use cases we talked about the data the technology which we've just pressed on with organizational piece of it duration are so important the change management enabling and equipping our data stewards I'll give one a civic example that I've been really excited about when we were building our platform and starting to pull districting structured unstructured pull it in our ADA stewards are spending a lot of time manually tagging and creating business metadata about that data and we identified that that was a real pain point costing us a lot of money valuable resources so we started to automate the metadata and doing that in partnership with our deep learning practitioners and some of the models that they were able to build that capability we pushed out into our contacts our product last year and one of the really exciting things for me to see is our data stewards who be so value exporters and the skills that they bring have reported that you know it's really changed the way they're able to work it's really sped up their process it's enabled them to then move on to higher value to abilities and and business benefits so they're very happy from an organizational you know completion point of view so I think there's ways to identify those use cases particularly for taste you know we drove some significant productivity savings we also really empowered and hold our data stewards we really value to make their job you know easier more efficient and and help them move on to things that they are more you know excited about doing so I think that's that you know another example of approaching taken yes so the cultural piece the people piece is key we talked a little bit about the process I want to get into a little bit into the tech Steve I wonder if you could tell us you know what's it what's the tech we have this bevy of tools I mentioned a number of them upfront you've got different data stores you've got open source pooling you've got IBM tooling what are the critical components of the technology that people should be thinking about tapping in architecture from ingestion perspective we're trying to do a lot of and a Python framework and scaleable ingestion pipe frameworks on the catalog side I think what we've done is gone with IBM PAC which provides a platform for a lot of these tools to stay integrated together so things from the discovery of data sources the cataloging the documentation of those data sources and then all the way through the actual advanced analytics and Python models and our our models and the open source ID combined with the ability to do some data prep and refinery work having that all in an integrated platform was a key to us for us that the rollout and of more of these tools in bulk as opposed to having the point solutions so that's been a big focus area for us and then on the analytic side and the web versus IDE there's a lot of different components you can go into whether it's meal soft whether it's AWS and some of the native functionalities out there you mentioned before Kafka and Anissa streams and different streaming technologies those are all the ones that are kind of in our Ketil box that we're starting to look at so and one of the keys here is we're trying to make decisions in as close to real time as possible as opposed to the business having to wait you know weeks or months and then by the time they get insights it's late and really rearview mirror so Vic your focus you know in your career has been a lot on data data quality governance master data management data from a data quality standpoint as well what are some of the key tools that you're familiar with that you've used that really have enabled you operationalize that data pipeline you know I would say I'm definitely the IBM tools I have the most experience with that also informatica though as well those are to me the two top players IBM definitely has come to the table with a suite right like Steve said cloud pack for data is really a one-stop shop so that's allowing that quick seamless access for business user versus them having to go into some of the previous versions that IBM had rolled out where you're going into different user interfaces right to find your information and that can become clunky it can add the process it can also create almost like a bad taste and if in most people's mouths because they don't want to navigate from system to system to system just to get their information so cloud pack to me definitely brings everything to the table in one in a one-stop shop type of environment in for me also though is working on the same thing and I would tell you that they haven't come up with a solution that really comes close to what IBM is done with cloud pack for data I'd be interested to see if they can bring that on the horizon but really IBM suite of tools allows for profiling follow the analytics write metadata management access to db2 warehouse on cloud those are the tools that I've worked in my past to implement as well as cloud object store to bring all that together to provide that one stop that at Northwestern right we're working right now with belieber I think calibra is a great set it pool are great garments catalog right but that's really what it's truly made for is it's a governance catalog you have to bring some other pieces to the table in order for it to serve up all the cloud pack does today which is the advanced profiling the data virtualization that cloud pack enables today the machine learning at the level where you can actually work with our and Python code and you put our notebooks inside of pack that's some of this the pieces right that are missing in some of the under vent other vendor schools today so one of the things that you're hearing here is the theme of openness others addition we've talked about a lot of tools and not IBM tools all IBM tools there there are many but but people want to use what they want to use so Kaitlin from an IBM perspective what's your commitment the openness number one but also to you know we talked a lot about cloud packs but to simplify the experience for your client well and I thank Stephen Victoria for you know speaking to their experience I really appreciate feedback and part of our approach has been to really take one the challenges that we've had I mentioned some of the capabilities that we brought forward in our cloud platform data product one being you know automating metadata generation and that was something we had to solve for our own data challenges in need so we will continue to source you know our use cases from and grounded from a practitioner perspective of what we're trying to do and solve and build and the approach we've really been taking is co-creation line and that we roll these capability about the product and work with our customers like Stephen light victorious you really solicit feedback to product route our dev teams push that out and just be very open and transparent I mean we want to deliver a seamless experience we want to do it in partnership and continue to solicit feedback and improve and roll out so no I think that will that has been our approach will continue to be and really appreciate the partnerships that we've been able to foster so we don't have a ton of time but I want to go to practitioners on the panel and ask you about key key performance indicators when I think about DevOps one of the things that we're measuring is the elapsed time the deploy applications start finished where we're measuring the amount of rework that has to be done the the quality of the deliverable what are the KPIs Victoria that are indicators of success in operationalizing date the data pipeline well I would definitely say your ability to deliver quickly right so how fast can you deliver is that is that quicker than what you've been able to do in the past right what is the user experience like right so have you been able to measure what what the amount of time was right that users are spending to bring information to the table in the past versus have you been able to reduce that time to delivery right of information business answers to business questions those are the key performance indicators to me that tell you that the suite that we've put in place today right it's providing information quickly I can get my business answers quickly but quicker than I could before and the information is accurate so being able to measure is it quality that I've been giving that I've given back or is this not is it the wrong information and yet I've got to go back to the table and find where I need to gather that from from somewhere else that to me tells us okay you know what the tools we've put in place today my teams are working quicker they're answering the questions they need to accurately that is when we know we're on the right path Steve anything you add to that I think she covered a lot of the people components the around the data quality scoring right for all the different data attributes coming up with a metric around how to measure that and and then showing that trend over time to show that it's getting better the other one that we're doing is just around overall date availability how how much data are we providing to our users and and showing that trend so when I first started you know we had somewhere in the neighborhood of 500 files that had been brought into the warehouse and and had been published and available in the neighborhood of a couple thousand fields we've grown that into weave we have thousands of cables now available so it's it's been you know hundreds of percent in scale as far as just the availability of that data how much is out there how much is is ready and available for for people to just dig in and put into their their analytics and their models and get those back into the other application so that's another key metric that we're starting to track as well so last question so I said at the top that every application is gonna need to be infused with AI this decade otherwise that application not going to be as competitive as it could be and so for those that are maybe stuck in their journey don't really know where to get started I'll start with with Caitlin and go to Victoria and then and then even bring us home what advice would you give the people that need to get going on this my advice is I think you pull the folks that are either producing or accessing your data and figure out what the rate is between I mentioned some of the data management challenges we were seeing this these processes were taking weeks and prone to error highly manual so part was ripe for AI project so identifying those use cases I think that are really causing you know the most free work and and manual effort you can move really quickly and as you build this platform out you're able to spin those up on an accelerated fashion I think identifying that and figuring out the business impact are able to drive very early on you can get going and start really seeing the value great yeah I would actually say kids I hit it on the head but I would probably add to that right is the first and foremost in my opinion right the importance around this is data governance you need to implement a data governance at an enterprise level many organizations will do it but they'll have silos of governance you really need an interface I did a government's platform that consists of a true framework of an operational model model charters right you have data domain owners data domain stewards data custodians all that needs to be defined and while that may take some work in in the beginning right the payoff down the line is that much more it's it it's allowing your business to truly own the data once they own the data and they take part in classifying the data assets for technologists and for analysts right you can start to eliminate some of the technical debt that most organizations have acquired today they can start to look at what are some of the systems that we can turn off what are some of the systems that we see valium truly build out a capability matrix we can start mapping systems right to capabilities and start to say where do we have wares or redundancy right what can we get rid of that's the first piece of it and then the second piece of it is really leveraging the tools that are out there today the IBM tools some of the other tools out there as well that enable some of the newer next-generation capabilities like unit nai right for example allowing automation for automation which right for all of us means that a lot of the analysts that are in place today they can access the information quicker they can deliver the information accurately like we've been talking about because it's been classified that pre works being done it's never too late to start but once you start that it just really acts as a domino effect to everything else where you start to see everything else fall into place all right thank you and Steve bring us on but advice for your your peers that want to get started sure I think the key for me too is like like those guys have talked about I think all everything they said is valid and accurate thing I would add is is from a starting perspective if you haven't started start right don't don't try to overthink that over plan it it started just do something and and and start the show that progress and value the use cases will come even if you think you're not there yet it's amazing once you have the national components there how some of these things start to come out of the woodwork so so it started it going may have it have that iterative approach to this and an open mindset it's encourage exploration and enablement look your organization in the eye to say why are their silos why do these things like this what are our problem what are the things getting in our way and and focus and tackle those those areas as opposed to trying to put up more rails and more boundaries and kind of encourage that silo mentality really really look at how do you how do you focus on that enablement and then the last comment would just be on scale everything should be focused on scale what you think is a one-time process today you're gonna do it again we've all been there you're gonna do it a thousand times again so prepare for that prepare forever that you're gonna do everything a thousand times and and start to instill that culture within your organization a great advice guys data bringing machine intelligence an AI to really drive insights and scaling with a cloud operating model no matter where that data live it's really great to have have three such knowledgeable practitioners Caitlyn Toria and Steve thanks so much for coming on the cube and helping support this panel all right and thank you for watching everybody now remember this panel was part of the raw material that went into a crowd chat that we hosted on May 27th Crouch at net slash data ops so go check that out this is Dave Volante for the cube thanks for watching [Music]

Published Date : May 28 2020

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
Steve LewisPERSON

0.99+

Caitlyn ToriaPERSON

0.99+

StevePERSON

0.99+

Linda BarraganPERSON

0.99+

Dave VolantePERSON

0.99+

two weeksQUANTITY

0.99+

Victoria StassiPERSON

0.99+

Caitlin AlfrePERSON

0.99+

two hoursQUANTITY

0.99+

VicPERSON

0.99+

two daysQUANTITY

0.99+

May 27thDATE

0.99+

500 filesQUANTITY

0.99+

IBMORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

PythonTITLE

0.99+

five minutesQUANTITY

0.99+

30 new purchasesQUANTITY

0.99+

last yearDATE

0.99+

CaitlinPERSON

0.99+

ClintonPERSON

0.99+

first pieceQUANTITY

0.99+

first bookQUANTITY

0.99+

DavePERSON

0.99+

second pieceQUANTITY

0.99+

BostonLOCATION

0.99+

SallyPERSON

0.99+

todayDATE

0.99+

AWSORGANIZATION

0.99+

hundreds of percentQUANTITY

0.98+

Stephen VictoriaPERSON

0.98+

oneQUANTITY

0.98+

Northwestern MutualORGANIZATION

0.98+

KaitlinPERSON

0.97+

four decadesQUANTITY

0.97+

firstQUANTITY

0.97+

two top playersQUANTITY

0.97+

several years agoDATE

0.96+

about four yearsQUANTITY

0.96+

first customersQUANTITY

0.95+

tons of toolsQUANTITY

0.95+

KailynPERSON

0.95+

bothQUANTITY

0.95+

twoQUANTITY

0.94+

NorthwesternORGANIZATION

0.94+

NorthwesternLOCATION

0.93+

eachQUANTITY

0.91+

CrouchPERSON

0.91+

CBOORGANIZATION

0.91+

DevOpsTITLE

0.91+

two ofQUANTITY

0.89+

AIORGANIZATION

0.87+

thingsQUANTITY

0.87+

three such knowledgeable practitionersQUANTITY

0.87+

Itumeleng Monale, Standard Bank | IBM DataOps 2020


 

from the cube studios in Palo Alto in Boston connecting with thought leaders all around the world this is a cube conversation hi buddy welcome back to the cube this is Dave Volante and you're watching a special presentation data ops enacted made possible by IBM you know what's what's happening is the innovation engine in the IT economy is really shifted used to be Moore's Law today it's applying machine intelligence and AI to data really scaling that and operationalizing that new knowledge the challenges that is not so easy to operationalize AI and infuse it into the data pipeline but what we're doing in this program is bringing in practitioners who have actually had a great deal of success in doing just that and I'm really excited to have it Kumal a Himalayan Manali is here she's the executive head of data management or personal and business banking at Standard Bank of South Africa the tomb of length thanks so much for coming in the queue thank you for having me Dave you're very welcome and first of all how you holding up with this this bovid situation how are things in Johannesburg um things in Johannesburg are fine we've been on lockdown now I think it's day 33 if I'm not mistaken lost count and but we're really grateful for the swift action of government we we only I mean we have less than 4,000 places in the country and infection rate is is really slow so we've really I think been able to find the curve and we're grateful for being able to be protected in this way so all working from home or learning the new normal and we're all in this together that's great to hear why don't you tell us a little bit about your your role you're a data person we're really going to get into it but here with us you know how you spend your time okay well I head up a date operations function and a data management function which really is the foundational part of the data value chain that then allows other parts of the organization to monetize data and liberate it as as as the use cases apply we monetize it ourselves as well but really we're an enterprise wide organization that ensures that data quality is managed data is governed that we have the effective practices applied to the entire lineage of the data ownership and curation is in place and everything else from a regulatory as well as opportunity perspective then is able to be leveraged upon so historically you know data has been viewed as sort of this expense it's it's big it's growing it needs to be managed deleted after a certain amount of time and then you know ten years ago of the Big Data move data became an asset you had a lot of shadow I people going off and doing things that maybe didn't comply to the corporate ethics probably drove here here you're a part of the organization crazy but talk about that how what has changed but they in the last you know five years or so just in terms of how people approach data oh I mean you know the story I tell my colleague who are all bankers obviously is the fact that the banker in 1989 had to mainly just know debits credits and be able to look someone in the eye and know whether or not they'd be a credit risk or not you know if we lend you money and you pay it back the the banker of the late 90s had to then contend with the emergence of technologies that made their lives easier and allowed for automation and processes to run much more smoothly um in the early two-thousands I would say that digitization was a big focus and in fact my previous role was head of digital banking and at the time we thought digital was the panacea it is the be-all and end-all it's the thing that's gonna make organizations edit lo and behold we realized that once you've gotten all your digital platforms ready they are just the plate or the pipe and nothing is flowing through it and there's no food on the face if data is not the main photo really um it's always been an asset I think organizations just never consciously knew that data was that okay so so what sounds like once you've made that sort of initial digital transformation you really had to work it and what we're hearing from a lot of practitioners like self as challenges related to that involve different parts of the organization different skill sets of challenges and sort of getting everybody to work together on the same page it's better but maybe you could take us back to sort of when you started on this initiative around data Ops what was that like what were some of the challenges that you faced and how'd you get through them okay first and foremost Dave organizations used to believe that data was I t's problem and that's probably why you you then saw the emergence of things like chatter IP but when you really acknowledge that data is an essay just like money is an asset then you you have to then take accountability for it just the same way as you would any other asset in the organization and you will not abdicate its management to a separate function that's not cold to the business and oftentimes IT are seen as a support or an enabling but not quite the main show in most organizations right so what we we then did is first emphasize that data is a business capability the business function it presides in business makes to product management makes to marketing makes to everything else that the business needs for data management also has to be for to every role in every function to different degrees and varying bearing offense and when you take accountability as an owner of a business unit you also take accountability for the data in the systems that support the business unit for us that was the first picture um and convincing my colleagues that data was their problem and not something that we had to worry about they just kind of leave us to to it was was also a journey but that was kind of the first step into it in terms of getting the data operations journey going um you had to first acknowledge please carry on no you just had to first acknowledge that it's something you must take accountability of as a banker not just need to a different part of the organization that's a real cultural mindset you know in the game of rock-paper-scissors you know culture kinda beats everything doesn't it it's almost like a yep a trump card and so so the businesses embrace that but but what did you do to support that is there has to be trust in the data that it has to be a timeliness and so maybe you could take us through how you achieve those objectives and maybe some other objectives that business the man so the one thing I didn't mention Dave is that obviously they didn't embrace it in the beginning it wasn't a it wasn't there oh yeah that make sense they do that type of conversation um what what he had was a few very strategic people with the right mindset that I could partner with that understood the case for data management and while we had that as as an in we developed a framework for a fully matured data operations capability in the organization and what that would look like in a target date scenario and then what you do is you wait for a good crisis so we had a little bit of a challenge in that our local regulator found us a little bit wanting in terms of our date of college and from that perspective it then brought the case for data quality management so now there's a burning platform you have an appetite for people to partner with you and say okay we need this to comply to help us out and when they start seeing their opt-in action do they then buy into into the concept so sometimes you need to just wait for a good Christ and leverage it and only do that which the organization will appreciate at that time you don't have to go Big Bang data quality management was the use case at the time five years ago so we focused all our energy on that and after that it gave us leeway and license really bring to maturity all the other capabilities at the business might not well understand as well so when that crisis hit of thinking about people process in technology you probably had to turn some knobs in each of those areas can you talk about that so from a technology perspective that that's when we partnered with with IBM to implement information analyzer for us in terms of making sure that then we could profile the data effectively what was important for us is to to make strides in terms of showing the organization progress but also being able to give them access to self-service tools that will give them insight into their data from a technology perspective that was kind of I think the the genesis of of us implementing and the IBM suite in earnest from a data management perspective people wise we really then also began a data stewardship journey in which we implemented business unit stewards of data I don't like using the word steward because in my organization it's taken lightly almost like a part-time occupation so we converted them we call them data managers and and the analogy I would give is every department with a P&L any department worth its salt has a FDA or financial director and if money is important to you you have somebody helping you take accountability and execute on your responsibilities in managing that that money so if data is equally important as an asset you will have a leader a manager helping you execute on your data ownership accountability and that was the people journey so firstly I had kind of soldiers planted in each department which were data managers that would then continue building the culture maturing the data practices as as applicable to each business unit use cases so what was important is that every manager in every business unit to the Data Manager focus their energy on making that business unit happy by ensuring that they data was of the right compliance level and the right quality the right best practices from a process and management perspective and was governed and then in terms of process really it's about spreading through the entire ecosystem data management as a practice and can be quite lonely um in the sense that unless the whole business of an organization is managing data they worried about doing what they do to make money and most people in most business units will be the only unicorn relative to everybody else who does what they do and so for us it was important to have a community of practice a process where all the data managers across business as well as the technology parts and the specialists who were data management professionals coming together and making sure that we we work together on on specific you say so I wonder if I can ask you so the the industry sort of likes to market this notion of of DevOps applied to data and data op have you applied that type of mindset approach agile of continuous improvement is I'm trying to understand how much is marketing and how much actually applicable in the real world can you share well you know when I was reflecting on this before this interview I realized that our very first use case of data officers probably when we implemented information analyzer in our business unit simply because it was the first time that IT and business as well as data professionals came together to spec the use case and then we would literally in an agile fashion with a multidisciplinary team come together to make sure that we got the outcomes that we required I mean for you to to firstly get a data quality management paradigm where we moved from 6% quality at some point from our client data now we're sitting at 99 percent and that 1% literally is just the timing issue to get from from 6 to 99 you have to make sure that the entire value chain is engaged so our business partners will the fundamental determinant of the business rules apply in terms of what does quality mean what are the criteria of quality and then what we do is translate that into what we put in the catalog and ensure that the profiling rules that we run are against those business rules that were defined at first so you'd have upfront determination of the outcome with business and then the team would go into an agile cycle of maybe two-week sprints where we develop certain things have stand-ups come together and then the output would be - boarded in a prototype in a fashion where business then gets to go double check that out so that was the first iterate and I would say we've become much more mature at it and we've got many more use cases now and there's actually one that it's quite exciting that we we recently achieved over the end of of 2019 into the beginning of this year so what we did was they I'm worried about the sunlight I mean through the window you look creative to me like sunset in South Africa we've been on the we've been on CubeSat sometimes it's so bright we have to put on sunglasses but so the most recent one which was in in mates 2019 coming in too early this year we we had long kind of achieved the the compliance and regulatory burning platform issues and now we are in a place of I think opportunity and luxury where we can now find use cases that are pertinent to business execution and business productivity um the one that comes to mind is we're a hundred and fifty eight years old as an organization right so so this Bank was born before technology it was also born in the days of light no no no integration because every branch was a standalone entity you'd have these big ledges that transactions were documented in and I think once every six months or so these Ledger's would be taken by horse-drawn carriage to a central place to get go reconcile between branches and paper but the point is if that is your legacy the initial kind of ERP implementations would have been focused on process efficiency based on old ways of accounting for transactions and allocating information so it was not optimized for the 21st century our architecture had has had huge legacy burden on it and so going into a place where you can be agile with data is something that we constantly working toward so we get to a place where we have hundreds of branches across the country and all of them obviously telling to client servicing clients as usual and and not being able for any person needing sales teams or executional teams they were not able in a short space of time to see the impact of the tactic from a database fee from a reporting history and we were in a place where in some cases based on how our Ledger's roll up and the reconciliation between various systems and accounts work it would take you six weeks to verify whether your technique were effective or not because to actually see the revenue hitting our our general ledger and our balance sheet might take that long that is an ineffective way to operate in a such a competitive environment so what you had our frontline sales agents literally manually documenting the sales that they had made but not being able to verify whether that or not is bringing revenue until six weeks later so what we did then is we sat down and defined all the requirements were reporting perspective and the objective was moved from six weeks latency to 24 hours um and even 24 hours is not perfect our ideal would be that bite rows of day you're able to see what you've done for that day but that's the next the next epoch that will go through however um we literally had the frontline teams defining what they'd want to see in a dashboard the business teams defining what the business rules behind the quality and the definitions would be and then we had an entire I'm analytics team and the data management team working around sourcing the data optimising and curating it and making sure that the latency had done that's I think only our latest use case for data art um and now we're in a place where people can look at a dashboard it's a cubed self-service they can learn at any time I see the sales they've made which is very important right now at the time of covert nineteen from a form of productivity and executional competitiveness those are two great use cases of women lying so the first one you know going from data quality 6% the 99% I mean 6% is all you do is spend time arguing about the data bills profanity and then 99% you're there and you said it's just basically a timing issue use latency in the timing and then the second one is is instead of paving the cow path with an outdated you know ledger Barret data process week you've now compressed that down to 24 hours you want to get the end of day so you've built in the agility into your data pipeline I'm going to ask you then so when gdpr hit were you able to very quickly leverage this capability and and apply and then maybe other of compliance edik as well well actually you know what we just now was post TDP our us um and and we got GDP all right about three years ago but literally all we got right was reporting for risk and compliance purposes they use cases that we have now are really around business opportunity lists so the risk so we prioritize compliance report a long time it but we're able to do real-time reporting from a single transaction perspective I'm suspicious transactions etc I'm two hours in Bank and our governor so from that perspective that was what was prioritize in the beginning which was the initial crisis so what you found is an entire engine geared towards making sure that data quality was correct for reporting and regulatory purposes but really that is not the be-all and end-all of it and if that's all we did I believe we really would not have succeeded or could have stayed dead we succeeded because Dana monetization is actually the penis' t the leveraging of data for business opportunity is is actually then what tells you whether you've got the right culture or not you're just doing it to comply then it means the hearts and minds of the rest of the business still aren't in the data game I love this story because it's me it's nirvana for so many years we've been pouring money to mitigate risk and you have no choice do it you know the general council signs off on it the the CFO but grudgingly signs off on it but it's got to be done but for years decades we've been waiting to use these these risk initiatives to actually drive business value you know it kind of happened with enterprise data warehouse but it was too slow it was complicated and it certainly didn't happen with with email archiving that was just sort of a tech balk it sounds like you know we're at that point today and I want to ask you I mean like you know you we talking earlier about you know the crisis gonna perpetuated this this cultural shift and you took advantage of that so we're out who we the the mother nature dealt up a crisis like we've never seen before how do you see your data infrastructure your data pipeline your data ops what kind of opportunities do you see in front of you today as a result of ovid 19 well I mean because of of the quality of kind data that we had now we were able to very quickly respond to to pivot nineteen in in our context where the government put us on lockdown relatively early in in the curve or in the cycle of infection and what it meant is it brought a little bit of a shock to the economy because small businesses all of a sudden didn't have a source of revenue or potentially three to six weeks and based on the data quality work that we did before it was actually relatively easy to be agile enough to do the things that we did so within the first weekend of of lockdown in South Africa we were the first bank to proactively and automatically offer small businesses and student and students with loans on our books a instant three month payment holiday assuming they were in good standing and we did that upfront though it was actually an opt-out process rather than you had to fall in and arrange for that to happen and I don't believe we would have been able to do that if our data quality was not with um we have since made many more initiatives to try and keep the economy going to try and keep our clients in in a state of of liquidity and so you know data quality at that point and that Dharma is critical to knowing who you're talking to who needs what and in which solutions would best be fitted towards various segments I think the second component is um you know working from home now brings an entirely different normal right so so if we had not been able to provide productivity dashboard and and and sales and dashboards to to management and all all the users that require it we would not be able to then validate or say what our productivity levels are now that people are working from home I mean we still have essential services workers that physically go into work but a lot of our relationship bankers are operating from home and that face the baseline and the foundation that we said productivity packing for various methods being able to be reported on in a short space of time has been really beneficial the next opportunity for us is we've been really good at doing this for the normal operational and front line and type of workers but knowledge workers have also know not necessarily been big productivity reporters historically they kind of get an output then the output might be six weeks down the line um but in a place where teams now are not locate co-located and work needs to flow in an edge of passion we need to start using the same foundation and and and data pipeline that we've laid down as a foundation for the reporting of knowledge work and agile team type of metric so in terms of developing new functionality and solutions there's a flow in a multidisciplinary team and how do those solutions get architected in a way where data assists in the flow of information so solutions can be optimally developed well it sounds like you're able to map a metric but business lines care about you know into these dashboards you usually the sort of data mapping approach if you will which makes it much more relevant for the business as you said before they own the data that's got to be a huge business benefit just in terms of again we talked about cultural we talked about speed but but the business impact of being able to do that it has to be pretty substantial it really really is um and and the use cases really are endless because every department finds their own opportunity to utilize in terms of their also I think the accountability factor has has significantly increased because as the owner of a specific domain of data you know that you're not only accountable to yourself and your own operation but people downstream to you as a product and in an outcome depend on you to ensure that the quality of the data you produces is of a high nature so so curation of data is a very important thing and business is really starting to understand that so you know the cards Department knows that they are the owners of card data right and you know the vehicle asset Department knows that they are the owners of vehicle they are linked to a client profile and all of that creates an ecosystem around the plan I mean when you come to a bank you you don't want to be known as a number and you don't want to be known just for one product you want to be known across everything that you do with that with that organization but most banks are not structured that way they still are product houses and product systems on which your data reside and if those don't act in concert then we come across extremely schizophrenic as if we don't know our clients and so that's very very important stupid like I can go on for an hour talking about this topic but unfortunately we're we're out of time thank you so much for sharing your deep knowledge and your story it's really an inspiring one and congratulations on all your success and I guess I'll leave it with you know what's next you gave us you know a glimpse of some of the things you wanted to do pressing some of the the elapsed times and the time cycle but but where do you see this going in the next you know kind of mid term and longer term currently I mean obviously AI is is a big is a big opportunity for all organizations and and you don't get automation of anything right if the foundations are not in place so you believe that this is a great foundation for anything AI to be applied in terms of the use cases that we can find the second one is really providing an API economy where certain data product can be shared with third parties I think that probably where we want to take things as well we are really utilizing external third-party data sources I'm in our data quality management suite to ensure validity of client identity and and and residents and things of that nature but going forward because been picked and banks and other organizations are probably going to partner to to be more competitive going forward we need to be able to provide data product that can then be leveraged by external parties and vice-versa to be like thanks again great having you thank you very much Dave appreciate the opportunity thank you for watching everybody that we go we are digging in the data ops we've got practitioners we've got influencers we've got experts we're going in the crowd chat it's the crowd chat net flash data ops but keep it right there way back but more coverage this is Dave Volante for the cube [Music] you

Published Date : May 28 2020

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
JohannesburgLOCATION

0.99+

1989DATE

0.99+

six weeksQUANTITY

0.99+

Dave VolantePERSON

0.99+

IBMORGANIZATION

0.99+

DavePERSON

0.99+

threeQUANTITY

0.99+

24 hoursQUANTITY

0.99+

two-weekQUANTITY

0.99+

6%QUANTITY

0.99+

Palo AltoLOCATION

0.99+

two hoursQUANTITY

0.99+

South AfricaLOCATION

0.99+

less than 4,000 placesQUANTITY

0.99+

99 percentQUANTITY

0.99+

Standard BankORGANIZATION

0.99+

99%QUANTITY

0.99+

21st centuryDATE

0.99+

6QUANTITY

0.99+

second componentQUANTITY

0.99+

hundreds of branchesQUANTITY

0.99+

2019DATE

0.99+

first stepQUANTITY

0.99+

five yearsQUANTITY

0.99+

first bankQUANTITY

0.99+

1%QUANTITY

0.98+

five years agoDATE

0.98+

first timeQUANTITY

0.98+

BostonLOCATION

0.98+

99QUANTITY

0.98+

each departmentQUANTITY

0.98+

firstQUANTITY

0.98+

late 90sDATE

0.97+

six weeks laterDATE

0.97+

todayDATE

0.97+

three monthQUANTITY

0.97+

ten years agoDATE

0.96+

an hourQUANTITY

0.96+

a hundred and fifty eight years oldQUANTITY

0.96+

firstlyQUANTITY

0.95+

second oneQUANTITY

0.95+

first weekendQUANTITY

0.94+

one productQUANTITY

0.94+

nineteenQUANTITY

0.94+

first pictureQUANTITY

0.93+

each business unitQUANTITY

0.91+

eachQUANTITY

0.91+

KumalPERSON

0.89+

single transactionQUANTITY

0.89+

Big BangEVENT

0.88+

first oneQUANTITY

0.88+

once every six monthsQUANTITY

0.87+

2020DATE

0.86+

LedgerORGANIZATION

0.85+

first use caseQUANTITY

0.84+

every branchQUANTITY

0.83+

about three years agoDATE

0.82+

ChristPERSON

0.81+

oneQUANTITY

0.8+

Itumeleng MonalePERSON

0.79+

DevOpsTITLE

0.78+

two great use casesQUANTITY

0.78+

yearsQUANTITY

0.77+

Standard Bank of SouthORGANIZATION

0.76+

DharmaORGANIZATION

0.76+

early this yearDATE

0.74+

l councilORGANIZATION

0.71+

FDAORGANIZATION

0.7+

endDATE

0.69+

this yearDATE

0.68+

Moore's LawTITLE

0.67+

IBM DataOpsORGANIZATION

0.65+

DanaPERSON

0.63+

every businessQUANTITY

0.62+

Inderpal Bhandari, IBM | IBM DataOps 2020


 

from the cube studios in Palo Alto in Boston connecting with thought leaders all around the world this is a cube conversation hi buddy welcome this special digital presentation where we're covering the topic of data ops and specifically how IBM is really operationalizing and automating the data pipeline with data ops and with me is Interpol Bhandari who is the global chief data officer at IBM in Nepal has always great to see you thanks for coming on my pleasure you know the standard throw away question from guys like me is you know what keeps the chief data officer up at night well I know what's keeping you up at night it's kovat 19 how are you doing it's keeping keeping all of us yeah for sure so how you guys making out as a leader I'm interested in you know how you have responded with whether it's you know communications obviously you're doing much more stuff you know remotely you're not on airplanes certainly like you used to be but but what was your first move when you actually realized this was going to require a shift well I think one of the first things that I did was to test the ability of my organization who worked remotely this was well before the the recommendations came in from the government but just so that we wanted you know to be sure that this is something that we could pull off if there were extreme circumstances where even everybody was good and so that was one of the first things we did along with that I think another major activity that we embarked on is even that we had created this central data and AI platform for IBM using our hybrid multi cloud approach how could that be adapting very very quickly you helped with the covert situation but those were the two big items that my team embarked on very quickly and again like I said this is well before there was any recommendations from the government or even internally within IBM any recommendations but B we decided that we wanted to run ahead and make sure that we were ready to ready to operate in that fashion and I believe a lot of my colleagues did the same yeah there's a there's a conversation going on right now just around productivity hits that people may be taking because they really weren't prepared it sounds like you're pretty comfortable with the productivity impact that you're achieving oh I'm totally comfortable with the productivity in fact I will tell you that while we've gone down this spot we've realized that in some cases the productivity is actually going to be better when people are working from home and they're able to focus a lot more on the work aspect you know and this could this runs the gamut from the nature of the job where you know somebody who basically needs to be in the front of a computer and is remotely taking care of operations you know if they don't have to come in their productivity is gonna go up somebody like myself who had a long drive into work you know which I would use on phone calls but now that entire time is can be used a lot more productivity but not maybe in a lot more productive manner so there is a we realize that that there's going to be some aspects of productivity that will actually be helped by the situation provided you're able to deliver the services that you deliver with the same level of quality and satisfaction that you've always done now there were certain other aspects where you know productivity is going to be affected so you know my team there's a lot of whiteboarding that gets done there are lots of informal conversations that spark creativity but those things are much harder to replicate in a remote in life so we've got a sense of you know where we have to do some work what things together versus where we were actually going to be more productive but all in all they are very comfortable that we can pull this off no that's great I want to stay on Kovac for a moment and in the context of just data and data ops and you know why now obviously with a crisis like this it increases the imperative to really have your data act together but I want to ask you both specifically as it relates to Co vid why data ops is so important and then just generally why at this this point in our time so I mean you know the journey we've been on they you know when I joined our data strategy centered around the cloud data and AI mainly because IBM's business strategy was around that and because there wasn't the notion of ái in enterprise right there was everybody understood what AI means for the consumer but for the enterprise people don't really understand what it meant so our data strategy became one of actually making IBM itself into an AI and a BA and then using that as a showcase for our clients and customers who look a lot like us to make them into a eye on the prize and in a nutshell what that translated to was that one had to in few AI into the workflow of the key business processes of enterprise so if you think about that workflow is very demanding why do you have to be able to deliver data and insights on time just when it's needed otherwise you can essentially slow down the whole workflow of a major process with but to be able to pull all that off you need to have your own data very very streamlined so that a lot of it is automated and you're able to deliver those insights as the people who are involved in the workflow needed so we've spent a lot of time while we were making IBM into an AI enterprise and infusing AI into our keepers and thus processes into essentially a data ops pipeline that was very very streamlined which then allowed us to very quickly adapt to the covert 19 situation and I'll give you one specific example that we'll go to you know how one would say one could essentially leverage that capability that I just talked about to do this so one of the key business processes that we had taken aim at was our supply chain you know we're a global company and our supply chain is critical we have lots of suppliers and they are all over the globe and we have different types of products so that you know it has a multiplicative fact is we go from each of those you have other additional suppliers and you have events you have other events you have calamities you have political events so we have to be able to very quickly understand the risk associated with any of those events with regard to our supply chain and make appropriate adjustments on the fly so that was one of the key applications that we built on our central data and the Aqua and as part of a data ops pipeline that meant he ingested the ingestion of the several hundred sources of data had to be blazingly fast and also refreshed very very quickly also we had to then aggregate data from the outside from external sources that had to do with weather related events that had to do with political events social media feeds etcetera and overlay that on top of our map of interest with regard to our supply chain sites and also where they were supposed to deliver we'd also weaved in our capabilities here to track those shipments as they flowed and have that data flow back as well so that we would know exactly where where things were this is only possible because we had a streamlined data ops capability and we had built this central data Nai platform for IBM now you flip over to the covert 19 situation when go with 19 you know emerged and we began to realize that this was going to be a significant significant pandemic what we were able to do very quickly was to overlay the Kovach 19 incidents on top of our sites of interest as well as pick up what was being reported about those sites of interest and provide that over to our business continuity so this became an immediate exercise that we embarked but it wouldn't have been possible if you didn't have the foundation of the data ops pipeline as well as that central data Nai platform in place to help you do that very very quickly and adapt so so what I really like about this story and something that I want to drill into is it essentially a lot of organizations have a real tough time operationalizing AI and fusing it to use your word and the fact that you're doing it is really a good proof point that I want to explore a little bit so you're essentially there was a number of aspects of what you just described there was the data quality piece with your data quality in theory anyway is gonna go up with more data if you can handle it and the other was speed time to insight so you can respond more quickly if it's think about this Kovan situation if your days behind or weeks behind which is not uncommon you know sometimes even worse you just can't respond I mean these things change daily sometimes certainly within the day so is that right that's kind of the the business outcome and objective that you guys were after yes you know so trauma from an infused AI into your business processes by the overarching outcome metric that one focuses on is end to end cycle so you take that process the end-to-end process and you're trying to reduce the end-to-end cycle time by you know several factors several orders of magnitude we did for instance in my organization that have to do with the generation of metadata is data about data and that's usually a very time-consuming process and we've reduced that by over 95% by using AI you actually help in the metadata generation itself and that's applied now across the board for many different business processes that you know iBM has that's the same kind of principle that was you you'll be able to do that so that foundation essentially enables you to go after that cycle time reduction right off the bat so when you get to a situation like of open 19 situation which demands urgent action your foundation is already geared to deliver on that so I think actually we might have a graphic and then the second graphic guys if you bring up this second one I think this is Interpol what you're talking about here that sort of 95 percent reduction guys if you could bring that up would take a look at it so this is maybe not a co vid use case yeah here it is so that 95 percent reduction in in cycle time improving and data quality what we talked about there's actually some productivity metrics right this is what you're talking about here in this metadata example correct yeah yes the middle do that right it's so central to everything that one does with data I mean it's basically data about data and this is really the business metadata that we're talking about which is once you have data in your data Lee if you don't have business metadata describing what that data is then it's very hard for people who are trying to do things to determine whether they can even whether they even have access to the right data and typically this process has been done manually because somebody looks at the data they looks at the fields and they describe it and it could easily take months and what we did was we essentially use a deep learning and a natural language processing approach looked at all the data that we've had historically over an idea and we've automated the metadata generation so whether it was you know you were talking about both the data relevant for probit team or for supply chain or for a receivable process any one of our business processes this is one of those fundamental steps that one must go through to be able to get your data ready for action and if you were able to take that cycle time for that step and reduce it by 95% you can imagine the acceleration yeah and I liked it we were saying before you talk about the end to end a concept you're applying system thinking here which is very very important because you know a lot of a lot of points that I talked you'll they'll be they're so focused on one metric may be optimizing one component of that end to end but it's really the overall outcome that you're trying to achieve you you may sometimes you know be optimizing one piece but not the whole so that systems thinking is is very very important isn't it the system's thinking is extremely important overall no matter you know where you're involved in the process of designing the system but if you're the data guy it's incredibly important because not only does that give you an insight into the cycle time reduction but it also gives it clues you in into what standardization is necessary in the data so that you're able to support an eventual out you know a lot of people will go down the path of data governance and creation of data standard and you can easily boil the ocean trying to do that but if you actually start with an end-to-end view of your key processes and that by extension the outcomes associated with those processes as well as the user experience at the end of those processes and kind of then work backwards as to what are the standards that you need for the data that's going to feed into all that that's how you arrive at you know a viable practical data standards effort that you can essentially push forward with so there's there are multiple aspects when you take that end-to-end system you that helps the chief later one of the other tenets of data ops is really the ability across the organization for everybody to have visibility communications it's very key we've got another graphic that I want to show around the organizational you know in the right regime and this is a complicated situation for a lot of people but it's imperative guys if you bring up the first graphic it's imperative that organizations you know fine bring in the right stakeholders and actually identify those individuals that are going to participate so that there's full visibility everybody understands what their their roles are they're not in in silos so a guys if you could show us that first graphic that would be great but talk about the organization and the right regime they're Interpol yes yes I believe you're going to what you're gonna show up is actually my organization but I think it's yes it's very very illustrative of what one has to set up to be able to pull off the kind of impact you know so let's say we talked about that central data and AI platform that's driving the entire enterprise and you're infusing AI into key business processes like the supply chain you then create applications like the operational risk insights that we talked about and then extend it over to a faster merging and changing situation like the overt nineteen you need an organization that obviously reflects the technical aspects of the plan right so you have to have the data engineering arm and in my case there's a lot of emphasis around because that's one of those skill set areas that's really quite rare and but also very very powerful so they're the major technology arms of that there's also the governance arm that I talked about where you have to produce a set of standards and implement them and enforce them so that you're able to make this end-to-end impact but then there's also there's a there's an adoption where there's a there's a group that reports in to me very very you know empowered which essentially has to convince the rest of the organization to adopt but the key to their success has been in power in the sense that they are empowered to find like-minded individuals in our key business processes who are also empowered and if they agree they just move forward and go ahead and do it because you know we've already provided the central capabilities by central I don't mean they're all in one location we're completely global and you know it's it's it's a hybrid multi-cloud set up but it's central in the sense that it's one source to come for for trusted data as well as the expertise that you need from an AI standpoint to be able to move forward and deliver the business outcome so when these business schemes come together with the adoption that's where the magic hand so that's another another aspect of the organization that's critical and then we've also got a data officer council that I chair and that has to do with the people who are the chief data officer z' of the individual business units that we have and they're kind of my extended team into the rest of the organization and we leverage that bolt from a adoption of the platform standpoint but also in terms of defining and enforcing standard it helps us do want to come back the Ovid talked a little bit about business resiliency people I think you've probably seen the news that IBM's you know providing super computer resources to the government to fight coronavirus you've also just announced that some some RTP folks are helping first responders and nonprofits and providing capabilities for no charge which is awesome I mean it's the kind of thing look I'm sensitive the companies like IBM you know you don't want to appear to be ambulance-chasing in these times however IBM and other big tech companies you're in a position to help and that's what you're doing here so maybe you could talk a little bit about what you're doing in this regard and then we'll tie it up with just business resiliency and the importance of data right right so you know I'd explained the operational risk insights application that we had which we were using internally and be covert nineteen even be using it we were using it primarily to assess the risk to our supply chain from various events and then essentially react very very quickly to those through those events so you could manage the situation well we realize that this is something that you know several non government NGOs that big they could essentially use the ability because they have to manage many of these situations like natural disasters and so we've given that same capability to the NGOs to you and to help them to help them streamline their planning and their thinking by the same token but you talked about Oh with nineteen that same capability with the poet mine team data overlaid on top of them essentially becomes a business continuity planning and resilience because let's say I'm a supply chambers right now I can look the incidence of probe ignite and I can and I know where my suppliers are and I can see the incidence and I can say oh yes know this supplier and I can see that the incidence is going up this is likely to be affected let me move ahead and start making plans backup plans just in case it reaches a crisis level then on the other hand if you're somebody in our revenue planning you know on the finance side and you know where your keep clients and customers are located again by having that information overlaid with those sites you can make your own judgments and you can make your own assessment to do that so that's how it translates over into a business continuity and resilient resilience planning - we are internally doing that now - every department you know that's something that we are actually providing them this capability because we could build rapidly on what we had already done and to be able to do that and then as we get inside into what each of those departments do with that data because you know once they see that data once they overlay it to their sites of interest and this is you know anybody and everybody in IBM because no matter what department they're in there are going to be sites of interest that are going to be affected and they have an understanding of what those sites of interest mean in the context of the planning that they're doing and so they'll be able to make judgments but as we gain a better understanding of that we will automate those capabilities more and more for each of those specific areas and now you're talking about a comprehensive approach an AI approach to business continuity and resilience planning in the context of a large complicated organization like IBM which obviously will be of great interest to enterprise clients and customers right one of the things that we're researching now is trying to understand you know what about this crisis is gonna be permanent some things won't be but but we think many things will be there's a lot of learnings do you think that organizations will rethink business resiliency in this context that they might sub optimize profitability for example to be more prepared for crises like this with better business resiliency and what role would data play in that so no it's a very good question and timely question Dave so I mean clearly people have understood that with regard to such a pandemic the first line of beef right is it is it's not going to be so much on the medicine side because the vaccine is not even we won't be available for a period of time it has to go to development so the first line of defense is actually to take a quarantine like a pro like we've seen play out across the world and then that in effect results in an impact on the businesses right in the economic climate and the businesses there's an impact I think people have realized this now they will obviously factor this in into their into how they do business will become one of those things from if this is time talking about how this becomes permanent I think it's going to become one of those things that if you're a responsible enterprise you are going to be planning for you're going to know how to implement this on the second go-around so obviously you put those frameworks and structures in place and there will be a certain cost associated with them and one could argue that that could eat into the profitability on the other hand what I would say is because these two points really that these are fast emerging fluid situations you have to respond very very quickly to those you will end up laying out a foundation pretty much like we did which enables you to really accelerate your pipeline right so the data ops pipelines we talked about there there's a lot of automation so that you can react very quickly you know data ingestion very very rapidly that you're able to you know do that kind of thing the metadata generation just the entire pipeline that we're talking about that you're able to respond and very quickly bring in new data and then aggregated at the right levels infuse it into the workflows and then deliver it to the right people at the right time I will you know that will become a must now but once you do that you could argue that there is a cost associated with doing that but we know that the cycle time reductions on things like that they can run you know I mean I gave you the example of 95 percent you know on average we see like a 70% end to end cycle time era where we've implemented the approach that's been pretty pervasive with an idea across a business process so that in a sense in in essence then actually becomes a driver for profitability so yes it might you know this might back people into doing that but I would argue that that's probably something that's going to be very good long term for the enterprises involved and they'll be able to leverage that in their in their business and I think that just the competitive pressure of having to do that will force everybody down that path mean but I think it'll be eventually a good that end and cycle time compression is huge and I like what you're saying because it's it's not just a reduction in the expected loss during a crisis there's other residual benefits to the organization Interpol thanks so much for coming on the cube and sharing this really interesting and deep case study I know there's a lot more information out there so really appreciate your time all right take care buddy thanks for watching and this is Dave Allante for the cube and we will see you next time [Music]

Published Date : May 28 2020

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
Dave AllantePERSON

0.99+

IBMORGANIZATION

0.99+

95 percentQUANTITY

0.99+

95 percentQUANTITY

0.99+

70%QUANTITY

0.99+

Palo AltoLOCATION

0.99+

DavePERSON

0.99+

95%QUANTITY

0.99+

InterpolORGANIZATION

0.99+

NepalLOCATION

0.99+

Interpol BhandariPERSON

0.99+

two pointsQUANTITY

0.99+

nineteenQUANTITY

0.99+

first graphicQUANTITY

0.99+

first moveQUANTITY

0.99+

BostonLOCATION

0.99+

first lineQUANTITY

0.99+

oneQUANTITY

0.98+

two big itemsQUANTITY

0.98+

one pieceQUANTITY

0.98+

Kovach 19EVENT

0.97+

pandemicEVENT

0.97+

one metricQUANTITY

0.96+

Inderpal BhandariPERSON

0.96+

KovacORGANIZATION

0.95+

eachQUANTITY

0.94+

one componentQUANTITY

0.94+

KovanEVENT

0.94+

over 95%QUANTITY

0.93+

bothQUANTITY

0.93+

several hundred sourcesQUANTITY

0.92+

first line of beefQUANTITY

0.92+

iBMORGANIZATION

0.91+

second graphicQUANTITY

0.91+

second oneQUANTITY

0.91+

one sourceQUANTITY

0.9+

one of those thingsQUANTITY

0.9+

first thingsQUANTITY

0.88+

a lot of peopleQUANTITY

0.88+

lot of a lot of pointsQUANTITY

0.79+

IBM DataOpsORGANIZATION

0.78+

coronavirusOTHER

0.77+

second goQUANTITY

0.77+

lotQUANTITY

0.75+

firstQUANTITY

0.74+

a lot of peopleQUANTITY

0.73+

19OTHER

0.73+

19 situationQUANTITY

0.72+

one of those fundamental stepsQUANTITY

0.71+

non governmentQUANTITY

0.6+

OvidORGANIZATION

0.55+

2020DATE

0.55+

moreQUANTITY

0.51+

19EVENT

0.41+

Julie Lockner, IBM | IBM DataOps 2020


 

>>from the Cube Studios in Palo Alto and Boston connecting with thought leaders all around the world. This is a cube conversation. >>Hi, everybody. This is Dave Volante with Cuban. Welcome to the special digital presentation. We're really digging into how IBM is operational izing and automating the AI and data pipeline not only for its clients, but also for itself. And with me is Julie Lockner, who looks after offering management and IBM Data and AI portfolio really great to see you again. >>Great, great to be here. Thank you. Talk a >>little bit about the role you have here at IBM. >>Sure, so my responsibility in offering >>management and the data and AI organization is >>really twofold. One is I lead a team that implements all of the back end processes, really the operations behind any time we deliver a product from the Data and AI team to the market. So think about all of the release cycle management are seeing product management discipline, etcetera. The other role that I play is really making sure that I'm We are working with our customers and making sure they have the best customer experience and a big part of that is developing the data ops methodology. It's something that I needed internally >>from my own line of business execution. But it's now something that our customers are looking for to implement in their shops as well. >>Well, good. I really want to get into that. So let's let's start with data ops. I mean, I think you know, a lot of people are familiar with Dev Ops. Not maybe not everybody's familiar with data ops. What do we need to know about data? >>Well, I mean, you bring up the point that everyone knows Dev ops. And in fact, I think you know what data ops really >>does is bring a lot of the benefits that Dev Ops did for application >>development to the data management organizations. So when we look at what is data ops, it's a data management. Uh, it is a data management set of principles that helps organizations bring business ready data to their consumers. Quickly. It takes it borrows from Dev ops. Similarly, where you have a data pipeline that associates a business value requirement. I have this business initiative. It's >>going to drive this much revenue or this must cost >>savings. This is the data that I need to be able to deliver it. How do I develop that pipeline and map to the data sources Know what data it is? Know that I can trust it. So ensuring >>that it has the right quality that I'm actually using, the data that it was meant >>for and then put it to use. So in in history, most data management practices deployed a waterfall like methodology. Our implementation methodology and what that meant is all the data pipeline >>projects were implemented serially, and it was done based on potentially a first in first out program management office >>with a Dev Ops mental model and the idea of being able to slice through all of the different silos that's required to collect the data, to organize it, to integrate it, the validate its quality to create those data integration >>pipelines and then present it to the dashboard like if it's a Cognos dashboard >>or a operational process or even a data science team, that whole end to end process >>gets streamlined through what we're pulling data ops methodology. >>So I mean, as you well know, we've been following this market since the early days of Hadoop people struggle with their data pipelines. It's complicated for them, there's a a raft of tools and and and they spend most of their time wrangling data preparing data moving data quality, different roles within the organization. So it sounds like, you know, to borrow from from Dev Ops Data offices is all about streamlining that data pipeline, helping people really understand and communicate across. End the end, as you're saying, But but what's the ultimate business outcome that you're trying to drive? >>So when you think about projects that require data to again cut costs Teoh Artemia >>business process or drive new revenue initiatives, >>how long does it take to get from having access to the data to making it available? That duration for every time delay that is spent wasted trying to connect to data sources, trying to find subject matter experts that understand what the data means and can verify? It's quality, like all of those steps along those different teams and different disciplines introduces delay in delivering high quality data fat, though the business value of data ops is always associated with something that the business is trying to achieve but with a time element so if it's for every day, we don't have this data to make a decision where either making money or losing money, that's the value proposition of data ops. So it's about taking things that people are already doing today and figuring out the quickest way to do it through automation or work flows and just cutting through all the political barriers >>that often happens when these data's cross different organizational boundaries. >>Yes, sir, speed, Time to insights is critical. But in, you know, with Dev Ops, you really bringing together of the skill sets into, sort of, you know, one Super Dev or one Super ops. It sounds with data ops. It's really more about everybody understanding their role and having communication and line of sight across the entire organization. It's not trying to make everybody else, Ah, superhuman data person. It's the whole It's the group. It's the team effort, Really. It's really a team game here, isn't it? >>Well, that's a big part of it. So just like any type of practice, there's people, aspects, process, aspects and technology, right? So people process technology, and while you're you're describing it, like having that super team that knows everything about the data. The only way that's possible is if you have a common foundation of metadata. So we've seen a surgeons in the data catalog market in the last, you know, 67 years. And what what the what? That the innovation in the data catalog market has actually enabled us to be able >>to drive more data ops pipelines. >>Meaning as you identify data assets you captured the metadata capture its meaning. You capture information that can be shared, whether they're stakeholders, it really then becomes more of a essential repository for people don't really quickly know what data they have really quickly understand what it means in its quality and very quickly with the right proper authority, like privacy rules included. Put it to use >>for models, um, dashboards, operational processes. >>Okay. And we're gonna talk about some examples. And one of them, of course, is IBM's own internal example. But help us understand where you advise clients to start. I want to get into it. Where do I get started? >>Yeah, I mean, so traditionally, what we've seen with these large data management data governance programs is that sometimes our customers feel like this is a big pill to swallow. And what we've said is, Look, there's an operator. There's an opportunity here to quickly define a small project, align into high value business initiative, target something that you can quickly gain access to the data, map out these pipelines and create a squad of skills. So it includes a person with Dev ops type programming skills to automate an instrument. A lot of the technology. A subject matter expert who understands the data sources in it's meeting the line of business executive who translate bringing that information to the business project and associating with business value. So when we say How do you get started? We've developed A I would call it a pretty basic maturity model to help organizations figure out. Where are they in terms of the technology, where are they in terms of organizationally knowing who the right people should be involved in these projects? And then, from a process perspective, we've developed some pretty prescriptive project plans. They help you nail down. What are the data elements that are critical for this business business initiative? And then we have for each role what their jobs are to consolidate the data sets map them together and present them to the consumer. We find that six week projects, typically three sprints, are perfect times to be able to a timeline to create one of these very short, quick win projects. Take that as an opportunity to figure out where your bottlenecks are in your own organization, where your skill shortages are, and then use the outcome of that six week sprint to then focus on billing and gaps. Kick off the next project and iterating celebrate the success and promote the success because >>it's typically tied to a business value to help them create momentum for the next one. >>That's awesome. I want to get into some examples, I mean, or we're both Massachusetts based. Normally you'd be in our studio and we'd be sitting here for face to face of obviously with Kobe. 19. In this crisis world sheltering in place, you're up somewhere in New England. I happened to be in my studio, but I'm the only one here, so relate this to cove it. How would data ops, or maybe you have a, ah, a concrete example in terms of how it's helped, inform or actually anticipate and keep up to date with what's happening with both. >>Yeah, well, I mean, we're all experiencing it. I don't think there's a person >>on the planet who hasn't been impacted by what's been going on with this Cupid pandemic prices. >>So we started. We started down this data obscurity a year ago. I mean, this isn't something that we just decided to implement a few weeks ago. We've been working on developing the methodology, getting our own organization in place so that we could respond the next time we needed to be able todo act upon a data driven decision. So part of the step one of our journey has really been working with our global chief data officer, Interpol, who I believe you have had an opportunity to meet with an interview. So part of this year Journey has been working with with our corporate organization. I'm in a line of business organization where we've established the roles and responsibilities we've established the technology >>stack based on our cloud pack for data and Watson knowledge padlock. >>So I use that as the context. For now, we're faced with a pandemic prices, and I'm being asked in my business unit to respond very quickly. How can we prioritize the offerings that are going to help those in critical need so that we can get those products out to market? We can offer a 90 day free use for governments and hospital agencies. So in order for me to do that as a operations lead or our team, I needed to be able to have access to our financial data. I needed to have access to our product portfolio information. I needed to understand our cloud capacity. So in order for me to be able to respond with the offers that we recently announced and you'll you can take a look at some of the examples with our Watson Citizen Assistant program, where I was able to provide the financial information required for >>us to make those products available from governments, hospitals, state agencies, etcetera, >>that's a That's a perfect example. Now, to set the stage back to the corporate global, uh, the chief data office organization, they implemented some technology that allowed us to, in just data, automatically classify it, automatically assign metadata, automatically associate data quality so that when my team started using that data, we knew what the status of that information >>was when we started to build our own predictive models. >>And so that's a great example of how we've been partnered with a corporate central organization and took advantage of the automated, uh, set of capabilities without having to invest in any additional resources or head count and be able to release >>products within a matter of a couple of weeks. >>And in that automation is a function of machine intelligence. Is that right? And obviously, some experience. But you couldn't you and I when we were consultants doing this by hand, we couldn't have done this. We could have done it at scale anyway. It is it is it Machine intelligence and AI that allows us to do this. >>That's exactly right. And you know, our organization is data and AI, so we happen to have the research and innovation teams that are building a lot of this technology, so we have somewhat of an advantage there, but you're right. The alternative to what I've described is manual spreadsheets. It's querying databases. It's sending emails to subject matter experts asking them what this data means if they're out sick or on vacation. You have to wait for them to come back, and all of this was a manual process. And in the last five years, we've seen this data catalog market really become this augmented data catalog, and the augmentation means it's automation through AI. So with years of experience and natural language understanding, we can home through a lot of the metadata that's available electronically. We can calm for unstructured data, but we can categorize it. And if you have a set of business terms that have industry standard definitions through machine learning, we can automate what you and I did as a consultant manually in a matter of seconds. That's the impact that AI is have in our organization, and now we're bringing this to the market, and >>it's a It's a big >>part of where I'm investing. My time, both internally and externally, is bringing these types >>of concepts and ideas to the market. >>So I'm hearing. First of all, one of the things that strikes me is you've got multiple data, sources and data that lives everywhere. You might have your supply chain data in your er p. Maybe that sits on Prem. You might have some sales data that's sitting in a sas in a cloud somewhere. Um, you might have, you know, weather data that you want to bring in in theory. Anyway, the more data that you have, the better insights that you could gather assuming you've got the right data quality. But so let me start with, like, where the data is, right? So So it's it's anywhere you don't know where it's going to be, but you know you need it. So that's part of this right? Is being able >>to get >>to the data quickly. >>Yeah, it's funny. You bring it up that way. I actually look a little differently. It's when you start these projects. The data was in one place, and then by the time you get through the end of a project, you >>find out that it's moved to the cloud, >>so the data location actually changes. While we're in the middle of projects, we have many or even during this this pandemic crisis. We have many organizations that are using this is an opportunity to move to SAS. So what was on Prem is now cloud. But that shouldn't change the definition of the data. It shouldn't change. It's meaning it might change how you connect to it. It might also change your security policies or privacy laws. Now, all of a sudden, you have to worry about where is that data physically located? And am I allowed to share it across national boundaries right before we knew physically where it waas. So when you think about data ops, data ops is a process that sits on top of where the data physically resides. And because we're mapping metadata and we're looking at these data pipelines and automated work flows, part of the design principles are to set it up so that it's independent of where it resides. However, you have to have placeholders in your metadata and in your tool chain, where we're automating these work flows so that you can accommodate when the data decides to move. Because the corporate policy change >>from on prem to cloud. >>And that's a big part of what Data ops offers is the same thing. By the way, for Dev ops, they've had to accommodate building in, you know, platforms as a service versus on from the development environments. It's the same for data ops, >>and you know, the other part that strikes me and listening to you is scale, and it's not just about, you know, scale with the cloud operating model. It's also about what you were talking about is you know, the auto classification, the automated metadata. You can't do that manually. You've got to be able to do that. Um, in order to scale with automation, That's another key part of data office, is it not? >>It's a well, it's a big part of >>the value proposition and a lot of the part of the business case. >>Right then you and I started in this business, you know, and big data became the thing. People just move all sorts of data sets to these Hadoop clusters without capturing the metadata. And so as a result, you know, in the last 10 years, this information is out there. But nobody knows what it means anymore. So you can't go back with the army of people and have them were these data sets because a lot of the contact was lost. But you can use automated technology. You can use automated machine learning with natural, understand natural language, understanding to do a lot of the heavy lifting for you and a big part of data ops, work flows and building these pipelines is to do what we call management by exception. So if your algorithms say 80% confident that this is a phone number and your organization has a low risk tolerance, that probably will go to an exception. But if you have a you know, a match algorithm that comes back and says it's 99% sure this is an email address, right, and you have a threshold that's 98%. It will automate much of the work that we used to have to do manually. So that's an example of how you can automate, eliminate manual work and have some human interaction based on your risk threshold. >>That's awesome. I mean, you're right, the no schema on write said. I throw it into a data lake. Data Lake becomes a data swamp. We all know that joke. Okay, I want to understand a little bit, and maybe you have some other examples of some of the use cases here, but there's some of the maturity of where customers are. It seems like you've got to start by just understanding what data you have, cataloging it. You're getting your metadata act in order. But then you've got you've got a data quality component before you can actually implement and get yet to insight. So, you know, where are customers on the maturity model? Do you have any other examples that you can share? >>Yeah. So when we look at our data ops maturity model, we tried to simplify, and I mentioned this earlier that we try to simplify it so that really anybody can get started. They don't have to have a full governance framework implemented to to take advantage of the benefits data ops delivers. So what we did is we said if you can categorize your data ops programs into really three things one is how well do you know your data? Do you even know what data you have? The 2nd 1 is, and you trust it like, can you trust it's quality? Can you trust it's meeting? And the 3rd 1 is Can you put it to use? So if you really think about it when you begin with what data do you know, write? The first step is you know, how are you determining what data? You know? The first step is if you are using spreadsheets. Replace it with a data catalog. If you have a department line of business catalog and you need to start sharing information with the department's, then start expanding to an enterprise level data catalog. Now you mentioned data quality. So the first step is do you even have a data quality program, right. Have you even established what your criteria are for high quality data? Have you considered what your data quality score is comprised of? Have you mapped out what your critical data elements are to run your business? Most companies have done that for there. They're governed processes. But for these new initiatives And when you identify, I'm in my example with the covert prices, what products are we gonna help bring to market quickly? I need to be able to >>find out what the critical data elements are. And can I trust it? >>Have I even done a quality scan and have teams commented on it's trustworthiness to be used in this case, If you haven't done anything like that in your organization, that might be the first place to start. Pick the critical data elements for this initiative, assess its quality, and then start to implement the work flows to re mediate. And then when you get to putting it to use, there's several methods for making data available. One is simply making a gate, um, are available to a small set of users. That's what most people do Well, first, they make us spreadsheet of the data available, But then, if they need to have multiple people access it, that's when, like a Data Mart might make sense. Technology like data virtualization eliminates the need for you to move data as you're in this prototyping phase, and that's a great way to get started. It doesn't cost a lot of money to get a virtual query set up to see if this is the right join or the right combination of fields that are required for this use case. Eventually, you'll get to the need to use a high performance CTL tool for data integration. But Nirvana is when you really get to that self service data prep, where users can weary a catalog and say these are the data sets I need. It presents you a list of data assets that are available. I can point and click at these columns I want as part of my data pipeline and I hit go and automatically generates that output or data science use cases for it. Bad news, Dashboard. Right? That's the most mature model and being able to iterate on that so quickly that as soon as you get feedback that that data elements are wrong or you need to add something, you can do it. Push button. And that's where data obscurity should should bring organizations too. >>Well, Julie, I think there's no question that this covert crisis is accentuated the importance of digital. You know, we talk about digital transformation a lot, and it's it's certainly riel, although I would say a lot of people that we talk to we'll say, Well, you know, not on my watch. Er, I'll be retired before that all happens. Well, this crisis is accelerating. That transformation and data is at the heart of it. You know, digital means data. And if you don't have data, you know, story together and your act together, then you're gonna you're not gonna be able to compete. And data ops really is a key aspect of that. So give us a parting word. >>Yeah, I think This is a great opportunity for us to really assess how well we're leveraging data to make strategic decisions. And if there hasn't been a more pressing time to do it, it's when our entire engagement becomes virtual like. This interview is virtual right. Everything now creates a digital footprint that we can leverage to understand where our customers are having problems where they're having successes. You know, let's use the data that's available and use data ops to make sure that we can generate access. That data? No, it trust it, Put it to use so that we can respond to >>those in need when they need it. >>Julie Lockner, your incredible practitioner. Really? Hands on really appreciate you coming on the Cube and sharing your knowledge with us. Thank you. >>Thank you very much. It was a pleasure to be here. >>Alright? And thank you for watching everybody. This is Dave Volante for the Cube. And we will see you next time. >>Yeah, yeah, yeah, yeah, yeah

Published Date : May 28 2020

SUMMARY :

from the Cube Studios in Palo Alto and Boston connecting with thought leaders all around the world. portfolio really great to see you again. Great, great to be here. from the Data and AI team to the market. But it's now something that our customers are looking for to implement I mean, I think you know, I think you know what data ops really Similarly, where you have a data pipeline that associates a This is the data that I need to be able to deliver it. for and then put it to use. So it sounds like, you know, that the business is trying to achieve but with a time element so if it's for every you know, with Dev Ops, you really bringing together of the skill sets into, sort of, in the data catalog market in the last, you know, 67 years. Meaning as you identify data assets you captured the metadata capture its meaning. But help us understand where you advise clients to start. So when we say How do you get started? it's typically tied to a business value to help them create momentum for the next or maybe you have a, ah, a concrete example in terms of how it's helped, I don't think there's a person on the planet who hasn't been impacted by what's been going on with this Cupid pandemic Interpol, who I believe you have had an opportunity to meet with an interview. So in order for me to Now, to set the stage back to the corporate But you couldn't you and I when we were consultants doing this by hand, And if you have a set of business terms that have industry part of where I'm investing. Anyway, the more data that you have, the better insights that you could The data was in one place, and then by the time you get through the end of a flows, part of the design principles are to set it up so that it's independent of where it for Dev ops, they've had to accommodate building in, you know, and you know, the other part that strikes me and listening to you is scale, and it's not just about, So you can't go back with the army of people and have them were these data I want to understand a little bit, and maybe you have some other examples of some of the use cases So the first step is do you even have a data quality program, right. And can I trust it? able to iterate on that so quickly that as soon as you get feedback that that data elements are wrong And if you don't have data, you know, Put it to use so that we can respond to Hands on really appreciate you coming on the Cube and sharing Thank you very much. And we will see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JuliePERSON

0.99+

Julie LocknerPERSON

0.99+

IBMORGANIZATION

0.99+

Dave VolantePERSON

0.99+

New EnglandLOCATION

0.99+

90 dayQUANTITY

0.99+

99%QUANTITY

0.99+

80%QUANTITY

0.99+

MassachusettsLOCATION

0.99+

Data MartORGANIZATION

0.99+

first stepQUANTITY

0.99+

98%QUANTITY

0.99+

Palo AltoLOCATION

0.99+

BostonLOCATION

0.99+

67 yearsQUANTITY

0.99+

six weekQUANTITY

0.99+

Cube StudiosORGANIZATION

0.99+

bothQUANTITY

0.99+

oneQUANTITY

0.99+

a year agoDATE

0.99+

firstQUANTITY

0.98+

Dev OpsORGANIZATION

0.98+

2nd 1QUANTITY

0.97+

OneQUANTITY

0.97+

FirstQUANTITY

0.97+

InterpolORGANIZATION

0.97+

one placeQUANTITY

0.97+

each roleQUANTITY

0.97+

HadoopTITLE

0.95+

KobePERSON

0.95+

SASORGANIZATION

0.95+

Cupid pandemicEVENT

0.94+

todayDATE

0.93+

3rd 1QUANTITY

0.93+

this yearDATE

0.93+

few weeks agoDATE

0.88+

PremORGANIZATION

0.87+

last five yearsDATE

0.87+

2020DATE

0.85+

three sprintsQUANTITY

0.81+

one SuperQUANTITY

0.8+

NirvanaORGANIZATION

0.79+

CubanORGANIZATION

0.77+

three thingsQUANTITY

0.76+

pandemicEVENT

0.74+

step oneQUANTITY

0.71+

one of themQUANTITY

0.7+

last 10 yearsDATE

0.69+

Dev OpsTITLE

0.69+

Teoh ArtemiaORGANIZATION

0.68+

CognosORGANIZATION

0.61+

Watson Citizen AssistantTITLE

0.6+

Dev opsTITLE

0.6+

CubeCOMMERCIAL_ITEM

0.57+

opsORGANIZATION

0.54+

weeksQUANTITY

0.48+

CubeORGANIZATION

0.47+

coupleQUANTITY

0.47+

WatsonTITLE

0.42+

Victoria Stasiewicz, Harley-Davidson Motor Company | IBM DataOps 2020


 

from the cube studios in Palo Alto in Boston connecting with thought leaders all around the world this is a cube conversation hi everybody this is Dave Volante and welcome to this special digital cube presentation sponsored by IBM we're going to focus in on data op data ops in action a lot of practitioners tell us that they really have challenges operationalizing in infusing AI into the data pipeline we're going to talk to some practitioners and really understand how they're solving this problem and really pleased to bring Victoria stayshia vich who's the Global Information Systems Manager for information management at harley-davidson Vik thanks for coming to the cube great to see you wish we were face to face but really appreciate your coming on in this manner that's okay that's why technology's great right so you you are steeped in a data role at harley-davidson can you describe a little bit about what you're doing and what that role is like definitely so obviously a manager of information management >> governance at harley-davidson and what my team is charged with is building out data governance at an enterprise level as well as supporting the AI and machine learning technologies within my function right so I have a portfolio that portfolio really includes DNA I and governance and also our master data and reference data and data quality function if you're familiar with the dama wheel of course what I can tell you is that my team did an excellent job within this last year in 2019 standing up the infrastructure so those technologies right specific to governance as well as their newer more modern warehouse on cloud technologies and cloud objects tour which also included Watson Studio and Watson Explorer so many of the IBM errs of the world might hear about obviously IBM ISEE or work on it directly we stood that up in the cloud as well as db2 warehouse and cloud like I said in cloud object store we spent about the first five months of last year standing that infrastructure up working on the workflow ensuring that access security management was all set up and can within the platform and what we did the last half of the year right was really start to collect that metadata as well as the data itself and bring the metadata into our metadata repository which is rx metadata base without a tie FCE and then also bring that into our db2 warehouse on cloud environment so we were able to start with what we would consider our dealer domain for harley-davidson and bring those dimensions within to db2 warehouse on cloud which was never done before a lot of the information that we were collecting and bringing together for the analytics team lived in disparate data sources throughout the enterprise so the goal right was to stop with redundant data across the enterprise eliminate some of those disparity to source data resources right and bring it into a centralized repository for reporting okay Wow we got a lot to unpack here Victoria so but let me start with sort of the macro picture I mean years ago you see the data was this thing that had to be managed and it still does but it was a cost was largely a liability you know governance was sort of front and center sometimes you know it was the tail that wagged the value dog and then the whole Big Data movement comes in and everybody wants to be data-driven and so you saw some pretty big changes in just the way in which people looked at data they wanted to you know mine that data and make it an asset versus just a straight liability so what what are the changes that you discerned in in data and in your organization over the last let's say half a decade we to tell you the truth we started looking at access management and the ability to allow some of our users to do some rapid prototyping that they could never do before so what more and more we're seeing as far as data citizens or data scientists right or even analysts throughout most enterprises is it well they want access to the information they want it now they want speed to insight at this moment using pretty much minimal Viable Product they may not need the entire data set and they don't want to have to go through leaps and bounds right to just get access to that information or to bring that information into necessarily a centralized location so while I talk about our db2 warehouse on cloud and that's an excellent example of one we actually need to model data we know that this is data that we trust right that's going to be called upon many many times from many many analysts right there's other information out there that people are collecting because there's so much big data right there's so many ways to enrich your data within your organization for your customer reporting the people are really trying to tap into those third-party datasets so what my team has done what we're seeing right change throughout the industry is that a lot of teams and a lot of enterprises are looking at s technologists how can we enable our scientists and our analysts right the ability to access data virtually so instead of repeating right recuperating redundant data sources we're actually ambling data virtualization at harley-davidson and we've been doing that first working with our db2 warehouse on cloud and connecting to some of our other trusted versions of data warehouses that we have throughout the enterprise that being our dealer warehouse as well to enable obviously analysts to do some quick reporting without having to bring all that data together that is a big change I see the fact that we were able to tackle that that's allowed technology to get back ahead because most backup Furnish say most organizations right have given IT the bad rap wrap up it takes too long to get what we need my technologists cannot give me my data at my fingertips in a timely manner to not allow for speed to insight and answers the business questions at point of time of delivery most and we've supplied data to our analysts right they're able to calculate aggregate brief the reporting metrics to get those answers back to the business but they're a week two weeks too late the information is no longer relevant so data virtualization through data Ops is one of the ways and we've been able to speed that up and act as a catalyst for data delivery but we've also done though and I see this quite a bit is well that's excellent we still need to start classifying our information and labeling that at the system level we've seen most most enterprises right I worked at Blue Cross as well with IBM tool had the same struggle they were trying to eliminate their technology debt reduce their spend reduce the time it takes for resources working on technologies to maintain technologies they want to reduce their their IT portfolio of assets and capabilities that they license today so what do they do to do that it's time to start taking a look at what systems should be classified as essential systems versus those systems that are disparate and could be eliminated and that starts with data governance right so okay so your your main focus is on governance and you talked about real people want answers now they don't want to have to wait they don't want to go big waterfall process so what was what would you say was sort of some of the top challenges in terms of just operationalizing your data pipelining getting to the point that you are today you know I have to be quite honest um standing up the governance framework the methodology behind it right to get it data owners data stewards at a catalog established that was not necessarily the heavy lifting the heavy lifting really came with I'm setting up a brand new infrastructure in the cloud for us to be quite honest um we with IBM partnered and said you know what we're going to the cloud and these tools had never been implemented in the cloud before we were kind of the first do it so some of the struggles that we aren't they or took on and we're actually um standing up the infrastructure security and access management network pipeline access right VPN issues things of that nature I would say is some of the initial roadblocks we went through but after we overcame those challenges with the help of IBM and the patience of both the Harley and IBM team it became quite easy to roll out these technologies to other users the nice thing is right we at harley-davidson have been taking the time to educate our users today up for example we had what we call the data bytes a Lunch and Learn and so in that Lunch and Learn what we did is we took our entire GIS team our global information services team which is all of IT through these new technologies it was a form of over 250 people with our CIO and CTO on and taking them through how do we use these tools what are the purpose of schools why do we need governance to maintain these pools why is metadata management important to the organization that piece of it seems to be much easier than just our initial scanning it up so it's good enough to start letting users in well sounds like you had real sponsorship from from leadership and input from leadership and they were kind of leaning into the whole process first of all is that true and how important is that for success oh it's essential we often said when we were first standing up the tools to be quite honest is our CIO really understand what it is that were for standing up as our CIO really understand governance because we didn't have the time to really get that face-to-face interaction with our leadership so I myself made it a mandate having done this previously at Blue Cross to get in front of my CIO and my CTO and educate them on what it is we are exactly standing up and once we did that it was very easy to get at an executive steering committee as well as an executive membership Council right I'm boarded with our governance council and now they're the champions of that it's never easy that was selling governance to leadership and the ROI is never easy because it's not something that you can easily calculate it's something that has to show its return on investment over time and that means that you're bringing dashboards you're educating your CIO and CTO and how you're bringing people together how groups are now talking about solutions and technologies in a domain like environment right where you have people from at an international level we have people from Asia from Europe from China that join calls every Thursday to talk about the data quality issue specific to dealer for example what systems were using what solutions on there are on the horizon to solve them so that now instead of having people from other countries that work for Harley as well as just even within the US right creating one-off solutions that are answering the same business questions using the same data but creating multiple solutions right to solve the same problem we're now bringing them together and we're solving together and we're prioritizing those as well so that return on investment necessarily down the line you can show that is you know what instead of this printing into five projects we've now turned this into one and instead of implementing four systems we've now implemented one and guess what we have the business rules and we have the classification I to this system so that you CIO or CTO right you now go in and reference this information a glossary a user interface something that a c-level can read interpret understand quickly write dissect the information for their own need without having to take the long lengthy time to talk to a technologist about what does this information mean and how do i how do I use it you know what's interesting is take away based on what you just said is you know harley-davidson is an iconic brand cool company with fuckin motorcycles right and but you came out of an insurance background which is a regulated industry where you know governance is sort of de rigueur right I mean it's it's a table steak so how are you able that arleigh to balance the sort of tension between governance and the sort of business flexibility so there's different there's different lovers I would call them right obviously within healthcare in insurance the importance becomes compliance and risk and regulatory right they're big pushes gosh I don't want to pay millions of dollars for fines start classifying this information enabling security reducing risk all that good stuff right for Harley Davidson it was much different it was more or less we have a mission right we want to invest in our technologies yet we want to save money how do we cut down the technologies that we have today reduce our technology spend yet and able our users have access to more information in a timely manner that's not an easy that's not an easy pass right um so what we did is I took that my married governance part-time model and our time model is specific worried they're gonna tolerate an application we're going to invest in an application we're gonna migrate an application or we're gonna eliminate that so I'm talking to my CIO said you know we can use governance the classifier system help act as a catalyst when we start to implement what it is we're doing with our technologies which technologies are we going to eliminate tomorrow we as IG cannot do that unless we discuss some sort of business impact unless you look at a system and say how many users are using us what reports are essential the business teams do they need this system is this something that's critical for users today to eat is this duplicate 'iv right we have many systems that are solving the same capability that is how I sold that off my CIO and it made it important to the rest of the organization they knew we had a mandate in front of us we had to reduce technology spend and that really for me made it quite easy and talking to other technologists as well as business users on why if governance is important why it's going to help harley-davidson and their mission to save money going forward I will tell you though that the businesses of biggest value right is the fact that they now owns the data they're more likely right to use your master data management systems like I said I'm the owner of our MDM services today as well as our customer knowledge center today they're more likely to access and reference those systems if they feel that they built the rule and they own the rules in those systems so that's another big value add to write as many business users will say ok you know you think I need access to this system I don't know I'm not sure I don't know what the data looks like within it is it easily accessible is it gonna give me the reporting metrics that I need that's where governance will help them for example like our state a scientist beam using a catalog right you can browse your metadata you can look at your server your database your tables your fields understand what those mean understand the classifications the formulas within them right they're all documented in a glossary versus having to go and ask for access to six different systems throughout the enterprise hoping right that's Sally next few that told you you needed access to these systems was right just to find out that you don't need the access and hence it took you three days to get the access anyway that's why a glossary is really a catalyst a lot of that well it's really interesting what you just said about you went through essentially an application rationalization exercise which which saved your organization money that's not always easy because you know businesses even though the you know IIT may be spending money on these systems businesses don't want to give them up but you were able to use it sounds like you're able to use data to actually inform which applications you should invest in versus you know sunset as well you'd sounds like you were giving the business a real incentive to go through this exercise because they ended up as you said owning the data well then what's great right who wants pepper what's using the old power and driving a new car if they can buy the I'm sorry bull owning the old car right driving the old park if they can truly own a new car for a cheaper price nobody wants to do that I've even looked at Tesla's right I can buy a Tesla for the same prices I can buy a minivan these days I think I might buy the Tesla but what I will say is that we also use that we built out a capabilities model with our enterprise architecture team and building that capabilities model we started to bucket our technologies within those capabilities models right like AI machine learning warehouse on cloud technologies are even warehousing technologies governance technologies you know those types of classifications today integrations technologies reporting technologies by kind of grouping all those into a capabilities matrix right and was Eve it was easy for us to then start identifying alright we're the system owners for these when it comes to technologies who are the business users for these based on that right let's go talk to this team the dealer management team about access to this new profiling capability with an IBM or this new catalog with an IBM right that they can use stay versus this sharepoint excel spreadsheets they were using for their metadata management right or the profiling tools that were old you know ten years old some of our sa peoples that they were using before right let's sell them on the noodles and start migrating them that becomes pretty easy because I mean unless you're buying some really old technology when you give people a purview into those new tools and those new capabilities especially with some of the IBM's new tools we have today there the buy-in is pretty quick it's pretty easy to sell somebody on something shiny and it's much easier to use than some of the older technologies let's talk about the business impact in my understanding is you were trying to increase the improve the effectiveness of the dealers not not just go out and brute force sign up more dealers were you able to achieve that outcome and what does it meant for your business yes actually we were so right now what we did is we slipped something called a CDR and that's our consumer dealer and development repository right that's where a lot of our dealer information resides today it's actually argue ler warehouse we had some other systems that we're collecting that information Kalinin like speed for example we were able to bring all that reporting man to one location sunset some of those other technologies but then also enable for that centralized reporting layer which we've also used data virtualization to start to marry submit information to db2 warehouse on cloud for users so we're allowing basically those that want to access CDR and our db2 warehouse and called dealer information to do that within one reporting layer um in doing so we were able to create something called a dealer harmonized ID really which is our version of we have so many dealers today right and some of those dealers actually sell bytes some of those dealers sell just apparel material some of those dealers just sell parts of those dealers right can we have certain you IDs kind of a golden record mastered information if you will right bought back in reporting so that we can accurately assess the dealer performance up to two years ago right it was really hard to do that we had information spread out all over it was really hard to get a good handle on what dealers were performing and what dealers weren't because was it was tough right for our analysts to wrangle that information and bring it together it took time many times we you would get multiple answers to one business question which is never good right one one question should have one answer if it's accurate um that is what we worked on within us last year and that's where really our CEO so the value at is now we can start to act on what dealers are performing at an optimal level versus what dealers are struggling and that's allowed even our account reps or field steel fields that right to go work with those struggling dealers and start to share with them the information of you know these are what some of our stronger dealer performing dealers are doing today that is making them more affecting it inside sorry effective is selling bikes you know these are some of the best practices you can implement that's where we make right our field staff smarter and our dealers smarter we're not looking to shut down dealers we just want to educate them on how to do better well and to your point about a single version of the truth if you will the the lines of business kind of owning their own data that's critical because you're not spending all your time you know pointing at fingers trying to understand the data if the if the users own it then they own it I and so how does self-service fit in were you able to achieve you know some level of self-service how far could you and you go there we were we did use some other tools I'll be quite honest aside from just the IBM tools today that's enabled some of that self-service analytics si PSAC was one of them Alteryx is another big one that we like to that our analyst team likes to use today to wrangle and bring that data together but that really allowed for our analysts spread in our reporting teams to start to build their own derivations their transformations for reporting themselves because they're more user interface space versus going in the backend systems and having to write straight pull right sequel queries things of that nature it usually takes time then requires a deeper level of knowledge then what we'd like to allow for our analysts right to have today I can say the same thing with the data scientist scheme you know they use a lot of the R and Python coding today what we've tried to do is make sure that the tools are available so that they can do everything they need to do without us really having to touch anything and I will be quite honest we have not had to touch much of anything we have a very skilled data scientist team so I will tell you that the tools that we put in place today Watson explore some of the other tools as well they haven't that has enabled the data scientists to really quickly move do what they need to do for reporting and even in cases where maybe Watson or Explorer may not be the optimal technology right for them to use we've also allowed for them to use some of our other resources are open source resources to build some of the models that they're that they were looking to build well I'm glad you brought that up Victoria because IBM makes a big deal out of you know being open and so you're kind of confirming that you can use third-party tools and and if you like you know tool vendor ABC you can use them as part of this framework yeah it's really about TCO right so take a look at what you have today if it's giving you at least 80% of what you need for the business or for your data scientists or reporting analysts right to do what they need to do it's to me it's good enough right it's giving you what you need it's pretty hard to find anything that's exactly 100 percent it's about being open though to when you're scientists or your analysts find another reporting tool right that requires minimal maintenance or let's just say did a scientist flow that requires minimal maintenance it's free right because it's open source IBM can integrate with that and we can enable that to be a quicker way for them to do what they need to do versus telling them no right you can't use the other technologies or the other open source information out there for you today you've got to use just these spools that's pretty tough to do and I think that would shut most IT shops down pretty quick within larger enterprises because it would really act as a roadblock to allow most of our teams right to do what they need to do reporting well last question so a big part of this the data ops you know borrowing from DevOps is this continuous integration continuous improvement you know kind of ongoing MOOC raising the bar if you will what do you see going from here oh I definitely see I see a world I see a world of where we're allowing for that rapid prototyping like I was talking about earlier I see a very big change in the data industry you said it yourself right we are in the brink of big data and it's only gonna get bigger there are organizations right right now that have literally understood how much of an asset their data really is today but they're starting to sell their data ah to other of their similar people are smaller industries right similar vendors within the industry similar spaces right so they can make money off of it because data truly is an asset now the key to it that was obviously making sure that it's curated that it's cleanse that it's rusted so that when you are selling that back you can't really make money off of it but we've seen though and what I really see on the horizon is the ability to vet that data right is in the past what have you been doing the past decade or just buying big data sets we're trusting that it's you know good information we're not doing a lot of profiling at most organizations arts you're gonna pay this big top dollar you're gonna receive this third-party data set and you're not gonna be able to use it the way you need to what I see on the horizon is us being able to do that you know we're building data Lake houses if you will right we're building um really those Hadoop link environments those data lakes right where we can land information we can quickly access it we can quickly profile it with tools that it would take hours for an ALICE write a bunch of queries do to understand what the profile of that data look like we did that recently at harley-davidson we bought and some third-party data evaluated it quickly through our agile scrum team right within a week we determined that the data was not as good as it as the vendor selling it right pretty much sold it to be and so we told the vendor we want our money back the data is not what we thought it would be please take the data sets back now that's just one use case right but to me that was golden it's a way to save money and start betting the data that we're buying otherwise what I would see in the past or what I've seen in the past is many organizations are just buying up big third-party data sets and just saying okay now it's good enough we think that you know just because it comes from the motorcycle and council right for motorcycles and operation Council then it's good enough it may not be it's up to us to start vetting that and that's where technology is going to change data is going to change analytics is going to change is a great example you're really in the cutting edge of this whole data op trend really appreciate you coming on the cube and sharing your insights and there's more in the crowd chatter crowd chatter off the Thank You Victoria for coming on the cube well thank you Dave nice to meet you it was a pleasure speaking with you yeah really a pleasure was all ours and thank you for watching everybody as I say crowd chatting at flash data op or more detail more Q&A this is Dave Volante for the cube keep it right there but right back right after this short break [Music]

Published Date : May 28 2020

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
Dave VolantePERSON

0.99+

AsiaLOCATION

0.99+

IBMORGANIZATION

0.99+

five projectsQUANTITY

0.99+

Victoria StasiewiczPERSON

0.99+

ChinaLOCATION

0.99+

TeslaORGANIZATION

0.99+

VictoriaPERSON

0.99+

HarleyORGANIZATION

0.99+

Harley DavidsonORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

Blue CrossORGANIZATION

0.99+

Blue CrossORGANIZATION

0.99+

EuropeLOCATION

0.99+

DavePERSON

0.99+

USLOCATION

0.99+

Harley-Davidson Motor CompanyORGANIZATION

0.99+

harley-davidsonPERSON

0.99+

six different systemsQUANTITY

0.99+

Dave VolantePERSON

0.99+

last yearDATE

0.99+

over 250 peopleQUANTITY

0.99+

todayDATE

0.99+

three daysQUANTITY

0.99+

100 percentQUANTITY

0.99+

IGORGANIZATION

0.99+

WatsonTITLE

0.99+

BostonLOCATION

0.99+

tomorrowDATE

0.98+

one business questionQUANTITY

0.98+

firstQUANTITY

0.98+

ABCORGANIZATION

0.98+

one answerQUANTITY

0.97+

four systemsQUANTITY

0.97+

oneQUANTITY

0.97+

Victoria stayshiaPERSON

0.96+

Watson ExplorerTITLE

0.96+

ExplorerTITLE

0.96+

2019DATE

0.96+

agileORGANIZATION

0.95+

VikPERSON

0.95+

two years agoDATE

0.95+

one questionQUANTITY

0.95+

two weeksQUANTITY

0.94+

bothQUANTITY

0.93+

excelTITLE

0.93+

SallyPERSON

0.92+

a weekQUANTITY

0.92+

harleyORGANIZATION

0.91+

Watson StudioTITLE

0.91+

last half of the yearDATE

0.89+

AlteryxORGANIZATION

0.88+

millions of dollarsQUANTITY

0.87+

single versionQUANTITY

0.86+

every ThursdayQUANTITY

0.86+

RTITLE

0.85+

Ritika Gunnar, IBM | IBM Think 2020


 

>>Yeah, >>from the Cube Studios in Palo Alto and Boston. It's the Cube covering IBM. Think brought to you by IBM. >>Everybody, this is Dave Vellante of the Cube. Welcome back. The continuous coverage that we're running here of the IBM Think Digital 2020 Experience. I'm with Radica Gunnar, who is a longtime Cube alum. She's the vice president for Data and AI. Expert labs and learning Radica. Always a pleasure. I wish we were seeing each other face to face in San Francisco. But, you know, we have to make the best. >>Always a pleasure to be with you, Dave. >>So, listen, um, we last saw each other in Miami Attain IBM data event. You hear a lot of firsts in the industry. You hear about Cloud? First, you hear about data. First hear about AI first. I'm really interested in how you see AI first coming customers. They want to operationalize ai. They want to be data first. They see cloud, you know, is basic infrastructure to get there, but ultimately they want insights out of data. And that's where AI comes in. What's your point of view on this? >>I think any client that's really trying to establish how to be able to develop a AI factory in their organization so that they're embedding AI across the most pervasive problems that they have in their order. They need to be able to start first with the data. That's why we have the AI ladder, where we really think the foundation is about how clients organized there to collect their data, organize their data, analyze it, infuse it in the most important applications and, of course, use that whole capability to be able to modernize what they're doing. So we all know to be able to have good ai, you need a good foundational information, architecture and the US A lot of the first steps we have with our clients is really starting with data doing an analysis of where are you with the data maturity? Once you have that, it becomes easier to start applying AI and then to scale AI across the business. >>So unpack that a little bit and talk about some of the critical factors and the ingredients that are really necessary to be successful. What are you seeing with customers? >>Well, to be successful with, a lot of these AI projects have mentioned. It starts with the data, and when we come to those kind of characteristics, you would often think that the most important thing is the technology. It's not that is a myth. It's not the reality. What we found is some of the most important things start with really understanding and having a sponsor who understands the importance of the AI capabilities that you're trying to be able to drive through business. So do you have the right hunger and curiosity of across your organization from top to bottom to really embark on a lot of these AI project? So that's cultural element. I would say that you have to be able to have that in beds within it, like the skills capabilities that you need to be able to have, not just by having the right data scientists or the right data engineers, but by having every person who is going to be able to touch these new applications and to use these new applications, understand how AI is going to impact them, and then it's really about the process. You know, I always talk about AI is not a thing. It's an ingredient that makes everything else better, and that means that you have to be able to change your processes. Those same applications that had Dev ops process is to be able to put it in production. Need to really consider what it means to have something that's ever changing, like AI as part of that which is also really critical. So I think about it as it is a foundation in the data, the cultural changes that you need to have from top to bottom of the organization, which includes the skills and then the process components that need to be able to change. >>Do you really talking about like Dev ops for AI data ops, I think is a term that's gonna gaining popularity of you guys have applied some of that in internally. Is that right? >>Yeah, it's about the operations of the AI life cycle in, and how you can automate as much of that is possible by AI. They're as much as possible, and that's where a lot of our investments in the Data and AI space are going into. How do you use AI for AI to be able to automate that whole AI life site that you need to be able to have in it? Absolutely >>So I've been talking a lot of C. XO CEO CEOs. We've held some C so and CEO roundtables with our data partner ET are. And one of the things that's that's clear is they're accelerating certain things as a result of code 19. There's certainly much more receptive to cloud. Of course, the first thing you heard from them was a pivot to work from home infrastructure. Many folks weren't ready, so okay, but the other thing that they've said is even in some hard hit industries, we've essentially shut down all spending, with the exception of very, very critical things, including, interestingly, our digital transformation. And so they're still on that journey. They realized the strategic imperative. Uh, and they don't want to lose out. In fact, they want to come out of this stronger AI is a critical part of that. So I'm wondering what you've seen specifically with respect to the pandemic and customers, how they're approaching ai, whether or not you see it accelerating or sort of on the same track. What are you seeing out there with clients? >>You know, this is where, um in pandemics In areas where, you know, we face a lot of uncertainty. I am so proud to be an IBM. Er, um, we actually put out offer when the pandemic started in a March timeframe. Teoh Many of our organizations and communities out there to be able to use our AI technologies to be able to help citizens really understand how Kobe 19 was gonna affect them. What are the symptoms? Where can I get tested? Will there be school tomorrow? We've helped hundreds of organizations, and not only in the public sector in the healthcare sector, across every sector be able to use AI capabilities. Like what we have with Watson assistant to be able to understand how code in 19 is impacting their constituents. As I mentioned, we have hundreds of them. So one example was Children's health care of Atlanta, where they wanted to be able to create an assistant to be able to help parents really understand what symptoms are and how to handle diagnosis is so. We have been leveraging a lot of AI technologies, especially right now, to be able to help, um, not just citizens and other organizations in the public and healthcare sector, but even in the consumer sector, really understand how they can use AI to be able to engage with their constituents a lot more closely. That's one of the areas where we have done quite a bit of work, and we're seeing AI actually being used at a much more rapid rate than ever >>before. Well, I'm excited about this because, you know, we were talking about the recovery, What there's a recovery look like is it v shaped? Nobody really expects that anymore. But maybe a U shaped. But the big concern people have, you know, this w shape recovery. And I'm hopeful that machine intelligence and data can be used to just help us really understand the risks. Uh, and then also getting out good quality information. I think it's critical. Different parts of the country in the world are gonna open at different rates. We're gonna learn from those experiences, and we need to do this in near real time. I mean, things change. Certainly there for a while they were changing daily. They kind of still are. You know, maybe we're on a slower. Maybe it's three or four times a week now, but that pace of change is critical and, you know, machine machines and the only way to keep up with that wonder if you could comment. >>Well, machines are the only way to keep, and not only that, but you want to be able to have the most up to date relevant information that's able to be communicated to the masses and ways that they can actually consume that data. And that's one of the things that AI and one of the assistant technologies that we have right now are able to do. You can continually update and train them such that they can continually engage with that end consumer and that end user and be able to give them the answers they want. And you're absolutely right, Dave. In this world, the answers change every single day and that kind of workload, um, and and the man you can't leave that alone to human laborers. Even human human labors need an assistant to be able to help them answer, because it's hard for them to keep up with what the latest information is. So using AI to be able to do that, it's absolutely critical, >>and I want to stress that I said machines you can't do without machines. And I believe that, but machines or a tool for humans to ultimately make the decisions in a crisis like this because, you see, I mean, I know we have a global audience, but here in the United States, you got you have 50 different governors making decisions about when and how certainly the federal government putting down guidelines. But the governor of Georgia is going to come back differently than the governor of New York, Different from the governor of California. They're gonna make different decisions, and they need data. And AI and Machine intelligence will inform that ultimately their public policy is going to be dictated by a combination of things which obviously includes, you know, machine intelligence. >>Absolutely. I think we're seeing that, by the way, I think many of those governors have made different decisions at different points, and therefore their constituents need to really have a place to be able to understand that as well. >>You know, you're right. I mean, the citizens ultimately have to make the decision while the governor said sick, safe to go out. You know, I'm gonna do some of my own research and you know, just like if you're if you're investing in the stock market, you got to do your own research. It's your health and you have to decide. And to the extent that firms like IBM can provide that data, I think it's critical. Where does the cloud fit in all this? I mentioned the cloud before. I mean, it seems to be critical infrastructure to get information that will talk about >>all of the capabilities that we have. They run on the IBM cloud, and I think this is where you know, when you have data that needs to be secured and needs to be trusted. And you need these AI capabilities. A lot of the solutions that I talked about, the hundreds of implementations that we have done over the past just six weeks. If you kind of take a look at 6 to 8 weeks, all of that on the IBM Public cloud, and so cloud is the thing that facilitates that it facilitates it in a way where it is secure. It is trusted, and it has the AI capabilities that augmented >>critical. There's learning in your title. Where do people go toe? Learn more How can you help them learn about AI And I think it started or keep going? >>Well, you know, we think about a lot of these technologies as it isn't just about the technology. It is about the expertise and the methodologies that we bring to bear. You know, when you talk about data and AI, you want to be able to blend the technology with expertise. Which is why are my title is expert labs that come directly from the labs and we take our learnings through thousands of different clients that we have interacted with, working with the technologies in the lab, understanding those outcomes and use cases and helping our clients be successful with their data and AI projects. So we that's what we do That's our mission. Love doing that every day. >>Well, I think this is important, because I mean, ah company, an organization the size of IBM, a lot of different parts of that organization. So I would I would advise our audience the challenge IBM and say, Okay, you've got that expertise. How are you applying that expertise internally? I mean, I've talked into public Sorry about how you know the data. Science is being applied within IBM. How that's then being brought out to the customers. So you've actually you've got a Petri dish inside this massive organization and it sounds like, you know, through the, you know, the expert labs. And so the Learning Center's you're sort of more than willing to and aggressively actually sharing that with clients. >>Yeah, I think it's important for us to not only eat our own dog food, so you're right. Interpol, The CDO Office Depot office we absolutely use our own technology is to be able to drive the insights we need for our large organization and through the learnings that we have, not only from ourselves but from other clients. We should help clients, our clients and our communities and organizations progress their use of their data and their AI. We really firmly believe this is the only way. Not only these organizations will progress that society as a whole breast, that we feel like it's part of our mission, part of our duty to make sure that it isn't just a discussion on the technology. It is about helping our clients and the community get to the outcomes that they need to using ai. >>Well, guy, I'm glad you invoke the dog food ing because, you know, we use that terminology a lot. A lot of people marketing people stepped back and said, No, no, it's sipping our champagne. Well, to get the champagne takes a lot of work, and the grapes at the early stages don't taste that pain I have to go through. And so that's why I think it's a sort of an honest metaphor, but critical your you've been a friend of the Cube, but we've been on this data journey together for many, many years. Really appreciate you coming on back on the Cube and sharing with the think audience. Great to see you stay safe. And hopefully we'll see you face to face soon. >>All right. Thank you. >>Alright. Take care, my friend. And thank you for watching everybody. This is Dave Volante for the Cube. You're watching IBM think 2020. The digital version of think we'll be right back after this short break. >>Yeah, yeah, yeah.

Published Date : May 7 2020

SUMMARY :

Think brought to you by IBM. you know, we have to make the best. They see cloud, you know, is basic infrastructure to get there, know to be able to have good ai, you need a good foundational information, that are really necessary to be successful. and that means that you have to be able to change your processes. gonna gaining popularity of you guys have applied some of that in internally. to be able to automate that whole AI life site that you need to be able to have in it? Of course, the first thing you heard from them and communities out there to be able to use our AI technologies to be able But the big concern people have, you know, this w shape recovery. Well, machines are the only way to keep, and not only that, but you want to be able to have the most up to date relevant But the governor of Georgia is going to come back differently than the governor of at different points, and therefore their constituents need to really have a place to be able to understand that I mean, it seems to be critical infrastructure to get information that will and I think this is where you know, when you have data that needs to be secured and needs to be Learn more How can you help them learn about It is about the expertise and the methodologies that we bring to bear. and it sounds like, you know, through the, you know, the expert labs. It is about helping our clients and the community get to the outcomes that they need to Great to see you stay safe. And thank you for watching everybody.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

San FranciscoLOCATION

0.99+

DavePERSON

0.99+

Dave VellantePERSON

0.99+

Dave VolantePERSON

0.99+

Palo AltoLOCATION

0.99+

Ritika GunnarPERSON

0.99+

hundredsQUANTITY

0.99+

BostonLOCATION

0.99+

United StatesLOCATION

0.99+

MiamiLOCATION

0.99+

threeQUANTITY

0.99+

6QUANTITY

0.99+

tomorrowDATE

0.99+

thousandsQUANTITY

0.99+

FirstQUANTITY

0.99+

AtlantaLOCATION

0.99+

MarchDATE

0.99+

50 different governorsQUANTITY

0.99+

CDOORGANIZATION

0.99+

8 weeksQUANTITY

0.99+

InterpolORGANIZATION

0.99+

code 19OTHER

0.99+

Radica GunnarPERSON

0.98+

oneQUANTITY

0.97+

first stepsQUANTITY

0.97+

firstQUANTITY

0.96+

New YorkLOCATION

0.94+

pandemicEVENT

0.94+

hundreds of organizationsQUANTITY

0.94+

USLOCATION

0.93+

firstsQUANTITY

0.9+

Cube StudiosORGANIZATION

0.88+

four times a weekQUANTITY

0.87+

CubeCOMMERCIAL_ITEM

0.87+

ETORGANIZATION

0.86+

ThinkCOMMERCIAL_ITEM

0.85+

one exampleQUANTITY

0.84+

six weeksQUANTITY

0.84+

Dev opsTITLE

0.82+

IBM Think Digital 2020 ExperienceEVENT

0.82+

CaliforniaLOCATION

0.81+

WatsonTITLE

0.8+

first thingQUANTITY

0.8+

pandemicsEVENT

0.77+

GeorgiaLOCATION

0.74+

governorPERSON

0.74+

C. XOPERSON

0.7+

RadicaPERSON

0.7+

single dayQUANTITY

0.69+

Kobe 19COMMERCIAL_ITEM

0.59+

think 2020TITLE

0.56+

implementationsQUANTITY

0.54+

DepotORGANIZATION

0.49+

CubeORGANIZATION

0.4+

2020TITLE

0.32+

19TITLE

0.31+

Rob Thomas, IBM | IBM Think 2020


 

>>From the cube studios in Palo Alto in Boston. It's the cube covering the IBM thing brought to you by IBM. We're back and this is Dave Vellante and you're watching the cube and we're covering wall-to-wall the IBM 2020 I think digital experience. Rob Thomas is here. He's the senior vice president of clouds and data. Right. Warm rub. Always a pleasure to see you. I wish you were face to face, but Hey, we're doing the best we can. As you say, doing the best we can. Great to see you Dave. Hope family safe, healthy, happy as best you can be. Yeah. Ditto. You back out your Robin. Congratulations on on the new role, you and the cube. We've been riding this data wave for quite some time now. It's really been incredible. It really is. And last year I talked to you about how clients, we're slowly making progress on data strategy, starting to experiment with AI. >>We've gotten to the point now where I'd say it's game on for AI, which is exciting to see and that's a lot of what the theme of this year's think is about. Yeah, and I definitely want to dig into that, but I want to start by asking you sort of moves that you saw you're in there seeing your clients make with regard to the cobot night covert 19 crisis. Maybe how you guys are helping them in very interested in what you see as sort of longterm and even, you know, quasi permanent as a result of this. I would first say it this way. I don't, I'm not sure the crisis is going to change businesses as much as it's going to be accelerating. What would have happened anyway, regardless of the industry that you're in. We see clients aggressively looking at how do we get the digital faster? >>How do we automate more than we ever have before? There's the obvious things like business resiliency and business continuity, managing the distributed workforce. So to me, what we've seen is really about, and acceleration, not necessarily in a different direction, but an acceleration on. The thing is that that we're already kind of in the back of their minds or in the back of their plans now that as we'll come to the forefront and I'm encouraged because we see clients moving at a rate and pace that we'd never seen before that's ultimately going to be great for them, great for their businesses. And so I'm really happy to see that you guys have used Watson to really try to get, you know, some good high fidelity answers to the citizens. I wonder if you could explain that initiative. Well, we've had this application called Watson assistant for the last few years and we've been supporting banks, airlines, retailers, companies across all industries and helping them better interact with our customers and in some cases, employees. >>We took that same technology and as we saw the whole covert 19 situation coming, we said, Hey, we can evolve Watson assistant to serve citizens. And so it started by, we started training the models, which are intent based models in Watson assistant on all the publicly available data from the CDC as an example. And we've been able to build a really powerful virtual agent to serve really any citizen that has questions about and what they should be doing. And the response has been amazing. I mean, in the last two weeks we've gone live with 20 organizations, many of which are state and local governments. Okay. Also businesses, the city of Austin children's healthcare of Atlanta. Mmm. They local governments in Spain and Greece all over the world. And in some instances these clients have gotten live in less than 24 hours. Meaning they have a virtual agent that can answer any question. >>They can do that in less than 24 hours. It's actually been amazing to see. So proud of the team that built this over time. And it was kind of proof of the power of technology when we're dealing with any type of a challenge. You know, I had a conversation earlier with Jamie Thomas about quantum and was asking her sort of how your clients are using it. The examples that came up were financial institutions, pharmaceutical know battery manufacturers, um, airlines. And so it strikes me when you think about uh, machine intelligence and AI, the type of AI that you're yeah, at IBM is not consumer oriented AI. It's really designed for businesses. And I wonder if you could sort of add some color to that. Yeah, let's distinguish the difference there. Cause I think you've said it well consumer AI is smart speakers things in our home, you know, music recommendations, photo analysis and that's great and it enriches all of our personal lives. >>AI for business is very different. This is about how do you make better predictions, how do you optimize business processes, how do you automate things that maybe your employees don't want to do in the first time? Our focus in IBM as part of, we've been doing with Watson is really anchoring on three aspects of AI language. So understanding language because the whole business world is about communication of language, trust meaning trusted AI. You understand the models, you understand the data. And then third automation and the whole focus of what we're doing here in the virtual think experience. It's focused on AI for automation. Whether that's automating business processes or the new announcement this week, which is around automating AI opera it operations for a CIO. You, you've talked the years about this notion of an AI ladder. You actually, I actually wrote a book on it and uh, but, but it's been hard for customers to operationalize AI. >>Mmm. We talked about this last year. Thanks. What kind of progress, uh, have we made in the last 12 months? There's been a real recognition of this notion that your AI is only as good as your data. And we use the phrase, there's no AI without IAA, meaning information architecture, it's all the same concept, which is that your data, it has to be ready for AI if you want to too get successful outcomes with AI and the steps of those ladders around how you collect data, how you organize data, how you analyze data, how you infuse that into your business processes. seeing major leaps forward in the last nine months where organizations are understanding that connection and then they're using that to really drive initiatives around AI. So let's talk about that a little bit more. This notion of AI ops, I mean it's essentially the take the concept of dev ops and apply it to the data pipeline if you will. >>Everybody, you know, complains, you know, data scientists complained that all, they spent all their time wrangling data, improving data quality, they don't have line of sight across their organization with regard to other data specialists, whether it's data engineers or even developers. Maybe you could talk a little bit more about that announcement and sort of what you're doing in that area. Sure. So right. Let me put a number on it because the numbers are amazing. Every year organizations lose 2016 point $5 billion of revenue because of outages in it system. That is a staggering number when you think about it. And so then you say, okay, so how do you break down and attack that problem? Well, do you have to get better at fixing problems or you have to get better at avoiding problems altogether. And as you may expect, a little bit of both. You, you want to avoid problems obviously, but in an uncertain world, you're always going to deal with unforeseen challenges. >>So the also the question becomes how fast can you respond and there's no better use of AI. And then to do, I hope you like those tasks, which is understanding your environment, understanding what the systems are saying through their data and identifying issues become before they become outages. And once there is an outage, how do you quickly triage data across all your systems to figure out where is the problem and how you can quickly address it. So we are announcing Watson AI ops, which is the nervous system for a CIO, the manager, all of their systems. What we do is we just collect data, log data from every source system and we build a semantic layer on top that. So Watson understands the systems, understands the normal behavior, understands the acceptable ranges, and then anytime something's not going like it should, Watson raises his hand and says, Hey, you should probably look at this before it becomes a problem. >>We've partnered with companies like Slack, so the UI for Watson AI ops, it's actually in Slack so that companies can use and employees can use a common collaboration tool too. Troubleshoot or look at either systems. It's, it's really powerful. So that we're really proud of. Well I just kind of leads me to my next question, which I mean, IBM got the religion 20 years ago on openness. I mean I can trace it back to the investment you made and Lennox way back when. Um, and of course it's a huge investment last year in red hat, but you know, open source company. So you just mentioned Slack. Talk about open ecosystems and how that it fits into your AI and data strategy. Well, if you think about it, if we're going to take on a challenge this grand, which is AI for all of your it by definition you're going to be dealing with full ecosystem of different providers because every organization has a broad set of capabilities we identified early on. >>That means that our ability to provide open ecosystem interoperability was going to be critical. So we're launching this product with Slack. I mentioned with box, we've got integrations into things like PagerDuty service now really all of the tools of modern it architecture where we can understand the data and help clients better manage those environments. So this is all about an open ecosystem and that's how we've been approaching it. Let's start, it's really about data, applying machine intelligence or AI to that data and about cloud for scale. So I wonder what you're seeing just in terms of that sort of innovation engine. I mean obviously it's gotta be secure. It's, it seems like those are the pillars of innovation for the next 10 plus years. I think you're right. And I would say this whole situation that we're dealing with has emphasized the importance of hybrid deployment because companies have it capabilities on public clouds, on private clouds, really everywhere. >>And so being able to operate that as a single architecture, it's becoming very important. You can use AI to automate tasks across that whole infrastructure that makes a big difference. And to your point, I think we're going to see a massive acceleration hybrid cloud deployments using AI. And this will be a catalyst for that. And so that's something we're trying to help clients with all around the world. You know, you wrote in your book that O'Reilly published that AI is the new electricity and you talked about problems. Okay. Not enough data. If your data is you know, on prem and you're only in the cloud, well that's a problem or too much data. How you deal with all that data, data quality. So maybe we could close on some of the things that you know, you, you talked about in that book, you know, maybe how people can get ahold of it or any other, you know, so the actions you think people should take to get smart on this topic. >>Yeah, so look, really, really excited about this. Paul's the capitalists, a friend of mine and a colleague, we've published this book working with a Riley called the a ladder and it's all the concepts we talked about in terms of how companies can climb this ladder to AI. And we go through a lot of different use cases, scenarios, I think. Yeah. Anybody reading this is going to see their company in one of these examples, our whole ambition was to hopefully plant some seeds of ideas for how you can start to accelerate your journey to AI in any industry right now. Well, Rob, it's always great having you on the cube, uh, your insights over the years and you've been a good friend of ours, so really appreciate you coming on and, uh, and best of luck to you, your family or wider community. I really appreciate it. Thanks Dave. Great to be here and again, wish you and the whole cube team the best and to all of our clients out there around the world. We wish you the best as well. All right. You're watching the cubes coverage of IBM think 20, 20 digital, the vent. We'll be right back right after this short break. This is Dave Volante.

Published Date : May 7 2020

SUMMARY :

the IBM thing brought to you by IBM. and I definitely want to dig into that, but I want to start by asking you sort of moves that you saw you're happy to see that you guys have used Watson to really try to get, you know, I mean, in the last two weeks we've gone live with 20 And I wonder if you could sort of add some color to that. business processes, how do you automate things that maybe your employees don't dev ops and apply it to the data pipeline if you will. And so then you say, okay, so how do you break down and attack that problem? And then to do, I hope you like those tasks, which is understanding and of course it's a huge investment last year in red hat, but you know, open source company. And I would say this whole So maybe we could close on some of the things that you know, you, you talked about in that book, Great to be here and again, wish you and the whole cube team the best and to all

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

SpainLOCATION

0.99+

DavePERSON

0.99+

Dave VellantePERSON

0.99+

Rob ThomasPERSON

0.99+

GreeceLOCATION

0.99+

20 organizationsQUANTITY

0.99+

Palo AltoLOCATION

0.99+

2016DATE

0.99+

Dave VolantePERSON

0.99+

RobPERSON

0.99+

Jamie ThomasPERSON

0.99+

DittoPERSON

0.99+

$5 billionQUANTITY

0.99+

RobinPERSON

0.99+

AtlantaLOCATION

0.99+

last yearDATE

0.99+

last yearDATE

0.99+

less than 24 hoursQUANTITY

0.99+

AustinLOCATION

0.99+

LennoxORGANIZATION

0.99+

PaulPERSON

0.98+

Watson assistantTITLE

0.98+

firstQUANTITY

0.98+

bothQUANTITY

0.98+

thirdQUANTITY

0.98+

BostonLOCATION

0.97+

first timeQUANTITY

0.96+

think 20COMMERCIAL_ITEM

0.96+

this weekDATE

0.96+

20 years agoDATE

0.96+

CDCORGANIZATION

0.95+

last two weeksDATE

0.93+

oneQUANTITY

0.92+

this yearDATE

0.92+

SlackORGANIZATION

0.91+

last 12 monthsDATE

0.91+

last nine monthsDATE

0.9+

19QUANTITY

0.89+

SlackTITLE

0.88+

20COMMERCIAL_ITEM

0.88+

last few yearsDATE

0.88+

WatsonPERSON

0.87+

WatsonTITLE

0.86+

Think 2020COMMERCIAL_ITEM

0.85+

single architectureQUANTITY

0.82+

next 10 plus yearsDATE

0.8+

PagerDutyORGANIZATION

0.71+

three aspectsQUANTITY

0.71+

O'ReillyORGANIZATION

0.69+

RileyORGANIZATION

0.62+

19 crisisEVENT

0.58+

WatsonORGANIZATION

0.45+

covertEVENT

0.41+

2020COMMERCIAL_ITEM

0.37+

cobot nightTITLE

0.3+

Mani Dasgupta, IBM | IBM Think 2020


 

>>From the cube studios in Palo Alto in Boston. It's the cube covering IBM thing brought to you by IBM. >>All right, ready? We're back. This is the cubes continuous coverage of IBM 2020 the digital event experience. My name is Dave Volante. Dasgupta is here. She's the vice president of marketing at IBM. She's also the COO of the global business services. good to see you. Thanks for coming back in the cube. Oh, I'm so happy to be here. Deva fantastic to be here. Do you have a lot of experience with brands? IBM itself, you know, amazing well known, a leading brand well, I'm wondering if you have any thoughts on what you're seeing in terms of how brands are responding to the 19 crisis. There are things out there that you're seeing that are inspiring you and yeah. What should we be looking for? Oh my gosh. I mean all around the last two, two months we have been living now in a, in an, in a new reality and this is not going to go back, do what we knew was normal. >>Right. This is going to be the new normal and how brands react to it sets us up for future growth and future success. You know, as a in the global business services steam as a CMO there I meet a lot of every single day and they are coming to us with business challenges. What makes the big difference right now? I think in terms of of being a successful brand is the resilience and the adaptive. If you see a company like IBM and you've talked a little bit about how iconic this brand is, it's been there for about 108 hundred and nine years now and it is being able to successfully reinvent itself every turn of the century and every turn of what's happening around us. Uh, it being able to I think it's extremely important. What also is important as a brand is the emphasis that you can feel towards the growth and success of your client's business. I think sets, um, any, any brand apart from growth. So adaptability and empathy. Those would be my two big thanks. We talked to a number of CIO is IBM came out as one of the companies really helping. It wasn't just IBM, there were many, many large organizations, small organizations that really had this empathic, we're in this together. >>That's exactly right. If you look at it, it's, it's both of what we do for our clients but also what we do for our own employees. Um, 95% of our work IBM is not working from home in a safe and secure environment. We've been able to work with our clients and move those teams that work with our clients also in a more safe and safe your environment. For example, something like our cocreation workshop, the IBM garage would think that for cocreation innovation, you all need to be together in a room and put up sticky notes on the board behind you. Okay. Yeah. We have moved into to be a virtual experience and we are now offering free trials of a lot of our products and solutions to our clients for the next 90 days where they can get their most resting business. Yes. Problem solved. You know, we just want to make sure we get that together and get the economy back on track. Get the companies back on the track of. >>Now, one of the other passions of yours I know is this notion of of the cognitive business, a smarter business. And, and I want to ask you, help us understand what that is. You know, beyond the sort of marketing taglines, what is a smarter business? >>Yes, a smarter business is adaptive and resilient. that would be the biggest things, um, that I would highlight. Now, how do they do that? They do that because they are able to have business arms. They use the data that they have at their disposal. Then mind you, this is not the, um, data that is searchable online. 80% of or customer data is with. The organizations themselves. Now, how do they use that data to create business plans, forms that give them competitive advantage is one of the core tenets of what makes a smarter business. The second piece is around workflows that are more intelligent. Now, what makes these work, those more intelligent, what are these words? Those, these are end to end processes. So think of supply chain. How do you make your supply chain more resilient in the covert crisis right now that many, um, many companies are grappling with. >>How do you strengthen your direct to consumer routes? Many companies that used to deliver to stores now are figuring out how to get direct to consumers. So yeah, making these work close more intelligent, more resilient. How do you manage your work of course, right? Um, how do you make sure that the customer data that many employee's work is safe and secure? Sure. So second is the intelligence. Yes. And the third thing is all about the expedience and being able to engage with your customers in you are ways, if you think of some specific industries that are dealing with customer claims, you know, you look at the health, yeah. Provider industry, you're looking at insurance claims and and things like that. They are grappling with this new reality and being to then connect with your customers in new and engaging ways. I think is of utmost importance. So the three things, platforms, most expedience is what makes us smarter business possible. And that business is adaptive and resilient. >>Uh, the way in which brands are engaging dramatically different then it was just a few months ago. And our thinking is there's going to be some permanent changes here. What, what are your thoughts in that regard? >>Absolutely. 100% agree. Um, when we go back work, when we all get out of our home offices, um, it's going to be an a new way of. Right. Okay. And we're already seeing, uh, the engagement within our own work. Forced rising. Yeah. For example, I just came off of a, one of our all hands calls and we create these new videos on how we have new coworkers. We have, you know, pets and kids and parents cared for at home. Yeah. Mmm. all of this though, there is a greater sense of togetherness. There is a greater sense of solidarity. And what inspires me the most is when I look at the people around us in the delivery, uh, deems, you know, across the world. If you look at India, if you look at Philippines in our big teams that are delivering for clients every single day, the resiliency that they have shown in being able to overcome these, these hurdles are giving us ideas that this is not a one and done. >>This could actually be the new normal going beyond it. The automation that we have been able to apply. Um, uh, when you have like AI, how do you processes different if things are more efficient, wouldn't it be a better idea? Just have that go throughout to the rest of, you know, um, what's the new normal around us? So this is absolutely gonna change the way we work, the way we engage with our clients and the kind of, um, new ways of a new routes to market. I think that is the most exciting to me. How can we, how can we feel T and find out new routes to market new customers and be able to provide them value. The Watson Watson digital assistant is, is interesting to me because it allows us as one example of a hospital to be able to put out information that's accurate and timely. >>These things have to be done in near real time. As we know, the Covance situation, it changes daily. You know, maybe the change is, is decelerating a little bit, but it's still several times a week. there was a period of time where it was changing multiple times for days. Yes. So for instance, do I wear a mask? Do I not wear a mask? How far do I have to stand away? Can I can I actually get this by walking behind somebody, et cetera, et cetera. So much information that changed so quickly is the medical community got that. So you have to be able to access that data and you know, to your point about that, yeah, intelligent workflow, be able to do that in near real time. And that's what to me anyway, it's about operationalizing that data, you know, AI capability across the organization. >>Not just in some stovepipe where I have to ask somebody to some analysis for me that that is a huge change in the way in which businesses operate, isn't it? It is a huge change. And I think it's also about visibility, um, that the common man is right now the citizens that the people who are, who are, um, trying to access these technology. Yes. I think it gives them a renewed hope, um, in what technology could really provide. How are we are still being able to work while we are stuck in our homes, how we are still able to buy things online and the not jeopardize the safety of our loved ones who, you know, I'm the who maybe immunocompromised. You cannot go out and shop how we are able to still do the delivery. And, and the beauty of this is we in the technology industry we knew this, so >>go back one year we were working with. Um, no a company that supplies life saving medicines to many parts of Africa, the supply chain there and the technology and the intelligence that we had embedded in that hello made it possible for this human and tech interaction. And I think that is what the beauty of this is the renewed understanding of what technology can do for you. Yeah. And the ability to interact with the technology to make that happen. For example, in Africa, you have to sometimes rely on the Goodwill of the local villagers when there are floods and the pats are run over with water. You have to trigger, um, uh, an email or you know, you have to go to your cell phone so that the locals can then the medicine's over. Yeah. Uh, over the flooded planes to the hospitals. The interaction of the human with the technology that is there to help you and make your lives easier. I think right now there's renewed understanding and acceptance of that and I think it's a, it's a good thing. It's a good thing for all of us. >>I mean, it really is the, the uniqueness of IBM, deep industry expertise, knowledge, and yet know tons of R and D and technology. Oh, galore. Manny, thanks so much for coming back on the cubes. Great to see you. Hopefully next time it'll be face to face, but I really appreciate your time. >>Oh, I, I so wished for that I, so I, I do miss the the live connections, but you know, technology will take us forward till then and, uh, fantastic to be here. I loved it. Toxin. >>Great. And thank you for watching everybody. This is Dave Volante, but the cube for the IBM digital event experience, you 2020. We'll be right back right after this short break.

Published Date : May 7 2020

SUMMARY :

IBM thing brought to you by IBM. This is the cubes continuous coverage of IBM 2020 What also is important as a brand is the emphasis that you can feel towards the growth We have moved into to be a virtual experience and we are now You know, beyond the sort of marketing taglines, what is a smarter business? They do that because they are able to have business arms. industries that are dealing with customer claims, you know, you look at the health, yeah. And our thinking is there's going to be some permanent changes here. the delivery, uh, deems, you know, across the world. to the rest of, you know, um, what's the new normal around us? So much information that changed so quickly is the jeopardize the safety of our loved ones who, you know, I'm the who maybe immunocompromised. You have to trigger, um, uh, an email or you know, Manny, thanks so much for coming back on the cubes. fantastic to be here. This is Dave Volante, but the cube for the IBM digital event

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VolantePERSON

0.99+

IBMORGANIZATION

0.99+

Mani DasguptaPERSON

0.99+

AfricaLOCATION

0.99+

Palo AltoLOCATION

0.99+

95%QUANTITY

0.99+

100%QUANTITY

0.99+

DasguptaPERSON

0.99+

80%QUANTITY

0.99+

second pieceQUANTITY

0.99+

secondQUANTITY

0.99+

DevaPERSON

0.99+

oneQUANTITY

0.99+

IndiaLOCATION

0.99+

bothQUANTITY

0.99+

three thingsQUANTITY

0.98+

PhilippinesLOCATION

0.98+

2020DATE

0.98+

MannyPERSON

0.98+

two monthsQUANTITY

0.97+

BostonLOCATION

0.97+

19 crisisEVENT

0.97+

third thingQUANTITY

0.96+

about 108 hundred and nine yearsQUANTITY

0.93+

one exampleQUANTITY

0.93+

GoodwillORGANIZATION

0.93+

few months agoDATE

0.91+

two big thanksQUANTITY

0.86+

ToxinPERSON

0.84+

times a weekQUANTITY

0.8+

IBM 2020EVENT

0.75+

ThinkCOMMERCIAL_ITEM

0.75+

one yearQUANTITY

0.72+

twoQUANTITY

0.7+

single dayQUANTITY

0.69+

next 90 daysDATE

0.65+

WatsonORGANIZATION

0.64+

tenetsQUANTITY

0.51+

everyQUANTITY

0.51+

WatsonCOMMERCIAL_ITEM

0.45+

CovanceLOCATION

0.34+

Sriram Raghavan, IBM Research AI | IBM Think 2020


 

(upbeat music) >> Announcer: From the cube Studios in Palo Alto and Boston, it's the cube! Covering IBM Think. Brought to you by IBM. >> Hi everybody, this is Dave Vellante of theCUBE, and you're watching our coverage of the IBM digital event experience. A multi-day program, tons of content, and it's our pleasure to be able to bring in experts, practitioners, customers, and partners. Sriram Raghavan is here. He's the Vice President of IBM Research in AI. Sriram, thanks so much for coming on thecUBE. >> Thank you, pleasure to be here. >> I love this title, I love the role. It's great work if you're qualified for it.(laughs) So, tell us a little bit about your role and your background. You came out of Stanford, you had the pleasure, I'm sure, of hanging out in South San Jose at the Almaden labs. Beautiful place to create. But give us a little background. >> Absolutely, yeah. So, let me start, maybe go backwards in time. What do I do now? My role's responsible for AI strategy, planning, and execution in IBM Research across our global footprint, all our labs worldwide and their working area. I also work closely with the commercial parts. The parts of IBM, our Software and Services business that take the innovation, AI innovation, from IBM Research to market. That's the second part of what I do. And where did I begin life in IBM? As you said, I began life at our Almaden Research Center up in San Jose, up in the hills. Beautiful, I had in a view. I still think it's the best view I had. I spent many years there doing work at the intersection of AI and large-scale data management, NLP. Went back to India, I was running the India lab there for a few years, and now I'm back here in New York running AI strategy. >> That's awesome. Let's talk a little bit about AI, the landscape of AI. IBM has always made it clear that you're not doing consumer AI. You're really tying to help businesses. But how do you look at the landscape? >> So, it's a great question. It's one of those things that, you know, we constantly measure ourselves and our partners tell us. I think we, you've probably heard us talk about the cloud journey . But look barely 20% of the workloads are in the cloud, 80% still waiting. AI, at that number is even less. But, of course, it varies. Depending on who you ask, you would say AI adoption is anywhere from 4% to 30% depending on who you ask in this case. But I think it's more important to look at where is this, directionally? And it's very, very clear. Adoption is rising. The value is more, it's getting better appreciated. But I think more important, I think is, there is broader recognition, awareness and investment, knowing that to get value out of AI, you start with where AI begins, which is data. So, the story around having a solid enterprise information architecture as the base on which to drive AI, is starting to happen. So, as the investments in data platform, becoming making your data ready for AI, starts to come through. We're definitely seeing that adoption. And I think, you know, the second imperative that businesses look for obviously is the skills. The tools and the skills to scale AI. It can't take me months and months and hours to go build an AI model, I got to accelerate it, and then comes operationalizing. But this is happening, and the upward trajectory is very, very clear. >> We've been talking a lot on theCUBE over the last couple of years, it's not the innovation engine of our industry is no longer Moore's Law, it's a combination of data. You just talked about data. Applying machine technology to that data, being able to scale it, across clouds, on-prem, wherever the data lives. So. >> Right. >> Having said that, you know, you've had a journey. You know, you started out kind of playing "Jeopardy!", if you will. It was a very narrow use case, and you're expanding that use case. I wonder if you could talk about that journey, specifically in the context of your vision. >> Yeah. So, let me step back and say for IBM Research AI, when I think about how we, what's our strategy and vision, we think of it as in two parts. One part is the evolution of the science and techniques behind AI. And you said it, right? From narrow, bespoke AI that all it can do is this one thing that it's really trained for, it takes a large amount of data, a lot of computing power. Two, how do you have the techniques and the innovation for AI to learn from one use case to the other? Be less data hungry, less resource hungry. Be more trustworthy and explainable. So, we call that the journey from narrow to broad AI. And one part of our strategy, as scientists and technologists, is the innovation to make that happen. So that's sort of one part. But, as you said, as people involved in making AI work in the enterprise, and IBM Research AI vision would be incomplete without the second part, which is, what are the challenges in scaling and operationalizing AI? It isn't sufficient that I can tell you AI can do this, how do I make AI do this so that you get the right ROI, the investment relative to the return makes sense and you can scale and operationalize. So, we took both of these imperatives. The AI narrow-to-broad journey, and the need to scale and operationalize. And what of the things that are making it hard? The things that make scaling and operationalizing harder: data challenges, we talked about that, skills challenges, and the fact that in enterprises, you have to govern and manage AI. And we took that together and we think of our AI agenda in three pieces: Advancing, trusting, and scaling AI. Advancing is the piece of pushing the boundary, making AI narrow to broad. Trusting is building AI which is trustworthy, is explainable, you can control and understand its behavior, make sense of it and all of the technology that goes with it. And scaling AI is when we address the problem of, how do I, you know, reduce the time and cost for data prep? How do I reduce the time for model tweaking and engineering? How do I make sure that a model that you build today, when something changes in the data, I can quickly allow for you to close the loop and improve the model? All of the things, think of day-two operations of AI. All of that is part of our scaling AI strategy. So advancing, trusting, scaling is sort of the three big mantras around which the way we think about our AI. >> Yeah, so I've been doing a little work in this around this notion of DataOps. Essentially, you know, DevOps applied to the data and the data pipeline, and I had a great conversation recently with Inderpal Bhandari, IBM's Global Chief Data Officer, and he explained to me how, first of all, customers will tell you, it's very hard to operationalize AIs. He and his team took that challenge on themselves and have had some great success. And, you know, we all know the problem. It's that, you know AI has to wait for the data. It has to wait for the data to be cleansed and wrangled. Can AI actually help with that part of the problem, compressing that? >> 100%. In fact, the way we think of the automation and scaling story is what we call the "AI For AI" story. So, AI in service of helping you build the AI that helps you make this with speed, right? So, and I think of it really in three parts. It's AI for data automation, our DataOps. AI used in better discovery, better cleansing, better configuration, faster linking, quality assessment, all of that. Using AI to do all of those data problems that you had to do. And I called it AI for data automation. The second part is using AI to automatically figure out the best model. And that's AI for data science automation, which is, feature engineering, hyperparameter optimization, having them all do work, why should a data scientist take weeks and months experimenting? If the AI can accelerate that from weeks to a matter of hours? That's data science automation. And then comes the important part, also, which is operations automation. Okay, I've put a data model into an application. How do I monitor its behavior? If the data that it's seeing is different from the data it was trained on, how do I quickly detect it? And a lot of the work from Research that was part of that Watson OpenScale offering is really addressing the operational side. So AI for data, AI for data science automation, and AI to help automate production of AI, is the way we break that problem up. >> So, I always like to ask folks that are deep into R&D, how they are ultimately are translating into commercial products and offerings? Because ultimately, you got to make money to fund more R&D. So, can you talk a little bit about how you do that, what your focus is there? >> Yeah, so that's a great question, and I'm going to use a few examples as well. But let me say at the outset, this is a very, very closed partnership. So when we, the Research part of AI and our portfolio, it's a closed partnership where we're constantly both drawing problem as well as building technology that goes into the offering. So, a lot of our work, much of our work in AI automation that we were talking about, is part of our Watson Studio, Watson Machine Learning, Watson OpenScale. In fact, OpenScale came out of Research working Trusted AI, and is now a centerpiece of our Watson project. Let me give a very different example. We have a very, very strong portfolio and focus in NLP, Natural Language Processing. And this directly goes into capabilities out of Watson Assistant, which is our system for conversational support and customer support, and Watson Discovery, which is about making enterprise understand unstructurally. And a great example of that is the Working Project Debater that you might have heard, which is a grand challenge in Research about building a machine that can do debate. Now, look, we weren't looking to go sell you a debating machine. But what did we build as part of doing that, is advances in NLP that are all making their way into assistant and discovery. And we actually just talked about earlier this year, announced a set of capabilities around better clustering, advanced summarization, deeper sentiment analysis. These made their way into Assistant and Discovery but are born out of research innovation and solving a grand problem like building a debating machine. That's just an example of how that journey from research to product happens. >> Yeah, the Debater documentary, I've seen some of that. It's actually quite astounding. I don't know what you're doing there. It sounds like you're taking natural language and turning it into complex queries with data science and AI, but it's quite amazing. >> Yes, and I would encourage you, you will see that documentary, by the way, on Channel 7, in the Think Event. And I would encourage you, actually the documentary around how Debater happened, sort of featuring back of the you know, backdoor interviews with the scientist who created it was actually featured last minute at Copenhagen International Documentary Festival. I'll invite viewers to go to Channel 7 and Data and AI Tech On-Demand to go take a look at that documentary. >> Yeah, you should take a look at it. It's actually quite astounding and amazing. Sriram, what are you working on these days? What kind of exciting projects or what's your focus area today? >> Look, I think there are three imperatives that we're really focused on, and one is very, you know, just really the project you're talking about, NLP. NLP in the enterprise, look, text is a language of business, right? Text is the way business is communicated. Within each other, with their partners, with the entire world. So, helping machines understand language, but in an enterprise context, recognizing that data and the enterprises live in complex documents, unstructured documents, in e-mail, they live in conversations with the customers. So, really pushing the boundary on how all our customers and clients can make sense of this vast volume of unstructured data by pushing the advances of NLP, that's one focus area. Second focus area, we talked about trust and how important that is. And we've done amazing work in monitoring and explainability. And we're really focused now on this emerging area of causality. Using causality to explain, right? The model makes this because the model believes this is what it wants, it's a beautiful way. And the third big focus continues to be on automation. So, NLP, trust, automation. Those are, like, three big focus areas for us. >> sriram, how far do you think we can take AI? I know it's a topic of conversation, but from your perspective, deep into the research, how far can it go? And maybe how far should it go? >> Look, I think we are, let me answer it this way. I think the arc of the possible is enormous. But I think we are at this inflection point in which I think the next wave of AI, the AI that's going to help us this narrow-to-broad journey we talked about, look, the narrow-to-broad journey's not like a one-week, one-year. We're talking about a decade of innovation. But I think we are at a point where we're going to see a wave of AI that we like to call "neuro-symbolic AI," which is AI that brings together two sort of fundamentally different approaches to building intelligence systems. One approach of building intelligence system is what we call "knowledge driven." Understand data, understand concept, logically, reasonable. We human beings do that. That was really the way AI was born. The more recent last couple of decades of AI was data driven, Machine learning. Give me vast volumes of data, I'll use neural techniques, deep learning, to to get value. We're at a point where we're going to bring both of them together. Cause you can't build trustworthy, explainable systems using only one, you can't get away from not using all of the data that you have to make them. So, neuro-symbolic AI is, I think, going to be the linchpin of how we advance AI and make it more powerful and trustworthy. >> So, are you, like, living your childhood dream here or what? >> Look, for me I'm fascinated. I've always been fascinated. And any time you can't find a technology person who hasn't dreamt of building an intelligent machine. To have a job where I can work across our worldwide set of 3,000 plus researchers and think and brainstorm on strategy with AI. And then, most importantly, not to forget, right? That you talked about being able to move it into our portfolios so it actually makes a difference for our clients. I think it's a dream job and a whole lot of fun. >> Well, Sriram, it was great having you on theCUBE. A lot of fun, interviewing folks like you. I feel a little bit smarter just talking to you. So thanks so much for coming on. >> Fantastic. It's been a pleasure to be here. >> And thank you for watching, everybody. You're watching theCUBE's coverage of IBM Think 2020. This is Dave Vellante. We'll be right back right after this short break. (upbeat music)

Published Date : May 7 2020

SUMMARY :

Brought to you by IBM. and it's our pleasure to be at the Almaden labs. that take the innovation, AI innovation, But how do you look at the landscape? But look barely 20% of the it's not the innovation I wonder if you could and the innovation for AI to learn and the data pipeline, and And a lot of the work from So, can you talk a little that goes into the offering. Yeah, the Debater documentary, of featuring back of the Sriram, what are you and the enterprises live the data that you have to make them. And any time you can't just talking to you. a pleasure to be here. And thank you for watching, everybody.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

IBMORGANIZATION

0.99+

Sriram RaghavanPERSON

0.99+

New YorkLOCATION

0.99+

80%QUANTITY

0.99+

20%QUANTITY

0.99+

BostonLOCATION

0.99+

SriramPERSON

0.99+

IBM ResearchORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

Inderpal BhandariPERSON

0.99+

two partsQUANTITY

0.99+

second partQUANTITY

0.99+

bothQUANTITY

0.99+

4%QUANTITY

0.99+

IndiaLOCATION

0.99+

One partQUANTITY

0.99+

one partQUANTITY

0.99+

Channel 7ORGANIZATION

0.99+

one-yearQUANTITY

0.99+

San JoseLOCATION

0.99+

sriramPERSON

0.99+

one-weekQUANTITY

0.99+

3,000 plus researchersQUANTITY

0.99+

TwoQUANTITY

0.99+

three partsQUANTITY

0.98+

Copenhagen International Documentary FestivalEVENT

0.98+

South San JoseLOCATION

0.98+

Second focusQUANTITY

0.98+

30%QUANTITY

0.98+

three piecesQUANTITY

0.98+

DataORGANIZATION

0.98+

One approachQUANTITY

0.97+

earlier this yearDATE

0.97+

JeopardyTITLE

0.96+

AlmadenORGANIZATION

0.96+

oneQUANTITY

0.95+

OpenScaleORGANIZATION

0.95+

threeQUANTITY

0.94+

one focus areaQUANTITY

0.94+

third bigQUANTITY

0.93+

Watson AssistantTITLE

0.92+

one use caseQUANTITY

0.92+

MooreORGANIZATION

0.92+

todayDATE

0.91+

StanfordLOCATION

0.91+

Almaden Research CenterORGANIZATION

0.9+

one thingQUANTITY

0.88+

2020TITLE

0.87+

waveEVENT

0.87+

WatsonTITLE

0.86+

three big mantrasQUANTITY

0.85+

> 100%QUANTITY

0.85+

two sortQUANTITY

0.84+

ThinkCOMMERCIAL_ITEM

0.83+

second imperativeQUANTITY

0.81+

Global Chief Data OfficerPERSON

0.8+

three imperativesQUANTITY

0.76+

last couple of yearsDATE

0.76+

DebaterTITLE

0.76+

WatsonORGANIZATION

0.72+

NLPORGANIZATION

0.72+

StudioORGANIZATION

0.72+

dayQUANTITY

0.67+

twoQUANTITY

0.65+

VicePERSON

0.65+

theCUBEORGANIZATION

0.63+

Watson DiscoveryTITLE

0.62+

theCUBETITLE

0.6+

Jesus Mantas, IBM | IBM Think 2020


 

>> Narrator: From theCUBE studios in Palo Alto in Boston, it's theCUBE, covering IBM Think. Brought to you by IBM. >> Hi everybody, welcome back. This is Dave Vellante, and you're watching theCUBE's coverage of IBM Think 2020, the digital version of IBM Think and theCUBE is pleased to be providing the wall-to-wall coverage as we have physically for so many years at big IBM events. Jesus Mantas is here, he's the managing partner for Global Strategy for IBM and the Global Business Services. Jesus, great to see you, thanks for coming on. >> Great to be here, Dave. >> So, every guest that we've talked to this week, really, we've talked about COVID but just briefly. Here, we're going to do a bigger drill down and really try to get, Jesus your perspectives as an IBM's point of view on what's going on here. So let me start with, we've never seen anything like this before, obviously. I mean, there are some examples you got to go back to 1918, try to get some similarities, but 1918 is a long long time ago, so, what's different about this? What are the similarities? >> Yeah, it's, you know what Mark Twain used to say that history doesn't repeat, but it often rhymes. I think there are similarities of what we are experiencing right now in this pandemic with other pandemics like Spanish flu. I think the situation is unique in terms of the impact, and the synchronicity of that impact, right? So we can go back, whether if you want, economic crisis, or our society crisis, where you have either one country or one aspect being disrupted. But this is really society being interrupted, on a global scale. So its impact is unprecedented in that perspective in modern time, and I think all of us are adjusting to it. >> I want to ask you about digital transformation, because I've made the point that, while a lot of people talk digital transformation, there's been a lot of complacency, people say, not my lifetime, we're a bank, we're making a lot of money, we're doing okay. How do you think COVID-19 will sort of change that complacency and really accelerate digital transformation as a mindset and actually turn it into action? >> Yeah, I think the best way to put it is digital transformation five months ago was about obtaining competitive advantage and digital transformation today in many industries is about survival. That is how big of a change it is. The need for efficiency and cost savings, the need for resiliency that we have talked about, the need to be able to drive agility, to be able to switch and adapt, the need to make hyper local decisions, right, to use data, none of that can be done unless you have fully digitized processes, you are consuming local data and you have trained the people to really operate in those new, more intelligent processes. So, it has gone from optionality is okay, you can do okay but if you digitize you're going to do better to unless you digitize your business may not exist next year. I think that's the change, the change is, I think now is widely understood that the majority of our digitization processes have to be accelerated, and I would say there is a great statistic that when we go back in history, and there has been many, as I mentioned, of this crisis. You can look back at the two behaviors that businesses have, one is, to play defense and then what happens two years later, and the other one is, okay, you defense but you immediately switch to offense and then what happens two year later. Those companies that use this time to just defend and hunker down, history said in a couple of years later, 21% of them outperform. But those businesses that they shift from defense to offense and actually accelerate in these cases, programs like digitalization, 37% outperform. So, there is a premium for businesses that right now actually immediately switch to offense, focus on this set of digitalization and empowering cloud, managing data, ensuring the skills of the people, they're more likely not only to survive, but thrive in the next few years than those that just use this time to defend. >> To your point, it's about survival, it's not about, not getting disrupted, 'cause you're going to get disrupted, it's almost a certainty, and so in order to survive, you've got to digitally transform. Your final thoughts on digital transformation, then I want to ask you if there's a silver lining in all this. >> I think, what we do, we can't change the context. But we cannot let the context define who we are either as individuals or as company. What we can do, is to choose how do we act on that context. I would say, those organizations and those individuals that take advantage of the situation to understand that some of these behaviors are going to change, understand that the more that we shift technology to the cloud, the more that we shift work close to the cloud, the more that we use technologies like artificial intelligence and drive nonlinear decisions that massively optimize everything we do from the way that we deliver health care, to the way that we manage supply chains, to the way that we secure food, frankly, to the way that we protect the environment, there is a silver lining that technology it is one of those solutions that can help in all of these areas, and the silver lining of this is, hopefully, let's use this time to get better prepared for the next pandemia, to get better prepared for the next crisis, to implement technologies that drive efficiency faster, that create new jobs, that protect the environment, and while we cannot change the fact that we have COVID-19, we can change what happens after COVID-19, so what we return to is something better that what we enter before COVID-19. >> Very thoughtful commentary, Jesus. Thank you so much for coming on theCUBE, blessings to your family and yourself. >> Appreciated Dave, thank you, and thank you for everything you do to keep everybody informed. >> Really a pleasure. And thank you for watching everybody. This is Dave Vellante. You're watching theCUBE's coverage of IBM Think 2020 the digital event, to be right back right after the short break. (upbeat music)

Published Date : May 7 2020

SUMMARY :

Brought to you by IBM. and theCUBE is pleased to be providing What are the similarities? and the synchronicity I want to ask you about and the other one is, okay, you defense and so in order to survive, to the way that we manage supply chains, blessings to your family and yourself. and thank you for everything you do to be right back right

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

Dave VellantePERSON

0.99+

IBMORGANIZATION

0.99+

Mark TwainPERSON

0.99+

Palo AltoLOCATION

0.99+

Jesus MantasPERSON

0.99+

Global Business ServicesORGANIZATION

0.99+

COVID-19OTHER

0.99+

21%QUANTITY

0.99+

JesusPERSON

0.99+

next yearDATE

0.99+

37%QUANTITY

0.99+

two year laterDATE

0.99+

one aspectQUANTITY

0.99+

BostonLOCATION

0.99+

one countryQUANTITY

0.99+

two behaviorsQUANTITY

0.99+

theCUBEORGANIZATION

0.98+

five months agoDATE

0.98+

two years laterDATE

0.98+

pandemiaEVENT

0.98+

oneQUANTITY

0.97+

1918DATE

0.96+

Global StrategyORGANIZATION

0.94+

couple of years laterDATE

0.93+

this weekDATE

0.91+

pandemicEVENT

0.91+

todayDATE

0.89+

IBM Think 2020EVENT

0.86+

2020EVENT

0.86+

COVIDORGANIZATION

0.81+

ThinkCOMMERCIAL_ITEM

0.71+

pandemicsEVENT

0.71+

IBM ThinkCOMMERCIAL_ITEM

0.66+

yearsDATE

0.64+

Spanish fluEVENT

0.61+

IBM ThinkORGANIZATION

0.53+

Think 2020COMMERCIAL_ITEM

0.4+

Michelle Peluso, IBM | IBM Think 2020


 

(relaxing music) >> Announcer: From theCUBE studios in Palo Alto and Boston, it's theCUBE, covering IBM Think, brought to you by IBM. >> Welcome back to theCUBE, I'm Stu Miniman, and this is theCUBEs coverage of IBM Think 2020, the digital experience, we're getting to talk to the IBM executive, the customers, and their partners Where they are around the globe, really happy to bring back the program, one of our people online. Michelle Peluso, she is the senior vice president of digital sales and chief marketing officer for all of IBM. Michelle, thanks so much for joining us. >> Thank you so much. It's great to have you as we get ready for Think 2020. >> boy, Michelle, you know, working for a big company like IBM, I can only imagine how much current global activities have impacting you, anybody If you turn on TV, you know that the ads that you're seeing are obviously have a very different manner than what we were seeing before this happened. And, you know, the focus of Think, of course, you know, really centers around what is happening, how you're helping IBM customers in part through there. So give us a little bit of insight as to, you know, how much the team has had the, you know, rapidly move towards the new reality? >> Well, look our company has been very focused on a couple of major priorities. First of all, our people keeping them safe and healthy and thinking about what are we learning from all this? How do we use new tools in different ways? How do we work in agile ways that will outlast even this current crisis? Secondly, of course, our clients we have pivoted hard to the essential offerings for recovery and transformation our clients need most right now. Things like business continuity, things like enabling Watson to engage all your customers virtually, things like supply chain resiliency, things like increased agility on the cloud, health and human Services. These are new offerings, new bundles that we know our clients need most right now, and so we've been pivoting hard. Third thing, as a marketer, of course, I've been very focused on how does the brand show up in this moment? How do we think about this cadre of events we used to do in person? How do we transform and think about generating demand in a virtual world, really improving the end to end digital experiences of everything we do? And of course, lastly, it's about how do we help create a cure? How do we help make sure that we speed this process along so we've done a lot from you know, taking super computing power and really applying it to the fight to find cures and find vaccines. We have donated things like Watson Assistant so that governments can get access to free chatbots to help their customers with knowledge and information about COVID-19. So, lots of things we're doing across all those friends. It's certainly been a time of really rapid transformation and the most important thing we can do is listen and pivot quickly. >> Yeah, really important points Michelle, listening to customers. I'm curious, you know, what are you hearing from customers? Obviously, you know, they have lots of challenges. And therefore, it probably changed a little bit how they think about who they partner with, you know, who they go to, to be a trusted, you know, partner in these times. So, you know, what feedback Are you getting from customers? How do they look at the relationship with IBM in your ecosystem, that might be a little different than before? >> Well, we're talking to customers more than ever, as you can imagine. And I think we have seen seven offerings, seven things that our clients really are learning going through this experience and need help with. And those range as I mentioned earlier, from supply chain continuity and resiliency to the new cybersecurity landscape. There's so many different and unique cyber risks right now. Virtual teaming, virtual work from home. Business continuity and resiliency, increased agility on the cloud things like, you know, making sure that we're supporting the health and human Services of our people. So those are some of the examples of what matters most to clients right now virtually engaging with customers with Watson. So those are the things that we have pivoted hard to make sure that we help our clients with the essential process of recovery and transformation. Because there isn't going where, there's no back to normal. We were very convinced that this is a rethink and Think 2020 is coming at the perfect time, as businesses start to slowly reopen their doors. You know, it's going to be a very important conversation with our clients on how we accelerate recovery and transformation. And transformation is important because we have learned a lot. And there are some things that we need to go back and improve. And there's some lessons we've learned that we can, you know, take with us into this sort of new world. So it's a challenging time for sure. But it's also one that is ripe with opportunities. And I've seen so much creativity and so much dedication. As we, you know, we had to remake Think in 60 days, a totally new platform, you know, new capabilities, new content, and at three x the volume. So the teams have done a remarkable job. And I'm excited for the conversation. >> What I'm curious, what you're hearing, is customers that are, you know, starting are in the midst of that journey, is the global pandemic, is it accelerating what they're doing? Is it stalling them? They're not definitely finding, >> you know, and I think it's really two things. One is, how does the team operate and you know, I've been very passionate for my entire career about agile as a discipline, small cross functional teams aligned on a mission, shared values, really have an incredible ability using the agile rituals to prioritize and to move quickly and to optimize that is more important than ever before. That is what is enabling kind of this more rapid, you know, cycles we're seeing and then I think are critical. >> What should we be taking as lessons and, you know, new practices that will continue in the future? >> Well, from a client perspective, I think we're going to see where digital has always sort of been, you know, mission critical. I think there's going to be incredible and continued, you know, rapid acceleration to a digital environment. And that's not just outside in what, you know, do we have a good mobile app? Do we have a good web experience that's inside out. How do we digitize the, you know, the call center so that customers can get virtual answers with chatbots? How do we digitize and use AI to improve HR, supply chain apart from fundamental, you know, manufacturing operational procedures. So that's one thing I think will be a permanent change. Secondly, I think we're going to see the same thing on the cloud, I think clients that had you know, three to five year journeys on their roadmaps of how they think about their cloud architecture in what workloads are we going to move to the public cloud? Almost all of them are saying that now has to be compressed. So I think we're going to see more rapid acceleration and adoption and journey to cloud. I think there's some new things that we'll see in terms of blockchain and cybersecurity and others that will also reimagine the landscape of our clients. On the people side, you know, we're adjusting, right? We're going to have to figure out this new way of being, this new way of normal, which might be a bit more hybrid than we're used to. Sometime in the office, sometime at home. I fundamentally believe more agile teams truly agile is a mission. So I think these are just some of the areas that we're going to see a reimagination of how work gets done, and what work gets done to make us more resilient, you know, stronger, and to emerge from what has been an immensely challenging period for so many, and personally so, for so many. And how do we take some lessons from this? So we emerged stronger >> All right. So Michelle, I was looking back at when we first had you on theCUBE. And when you were, you know, just coming on IBM as the CMO. And you know, you talk then about how you've always worked for digital companies, so here in 2020, the global pandemic, of course, you know, is on everyone's mind, but when people leave Think, how should they be thinking about IBM? if, you know, what is different, you know, and what is the same, over 100 year old company, one of the most trusted brands in the industry, but new leadership with Arvind, And how do you want people to think of IBM going forward? >> I think times of great challenge, are actually meant for the IBM brand. I think that our clients are looking more than ever for partners they can trust who can help them find the world's most innovative technology, with deep expertise and understanding of how work actually happens across these industries and with a blanket of Kind of security and likely trusted, responsible stewardship that matters more. So I hope our clients and our business partners because we have an immensely rich agenda for our business partners, I hope they emerge knowing that IBM is their essential partner for recovery and for transformation. And there is simply nothing we won't do to help them make their business stronger and in so doing to build a stronger more resilient world. >> Well, Michelle Peluso congratulations and the team on everything to make Think 2020 Digital come and really appreciate being able to participate with you. >> Thanks for I really appreciate it. >> Stay tuned for lots more coverage from the cube. I'm Stu Miniman. Thanks for watching. (upbeat music)

Published Date : May 5 2020

SUMMARY :

brought to you by IBM. really happy to bring back the program, It's great to have you of course, you know, really centers around and the most important thing we can do you know, what are you we can, you know, take with us and you know, I've been very passionate I think clients that had you know, And you know, you talk then and in so doing to build being able to participate with you. coverage from the cube.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MichellePERSON

0.99+

IBMORGANIZATION

0.99+

Michelle PelusoPERSON

0.99+

Stu MinimanPERSON

0.99+

Palo AltoLOCATION

0.99+

BostonLOCATION

0.99+

OneQUANTITY

0.99+

threeQUANTITY

0.99+

two thingsQUANTITY

0.99+

2020DATE

0.99+

five yearQUANTITY

0.99+

60 daysQUANTITY

0.99+

COVID-19OTHER

0.98+

SecondlyQUANTITY

0.98+

ArvindPERSON

0.98+

agileTITLE

0.97+

firstQUANTITY

0.96+

seven thingsQUANTITY

0.95+

over 100 year oldQUANTITY

0.95+

seven offeringsQUANTITY

0.95+

Think 2020COMMERCIAL_ITEM

0.94+

WatsonTITLE

0.94+

Third thingQUANTITY

0.94+

oneQUANTITY

0.93+

theCUBEORGANIZATION

0.9+

FirstQUANTITY

0.9+

theCUBEsORGANIZATION

0.8+

one thingQUANTITY

0.74+

pandemicEVENT

0.7+

2020TITLE

0.64+

ThinkCOMMERCIAL_ITEM

0.58+

Watson AssistantTITLE

0.53+

WatsonORGANIZATION

0.39+

DigitalORGANIZATION

0.28+

David La Rose, IBM Partner Ecosystem | IBM Think 2020


 

>>Yeah, >>from The Cube Studios in Palo Alto and Boston. It's the Cube covering IBM. Think brought to you by IBM. >>Hi, everybody. We're back. And you're watching the Cube's coverage of the IBM think digital event for 2020. This is that he's socially distant and socially responsible. You? My name is Dave a lot. David Larose. Who's the general manager of the IBM partner ecosystem? David, Good to see you. >>Likewise. Great to be here. >>Yes, it is. Your first year in running the the ecosystem. You probably don't expect to be managing through the this world crisis this novel Corona virus. But what was your first move? Your outreach to partners? How are you communicating to them? Maybe you could share with us how that's all going. >>It certainly wasn't in the break. So when I took this job, you're right. But, you know, we have a very strong relationship with with our partners, and what we have is we have a global advisory board. So about 25 or 30 of our largest customers across the world, we engaged with them very, very quickly. That's all of the CIOs presidents, vice presidents of styles, um, and we engage with them on a survey and said, How are you thinking about it's what are the what are your big concerns and, you know, not unusually. They came back with a couple of key things. Number one was, and their primary concern is how do they support their clients? That was probably number one on their list of things followed very closely. By enabling their firms that have a financial stability that was number two and then probably number three. I would say it was, you know, managing their work. But they're moving to a digital only type of environment, similar to what IBM has done ever kind of the three, the three big concerns. And then we spent some time talking to them about how we could help, then really deal with that and address some of those problems. And earlier this week, we we announced so therefore very key things around. How do we help them? One. How do we adapt our programs and our incentives and, uh, really looking at providing them with, you know, extension of things like loyalty program. So don't worry about you know, your ability to re validate and recertify. We gotta protect your loyalty for 2020. We added a lot of incentives in the hardware systems program in the second quarter, so we've increased by half a percent off their based incentive from the first dollar that a lot of areas around programs in terms. Then we sort of really tried to address that point of view around digital. Um, some companies were digitally ready, but there are many companies that weren't actually know a digital platform, but also we very quickly rolled out what we call it my digital marketing platform, where partners can come in and download content and curate content from IBM and then wrap their own campaigns around that and get that out and continue to engage with their clients and their partners. Um, and we funding all of that 100% from an IBM perspective on using our car marketing goals. We used to have a 50 50 funding model without with their partners. But in this particular scenario, that's a digital program that they're running with funding at 100%. And then we're also opening up to provide consultancy on how to optimize digital. So I think you know the thing that we've done here is just their programs in terms quickly, um, or money back into the into the program during the second quarter and protect the ability for our partners and then really trying to help them and enable them to get Teoh Digital our workforce and the digital program. >>Yeah, a couple things there. I mean, we were talking earlier to the the folks from IBM Global Finance, and that's a key part that you mentioned liquidity. You know, certainly these partners air obviously very much concerned about the uncertainty ahead. So having a partner like IBM that can, whether it's, you know, pass on, you know, lease terms, etcetera and provide that sort of blacking is this key? I think the other thing, too we've heard from a lot of executives, is you've got to stay close to your clients during we always do, but especially during times like this, And that's where partners are so crucial in IBM huge company, you know, massive direct sales force, but you can't cover everything. And so having the partner who's got intimate relationships, I mean, I was on a call earlier this week with a partner in Minneapolis I mean, he knows everybody in that region. And so you just that level of intimacy, I think becomes very, very important in times like this, doesn't it? >>Absolutely. And stay connected with with that. So we have about, you know, just a lot of, Ah, 21,000 active planners across across the world and staying close to the senior members of our largest partners is is really important to us. We had a We hosted a call earlier this week actually, with with their advisory council to test the programs that we've gotten in the market, Are they getting where they need, where they need the help the most? We take a lot of feedback with adjusted our programs. We're looking at this on a literally a daily basis right now on don't envisage that we would we would update pretty agile in terms of how we move that. But you know, to your point, having a partner network that we do have around by the hardware and software only on and is right, so learn what were wrong. You know what they're hearing from their pot from their clients and, you know, it allows us to more easily and quickly address needs across all of the IBM client set. >>So we interview a lot of partners, and, you know, when you talk to the familiar, they've got to make money. They have. The margin is very important to them, but it's almost it's table stakes. I mean, again, they can make money a lot of different ways. So what differentiates the suppliers is all these other things that you're talking about? Um, So I want to ask you when you came in to this this role, what you're doing priorities in terms of, you know, partner outreach, retaining that, that loyalty And what do you see changing a za result of this pandemic? >>Yeah, it's a great question. So look, four key priorities that we declared very early on and, by the way, you know, took over from John Touched at the time. And John has spent the last two years really transforming, um, your channel and the way we engage with channel. And so there was a lot of hard lifting that was already done, but it was sort of four. Things that we focused in on one was obviously, how do we continue? Accelerate IBM drive into the hybrid multi cloud market, particularly now with the integration of Red Hat into the organization. That's a very different, you know, sales motion that Wei had so accelerating that was one of the key parties, the 2nd 1 waas. And how do we continue to differentiate on the value and so ensuring that that our programs are staying up to speed and that they're being modernized? You know, the IBM possible program is being a program predominantly built on Recile over the last 10 years. Now the microchip that we're now talking about platforms not talking about consumption. And this week during Partner World, we're gonna talk about how we are going to evolve the part of the program to move into the rest partners who are building on platforms. And how about they're moving to consumption again, all around hybrid, multi, multi cloud. That's kind of the second thing skills, skills and expertise for out for a channel. We kind of have declared that we want our channel to be the most skilled channel in the industry, and it's really interesting, Dave, during this period of the pandemic, it's one of these times where we seem to have more recent and more time, and the partners have been giving us a lot of feedback to say during this time around. Workforce is home and is connected digitally. Why don't we? Wasn't IBM help with in enhancing the enablement programme certification? And so we're doing a lot around that. We see a Z great opportunity to CIO to really develop certifications and skills and expertise during this period. Um, and then the full thing is around winning in what we call out selective segments. And so we want our partners to operate across the IBM portfolio and across our client set. But where we really need the help and where we're putting the money in the programs is around the mid size organizations where they can bring the portfolio into places that it doesn't have this today, new clients or existing clients with with IBM. But the Jason like server was that kind of the four priorities and what we're seeing is and this situation that we're going through this pandemic going through, it's actually accelerating the areas around moving. My partner multi cloud cloud is becoming a differentiator for us and accelerating. I need to get a program that is relevant beyond just resell. But you know this, this concept of platforms and building. But as they build with their own light beyond platform and consumption, So I think it's it's actually accelerating what we've seen and have it moving forward. >>It's interesting what you're saying about resale. We've talked many, many years now on the Cube about the partner ecosystem. It really used to be about resale. You know, we have a majority of his box selling, and you could make a lot of money doing that, you know, a decade or two ago. But when Cloud came, partners really started to underst and that that there was a sea change happening in I T. For a while there, they thought, Wow, you know, this is really going to be challenging. Cloud's going to kill us. But what they realized after a while is with five exactly complex hybrid Cloud is it's not simple to cure and create a seamless experience across clouds on prime etcetera. So the huge opportunities open up, add value. So there's been a massive change in the mindset. Uh, and it sounds like particularly with digital, that the pandemic is going to accelerate that on. People are going to come out of this, um, almost having done some exercises, maybe in a little bit better shape than they came into it. You buy that premise? >>My question is no question about it. I mean, if you think about, you know, IBM portfolio for a minute. Um, and over the last really 6 to 9 months, we have containerized out our software portfolio. It's based on the, you know, the kubernetes container ization and an open ship. So we're ready from a portfolio perspective. And, you know, now we're catching up from a program perspective we're introducing this week and out in the world this concept of a build program and a service program, and so that is their will preserve and continue to evolve the cell program that resell. But you know this concept of the build program and the service programs and only extend the reach that we have to the data and the ecosystem that we're operating in new sets of partners. But is that one of transition, that business from recent consumption? We're going to support that. But then you have to your point around this whole digital everything from digital capabilities around, generating amount of opportunity and a little bit about that earlier with my program and the funding that we've got behind that the experience that we're we're offering as consultants, but also this concept of digital selling. You know, there's not all about partners are savvy around digital selling. So we've been doing that for many, many years. And, uh and so we're opening up digital selling Enablement sessions, Webinars consultancy and a bunch of assets that that IBM has and has invested in for many, many years and opening that up to you want to add channel? >>Yes, There was some great opportunities there for our partners. I mean, we The Cube has been covering the Red Hat Summit we had Jim Whitehurst on. We're in the process of scheduling Arvin so great to see, you know, kind of connect the dots between those franchises and identify the opportunities, and they're significant. I mean, Red Hat has a lot of momentum in the market. IBM has a huge presence, great opportunity to modernize applications, And then your point about the hardware side we just saw on IBM s latest earnings, released at the Fisher running in hardware right now on, uh, you know, obviously tailwind of the Z cycle, but other parts of the portfolio storage from 19%. So so some exciting times for partners, even though there's so much uncertainty in the market again, staying close to customers, you know, doing doing right by your employees, leveraging the IBM relationship where you're obviously providing a lot of backdrop in support. David, I wonder if you could just sort of wrap a bow around. You know, think 2020 is the virtual trucks are pulling away away from the virtual digital Mosconi. What's the take? Aways will give us the bumper sticker. >>Look, the bumper sticker is that it's never been a better time to be an idea. We've got a leading portfolio that is now ready for the new world. Will the consumption and the world building on um, we are, you know, we're modernizing our programs to ensure that you can make money here. There's a lot of money to be made as we as we get into this thing, this new world and we are behind you right now to support you financially and to get you develop digitally enabled guy. So never been a time to be an IBM partner right now. >>David. Great message. Thank you very much for coming on the Cube. And best of luck to you. Stay safe and ah, again, really appreciate your time. >>You too, Dave. Thanks very much. Bye. Site. >>You know, uh, and you're watching the Cube here at IBM? Think 2020. Our digital coverage. We'll be right back right after this short break. I'm Dave Volante, and you're watching the Cube? >>Yeah, yeah, yeah.

Published Date : May 5 2020

SUMMARY :

Think brought to you by IBM. Who's the general manager of the IBM Great to be here. How are you communicating to them? So don't worry about you know, whether it's, you know, pass on, you know, lease terms, etcetera and provide But you know, to your point, having a partner network that we do have around So we interview a lot of partners, and, you know, when you talk to the familiar, they've got to make money. on and, by the way, you know, took over from John Touched at the time. You know, we have a majority of his box selling, and you could make a lot of money doing that, Um, and over the last really 6 to 9 months, in the market again, staying close to customers, you know, doing doing right by your employees, There's a lot of money to be made as we as we get into this thing, this new world and we are And best of luck to you. You know, uh, and you're watching the Cube here at IBM?

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
David LarosePERSON

0.99+

DavidPERSON

0.99+

IBMORGANIZATION

0.99+

Dave VolantePERSON

0.99+

JohnPERSON

0.99+

MinneapolisLOCATION

0.99+

David La RosePERSON

0.99+

IBM Global FinanceORGANIZATION

0.99+

19%QUANTITY

0.99+

Palo AltoLOCATION

0.99+

DavePERSON

0.99+

100%QUANTITY

0.99+

Jim WhitehurstPERSON

0.99+

first dollarQUANTITY

0.99+

BostonLOCATION

0.99+

2020DATE

0.99+

first yearQUANTITY

0.99+

half a percentQUANTITY

0.99+

JasonPERSON

0.99+

30QUANTITY

0.98+

threeQUANTITY

0.98+

secondQUANTITY

0.98+

6QUANTITY

0.98+

second quarterDATE

0.98+

2ndQUANTITY

0.98+

this weekDATE

0.98+

FisherORGANIZATION

0.98+

IBM Partner EcosystemORGANIZATION

0.97+

first moveQUANTITY

0.97+

ArvinPERSON

0.97+

earlier this weekDATE

0.97+

fiveQUANTITY

0.96+

OneQUANTITY

0.96+

Corona virusOTHER

0.96+

21,000 active plannersQUANTITY

0.96+

9 monthsQUANTITY

0.95+

Red HatTITLE

0.95+

Red Hat SummitEVENT

0.94+

oneQUANTITY

0.94+

WeiPERSON

0.94+

The Cube StudiosORGANIZATION

0.94+

pandemicEVENT

0.93+

RecileORGANIZATION

0.92+

todayDATE

0.92+

a decade orDATE

0.9+

three big concernsQUANTITY

0.9+

CubeCOMMERCIAL_ITEM

0.89+

last 10 yearsDATE

0.87+

50 50QUANTITY

0.85+

Think 2020ORGANIZATION

0.81+

last two yearsDATE

0.77+

Red HatORGANIZATION

0.76+

two agoDATE

0.75+

John TouchedPERSON

0.74+

IBM Think 2020ORGANIZATION

0.74+

about 25QUANTITY

0.73+

four keyQUANTITY

0.68+

The CubeORGANIZATION

0.65+

fourQUANTITY

0.64+

MosconiORGANIZATION

0.63+

number twoQUANTITY

0.58+