Jerry Gupta, Swiss Re & Joe Selle, IBM | IBM CDO Summit 2019
>> Live from San Francisco, California. It's theCUBE, covering the IBM Chief Data Officer Summit. Brought to you by IBM. >> We're back at Fisherman's Wharf at the IBM CDO conference. You're watching theCUBE, the leader in live tech coverage. My name is Dave Volante, Joe Selle is here. He's the Global Advanced Analytics and Cognitive Lead at IBM, Boston-based. Joe, good to see you again. >> You too, Dave. >> And Jerry Gupta, the Senior Vice President and Digital Catalyst at the Swiss Re Institute at Swiss Re, great to see you. Thanks for coming on. >> Thank you for having me, Dave. >> You're very welcome. So Jerry, you've been at this event now a couple of years; we've been here I think the last four or five years. This event goes back 10 years now, and 10 years ago it was kind of before the whole big data meme took off. There was a lot of focus, I'm sure, on data quality and data compliance, and all of a sudden data became the new source of value. And then we rolled into digital transformation. But from your perspective, how have things changed? Maybe the themes over the last couple of years, how have they changed? >> I think, from a theme perspective, I would frame the question a little bit differently, right? For me, this conference is a must-have on my calendar, because it's very relevant. The topics are very current. So two years ago, when I first attended this conference, it was about cyber, and when we went out in the market, there were not too many companies talking about cyber. And so you come to a place like this and you're sort of blown away by the depth of knowledge that IBM has, the statistics that you guys did a great job presenting. And that really helped us inform ourselves about the cyber risks that were going on. So to evolve that a little bit, the consistent theme is that it's relevant, it's topical. The other thing that's very consistent is that you always learn something new. 
The struggle with large conferences like this is that sometimes it becomes a lot of me-too environment. But at the conferences that IBM organizes, the CDO summits in particular, I always learn something new, because they do a really good job curating the practitioners. >> And Joe, this has always been an intimate event. You do 'em in San Francisco and Boston, it's a couple hundred people, kind of belly-to-belly interactions. So that's kind of nice. But how do you scale this globally? >> Well, I would say that is the key question, 'cause I think the AI algorithms and the machine learning have been proven to work. And we've infiltrated that into all of the business processes at IBM, and in many of our client companies. But we've been doing proofs of concept and small applications, and maybe there's a dozen or 50 people using it. But the theme now is around scale, AI at scale. How do you do that? Like, we have a remit at IBM to get 100,000 IBMers, that's the real number, onto our Cognitive Enterprise Data Platform by the end of this calendar year, and we're making great progress there. But that's the key question, how do you do that? And it involves cultural issues of teams and business process owners being willing to share the data, which is really key. And it also involves technical issues around cloud computing models: hybrid public and private clouds, multi-cloud environments where we know we're not the only game in town. So there's a Microsoft Cloud, there's an IBM Cloud, there's another cloud. And all of those clouds have to be woven together in some sort of a multi-cloud management model. So that's the techie geek part. But the cultural change part is equally challenging and important, and you need both to get to 100,000 users at IBM. >> You know guys, what this conversation brings into focus for me is that for decades, we've marched to the cadence of Moore's Law as the innovation engine for our industry, and that feels like just so yesterday. 
Today, it's like you've got this data bedrock that we built up over the last decade. You've got machine intelligence, or AI, that you can now apply to that data. And then for scale, you've got cloud. And there's all kinds of innovation coming in. Does that sort of innovation cocktail or sandwich make sense in your business? >> So there's the innovation piece of it, which is new and exciting, the shiny new toy. And that's definitely exciting, and we definitely tried that. But from my perspective and the perspective of my company, it's not the shiny new toy that's attractive, or that really moves the needle for us. It is the underlying risk. So if you have the shiny new toy of an autonomous vehicle, what mayhem is it going to cause, right? What are the underlying risks? That's what we are focused on. And Joe alluded to AI and algorithms and stuff. And it's clearly starting to become a very big topic globally. People are even starting to talk about the risks and dangers inherent in algorithms and AI. And for us, that's an opportunity that we need to study more, look into deeply, to see if this is something that we can help address and solve. >> So you're looking for blind spots, essentially. And one of them is this sort of algorithmic risk. Is that the right way to look at it? I mean, how do you think about the risk of algorithms? >> So yeah, algorithmic risk would be what I would call a blind spot; I think that's a really good way of saying it. We look not just at blind spots, the risks that we don't even know we are facing; we also look at known risks, right? We are one of the largest reinsurers in the world. And we insure, well, you name a risk, we reinsure it, right? So your auto risk, your catastrophe risk, you name it, we probably have some exposure to it. The blind spots, as you call them, arise any time you create something new; there are pros and cons. The shiny new toy is the pro. 
What risks, what damage, what liability can result therein? That's the piece that we're starting to look at. >> So Joe, you've got these potentially unintended consequences of algorithms. So how do you address that? Is there a way in which you've thought through some kind of oversight of the algorithms? Maybe you could talk about IBM's point of view there. >> Well, we have-- >> Yeah, and that's a fantastic and interesting conversation that Jerry and I are having together on behalf of our organizations: IBM knowing in great detail how these AI algorithms work and are built and are deployed, Jerry and his organization knowing the bigger risk picture and how you understand, predict, remediate and protect against the risk so that companies can happily adopt these new technologies and put them everywhere in their business. So the name of the game is really understanding how, as we all move towards a digital enterprise with big data streaming in, in every format, we use AI to modify the data and to train the models, and then we set some of the models up as self-training. So they're learning on their own. They're enhancing data sets. And once we turn them on, we can go to sleep, so they do their own thing, then what? We need a way to understand how these models are producing results. Are they results that we agree with? Are these self-training algorithms like railroad trains going off the track? Or are they still on the track? So we want to monitor, understand and remediate, but it's at scale again, per my earlier comments. So you might be an organization that has 10,000 models at work. You can't watch those. >> So you're looking at the intersection of risk and machine intelligence, and then, if I understand it correctly, you're applying AI, what I call machine intelligence, to oversee the algorithms. Is that correct? >> Well, yes, and you could think of it as an AI, watching over the other AI. 
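The conversation doesn't spell out how such an AI-watching-AI monitor would be built, but a minimal sketch of one common ingredient, drift detection on a production model's outputs, might look like the following. All model names, scores, and the threshold here are illustrative assumptions, not IBM's actual tooling.

```python
from statistics import mean, stdev

def drift_score(baseline, recent):
    """Crude drift measure: shift in the mean output of a model,
    expressed in units of the baseline standard deviation."""
    return abs(mean(recent) - mean(baseline)) / stdev(baseline)

def check_models(models, threshold=3.0):
    """Flag every model whose recent scores have drifted from baseline.

    `models` maps a model name to (baseline_scores, recent_scores).
    Returns the names that need human review -- the railroad trains
    that may have gone off the track, in the conversation's terms.
    """
    flagged = []
    for name, (baseline, recent) in models.items():
        if drift_score(baseline, recent) > threshold:
            flagged.append(name)
    return flagged

# Illustrative use: one stable model, one that has wandered.
models = {
    "credit-risk":   ([0.50, 0.52, 0.48, 0.51, 0.49],
                      [0.50, 0.49, 0.51, 0.52, 0.48]),
    "claims-triage": ([0.50, 0.52, 0.48, 0.51, 0.49],
                      [0.90, 0.92, 0.88, 0.91, 0.89]),
}
print(check_models(models))  # ['claims-triage']
```

A production monitor would compare full distributions rather than means, but the shape is the same: a second, simpler process continuously scoring the first.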
That's really what we have, 'cause we're using AI as we envision what might or might not be the future. It's an AI, and it's watching other AI. >> That's kind of mind-blowing. Jerry, you mentioned autonomous vehicles before; that's obviously a potential disruptor to your business. What can you share about how you guys are thinking about that? I mean, a lot of people are skeptical, like there's not enough data; every time there's another accident, they'll point to that. What's your point of view on that? From your corporation's standpoint, are you guys thinking it's near term, mid term, very long term? Or is it sort of this journey, where there's quasi-autonomous that sort of gets us there? >> So, on autonomous vehicles or algorithmic risk? >> On autonomous vehicles. >> So, the journey towards full automation is a series of continuous steps, right? So it's a continuum, and to a certain extent, we are in a space now where, even though we may not have full autonomy while we're driving, there is significant feedback and signals that a car provides and acts on, or not, in an automated manner that eventually moves us towards full autonomy, right? So for example, the anti-lock braking system. That's a component of that, right? It prevents the car from skidding out of control. So if you're asking for a time horizon for when it might happen, yeah, at our previous firm we had done some analysis, and the horizons were as aggressive as 15 years to as conservative as 50 years. But the component that we all agreed on, where there was not such a wide range, was that vehicles are becoming more sophisticated; not just cars but any automobile or truck, they're becoming more automated. Where does risk lie at each piece of the value chain, right? And the answer is different if you look at commercial versus personal. If you look at the commercial space, autonomous fleets are already on the road. >> Right. >> Right? 
And so the question then becomes, where does liability lie? Owner, manufacturer, driver-- >> Shared model-- >> Shared, manual versus automated mode, conditions of driving, what decisions the algorithm is making when, you know, the physics don't allow you to avoid an accident. Who do you end up hitting? (crosstalk) >> Again, not just a technology problem. Now, last thing is, you guys are doing a panel on wowing customers, making the customer the king, I think, is what the title of it is. What's that all about? Can you get into that a little bit? >> Sure. Well, we focus as IBM mostly on a B2B framework. So the example that I'll share with you, somewhere between making a customer or a client the king, is that we're using some of our AI to create an alert system that we call Operations Risk Insights. We've been giving this away to nonprofit relief agencies, who can deploy it around a geo-fenced area like, say, North Carolina and South Carolina. And if you're a relief agency providing flood relief or services to people affected by floods, you can use our solution to understand the magnitude and the potential damage impact from a storm. We can layer up a map with not only normal geospatial information but socio-economic data. So say I'm the relief agency and I've got a huge storm coming in and I can't cover the entire two-state area. I can say, okay, show me the areas where the population density is greater than 1,000 per square kilometer and the socio-economic level is lower than a certain point, because those are the people that don't have a lot of resources, can't move, and are going to shelter in place. So I want to know that, because they need my help. >> That's where the risk is. Yeah, right, they can't get out. >> And we use AI to do that. Those are happy customers, and I've delivered wow to them. >> That's pretty wow, that's right. 
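The layered-map query Joe describes (population density above one cutoff, socio-economic level below another, within a geo-fenced storm zone) boils down, in the simplest terms, to filtering a table of areas on two thresholds. Here is a hedged sketch; the area names, figures, and the income-index scale are invented for illustration and are not from the actual tool.

```python
def prioritize_areas(areas, min_density=1000, max_income_index=0.4):
    """Return the areas a relief agency should focus on first:
    densely populated and low on the socio-economic scale, i.e.
    people likely to shelter in place rather than evacuate."""
    return [
        a["name"]
        for a in areas
        if a["density_per_km2"] > min_density
        and a["income_index"] < max_income_index
    ]

# Hypothetical geo-fenced tracts in a two-state storm zone.
areas = [
    {"name": "coastal-tract-12", "density_per_km2": 1800, "income_index": 0.25},
    {"name": "suburb-tract-07",  "density_per_km2": 1400, "income_index": 0.70},
    {"name": "rural-tract-31",   "density_per_km2": 120,  "income_index": 0.30},
]
print(prioritize_areas(areas))  # ['coastal-tract-12']
```

The real system layers these filters over live storm-track and geospatial data; the thresholding step itself is this simple.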
Jerry, anything you would add to that sort of wow customer experience? >> Yeah, absolutely. So we are a B2B company as well. >> Yeah. >> And so the span of interaction is dictated by that piece of our business. And so we try to create wow either by making our customers' lives easier, providing tools and technologies that help them do their jobs better, cheaper, faster, more efficiently, or by helping create or modify products such that they accomplish the former, right? So, Joe mentioned the product that you launched. We have what we call parametric insurance, and we are one of the pioneers in the field. And so we've launched three products in that area: for earthquakes, for hurricanes, and for flight delay. And so, for example, our flight delay product is really unique in the market, where we are able to insure a traveler for flight delays. And then, if there is a flight delay event that exceeds a pre-established threshold, the customer gets paid without even having to file a claim. >> I love that product; I want to learn more about that. You can say (mumbles), but then it's not a wow experience for the customer, nobody's happy. So that's for Jerry. Guys, we're out of time. We're going to leave it there, but Jerry, Joe, thanks so much. >> We could go on, Dave, but thank you. >> Let's do that down the road. Maybe have you guys in Boston in the fall? It'll be great. Thanks again for coming on. >> Thanks, Dave. >> All right, keep it right there everybody. We'll be back with our next guest. You're watching theCUBE live from the IBM CDO Summit in San Francisco. We'll be right back. (upbeat music)
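Jerry describes the flight delay product only at the level of its trigger: once an observed delay exceeds a pre-established threshold, payment happens automatically, with no claim filed. That parametric mechanism can be sketched in a few lines; the threshold and payout amounts below are invented for illustration and are not Swiss Re's actual contract terms.

```python
def parametric_payout(delay_minutes, threshold_minutes=120, payout=200):
    """Parametric insurance trigger: the payout depends only on the
    observed index (here, flight delay in minutes), not on a filed
    claim or an assessed loss. At or below the threshold, nothing
    is owed; above it, the fixed payout is owed automatically."""
    return payout if delay_minutes > threshold_minutes else 0

# Fed by a flight-status feed, payment would be initiated on its own.
print(parametric_payout(45))   # 0
print(parametric_payout(180))  # 200
```

The design point is that the trigger is objective and externally observable, which is what removes the claims process entirely.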
Joe Selle & Tom Ward, IBM | IBM CDO Fall Summit 2018
>> Live from Boston, it's theCUBE! Covering IBM Chief Data Officer Summit, brought to you by IBM. >> Welcome back everyone to the IBM CDO Summit and theCUBE's live coverage. I'm your host Rebecca Knight, along with my co-host Paul Gillin. We have Joe Selle joining us. He is the Cognitive Solution Lead at IBM. And Thomas Ward, Supply Chain Cloud Strategist at IBM. Thank you so much for coming on the show! >> Thank you! >> Our pleasure. >> Pleasure to be here. >> So, Tom, I want to start with you. You are the author of Risk Insights. Tell our viewers a little bit about Risk Insights. >> So Risk Insights is an AI application. We've been working on it for a couple years. What's really neat about it, it's the coolest project I've ever worked on. It gets a massive amount of data from The Weather Company; we're one of the biggest consumers of data from The Weather Company. We take that and we visualize who's at risk from things like hurricanes and earthquakes: things like IBM sites and locations, or suppliers. And we basically notify them in advance when those events are going to impact them, and it ties to both our data center operations activity as well as our supply chain operations. >> So you reduce your risk, your supply chain risk, by being able to proactively detect potential outages. >> Yeah, exactly. So we know in some cases two or three days in advance who's in harm's way, and we're already looking at it and trying to mitigate those risks if we need to, if it's going to be a real serious event. So Hurricane Michael, Hurricane Florence, we were right on top of them and said we've got to worry about these suppliers, these data center locations, and we're already working on that in advance. >> That's very cool. So, I mean, what about clients and customers? There's got to be, as you said, it's the coolest project you've ever worked on. >> Yeah. So right now, we use it within IBM, right? 
And we use it to monitor some of IBM's client locations, and in the future, well, there was something called the Call for Code that happened recently within IBM, and this project was a semifinalist for that. So we're now working with some non-profit groups to see how they could also avail themselves of it, looking at things like hospitals and airports and those types of things as well. >> What other AI projects are you running? >> Go ahead. >> I can answer that one. I just wanted to say one thing about Risk Insights, which didn't come out from Tom's description, which is that one of the other really neat things about it is that it provides alerts, smart alerts, out to supply chain planners. And the alert will go to a supply chain planner if there's an intersection of an IBM supplier and the path of a hurricane. If the hurricane is vectored to go over that supplier, the supply chain planner that is responsible for those parts will get some forewarning to either start to look for another supplier or make some contingency plans. And the other nice thing about it is that it launches what we call a Resolution Room. And the Resolution Room is a virtual meeting place where people all over the globe who are somehow impacted by this event can collaborate, share documents, and have a persistent place to resolve this issue. And then, after that's all done, we capture all the data from that issue and the resolution, and we put that into a body of knowledge, and we mine that knowledge for a playbook the next time a similar event comes along. So it's a full-- >> It becomes machine learning. >> It's a machine learning-- >> Sort of data source. >> It's a full soup-to-nuts solution that gets smarter over time.
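The alert condition Joe describes, a supplier location intersecting the forecast path of a hurricane, reduces to a distance test between each supplier and each point on the forecast track. A simplified sketch using great-circle distance follows; the supplier names, coordinates, and alert radius are illustrative assumptions, not details from the actual Risk Insights system.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def suppliers_at_risk(suppliers, track, radius_km=150):
    """Return suppliers within `radius_km` of any point on the storm's
    forecast track -- each hit would trigger a smart alert to the
    supply chain planner responsible for that supplier's parts."""
    at_risk = []
    for name, (slat, slon) in suppliers.items():
        if any(haversine_km(slat, slon, tlat, tlon) <= radius_km
               for tlat, tlon in track):
            at_risk.append(name)
    return at_risk

# Hypothetical suppliers and a forecast track skirting the Carolinas.
suppliers = {"raleigh-fab": (35.78, -78.64),
             "denver-assembly": (39.74, -104.99)}
track = [(33.9, -78.0), (34.8, -78.5), (35.6, -79.0)]
print(suppliers_at_risk(suppliers, track))  # ['raleigh-fab']
```

A real system would weight by forecast uncertainty and lead time, but the core intersection test is this kind of geometry.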
>> Yes, so in Risk Insights, we know that out of a thousand events that occurred, there were 25 in the last year that were really the ones we needed to identify and mitigate against. And among those, we know there have been circumstances in the past where IBM has had millions of dollars of losses. By being more proactive, we're really minimizing that amount. >> That's incredible. So you were going to talk about other kinds of AI that you run. >> Right, so Tom gave an overview of Risk Insights, and we tied it to supply chain and to monitoring the uptime of our customer data centers and things like that. But our portfolio of AI is quite broad. It really covers most of the middle, back and front office functions of IBM. So we have things in the sales domain, the finance domain, the HR domain, you name it. One of the ones that's particularly interesting to me of late is in the finance domain: monitoring accounts receivable and DSO, days sales outstanding. For a company like IBM, with multiple billions of dollars of revenue, a change of even one day of days sales outstanding provides a gigantic benefit to the bottom line. So we have been integrating disparate databases across the business units and geographies of IBM, pulling that customer and accounts receivable data into one place, where our CFO can look at an integrated approach towards our accounts receivable. We know where the problems are, and we're going to use AI and other advanced analytic techniques to determine the best treatment for those customers who, according to our predictive models, are at risk of not making their payments on time or pose some other sort of financial risk. So we can integrate a lot of external unstructured data with our own structured data around customers, around accounts, and pull together a story around AR that we've never been able to pull before. That's very impactful. >> So speaking of unstructured data, I understand that data lakes are part of your AI platform. 
How so? >> For example, for Risk Insights, we're monitoring hundreds of trusted news sources at any given time. So we know not just where the event is and what locations are at risk, but also what's being reported about it. We monitor Twitter reports about it, and we monitor trusted news sources like CNN or MSNBC, on a global basis. So it gives our risk analysts not just a view of where the event is, where it's located, but also what's being said: how severe it is, how big are those tidal waves, how big was the storm surge, how many people were affected. By applying some of the machine learning insights to these, now we can say, well, if there are a couple hundred thousand people without power, then it's very likely there is going to be multimillions of dollars of impact as a result. So we're now able to correlate those news reports with the magnitude of impact and the potential financial impact to the businesses that we're supporting. >> So the idea being that IBM is saying, look what we've done for our own business (laughs), imagine what we could do for you. As Inderpal has said, it's really using IBM as its own test case and trying to figure this all out and learning as it goes, and he said, we're going to make some mistakes, we've already made some mistakes, but we're figuring it out so you don't have to make those mistakes. >> Yeah, that's right. I mean, if you think about the long history of this, we've been investing in AI really since, depending on how you look at it, the days of the 90s, when we were doing Deep Blue and we were trying to beat Garry Kasparov at chess. Then we did another big, huge push on the Jeopardy program, where we innovated around natural language understanding, speed and scale of processing, and the probability of correctness of answers. And then we kind of carried that right through to the current day, where we're now proliferating AI across all of the functions of IBM. 
And there, then, connecting to your comment, Inderpal's comment this morning was around let's just use all of that for the benefit of other companies. It's not always an exact fit, it's never an exact fit, but there are a lot of pieces that can be replicated and borrowed, either people, process or technology, from our experience, that would help to accelerate other companies down the same path. >> One of the questions around AI though is, can you trust it? The insights that it derives, are they trustworthy? >> I'll give a quick answer to that, and then Tom, it's probably something you want to chime in on. There's a lot of danger in AI, and it needs to be monitored closely. There's bias that can creep into the datasets because the datasets are being enhanced with cognitive techniques. There's bias that can creep into the algorithms and any kind of learning model can start to spin on its own axis and go in its own direction and if you're not watching and monitoring and auditing, then it could be starting to deliver you crazy answers. Then the other part is, you need to build the trust of the users, because who wants to take an answer that's coming out of a black box? We've launched several AI projects where the answer just comes out naked, if you will, just sitting right there and there's no context around it and the users never like that. So we've understood now that you have to put the context, the underlying calculations, and the assessment of our own probability of being correct in there. So those are some of the things you can do to get over that. But Tom, do you have anything to add to that? >> I'll just give an example. When we were early in analyzing Twitter tweets about a major storm, what we've read about was, oh, some celebrity's dog was in danger, like uh. (Rebecca laughs) This isn't very helpful insight. >> I'm going to guess, I probably know the celebrity's dog that was in danger. (laughs) >> (laughs) actually stop saying that. 
So we learned how to filter those things out and identify the meaningful keywords that we need to extract and can really then draw conclusions from. >> So is Kardashian a meaningful word, (all laughing) I guess that's the question. >> Trending! (all laughing) >> Trending now! >> I want to follow up on that, because as an AI developer, what responsibility do developers have to show their work, to document how their models have worked? >> Yes, so all of the information that we provide the users draws back to, here's the original source, here's where the information was taken from, so we can draw back on that. And that's an important part of having a cognitive enterprise data platform where all this information is stored, 'cause then we can refer to that and go deeper as well, and we can analyze it further after the fact, right? You can't always respond in the moment, but once you have those records, that's how you can learn from it for the next time around. >> I understand that in some cases, particularly in deep learning, it's very difficult to build reliable test models. Is that true, and what progress is being made there? >> In our case, we're in the machine learning dimension; we're not all the way into deep learning yet in the project that I'm involved with right now. But one reason we're not there is 'cause you need to have huge, huge, vast amounts of robust data and that trusted dataset from which to work. So we aspire towards and we're heading towards deep learning. We're not quite there yet, but we've started with machine learning insights and we'll progress from there. >> And one of the interesting things about this AI movement overall is that it's filled with very energetic people; there's kind of a hacker mindset to the whole thing. 
So people are grabbing and running with code, they're using a lot of open source, and there's a lot of integration of a black box from here, from there and the other place, which all adds to the risk of the output. So that comes back to the original point, which is that you have to monitor, you have to make sure that you're comfortable with it. You can't just let it run on its own course without really testing it to see whether you agree with the output. >> So what other best practices? There's the monitoring, but at the same time, that hacker culture, that's not all bad. You want people who are energized by it, and you are trying new things and experimenting. So how do you make sure you let them have sort of enough rein but not free rein? >> I would say, what comes to mind is, start with the business problem that's a real problem. Don't make this an experimental data thing. Start with the business problem. Develop a POC, a proof of concept. Small, and here's where the hackers come in. They're going to help you get it up and running in six weeks as opposed to six months. And then once you're at the end of that six-week period, maybe you design one more six-week iteration, and then you know enough to start scaling it, and you scale it big. So you've harnessed the hackers, the energy, the speed, but you're also testing, making sure that it's accurate, and then you're scaling it. >> Excellent. Well, thank you Tom and Joe, I really appreciate it. It's great to have you on the show. >> Thank you! >> Thank you, Rebecca, for the spot. >> I'm Rebecca Knight, for Paul Gillin; we will have more from the IBM CDO Summit just after this. (light music)
SUMMARY :
brought to you by IBM. Thank you so much for coming on the show! You are the author of Risk Insights. consumers of data from the weather company. So you reduce your risk, your supply chain risk, and trying to mitigate those risks if we need to, as you said, it's the coolest project you've ever worked on? and in the future we're actually, there was something called from that issue and the resolution and we put that It's a full soup to nuts solution the ones we needed to identify and mitigate against. So you were going to talk about other kinds of AI that you run. and we know where the problems are, and we're going to use AI So speaking of unstructured data, So we know, not just where the event is, So the idea being that IBM is saying, all of that for the benefit of other companies. and any kind of learning model can start to spin When we were early in analyzing Twitter tweets I'm going to guess, I probably know the celebrity's dog So we learned how to filter those things out I guess that's the question. and we can analyze it further after the fact, right? to build reliable test models. and that trusted dataset from which to work. So that comes back to the original point which is that but at the same time you do that hacker culture, and then you know enough to start scaling it It's great to have you on the show. Rebecca, for the spot. we will have more from the IBM CDO summit just after this.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Paul Gillin | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Tom | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Joe Selle | PERSON | 0.99+ |
Joe | PERSON | 0.99+ |
Rebecca | PERSON | 0.99+ |
Thomas Ward | PERSON | 0.99+ |
Garry Kasparov | PERSON | 0.99+ |
six weeks | QUANTITY | 0.99+ |
six-week | QUANTITY | 0.99+ |
Tom Ward | PERSON | 0.99+ |
MSNBC | ORGANIZATION | 0.99+ |
25 | QUANTITY | 0.99+ |
CNN | ORGANIZATION | 0.99+ |
six months | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
last year | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
three days | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
multimillions of dollars | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
Risk Insights | TITLE | 0.97+ |
Kardashian | PERSON | 0.97+ |
Deep Blue | TITLE | 0.97+ |
hundreds of trusted news sources | QUANTITY | 0.97+ |
one day | QUANTITY | 0.96+ |
one | QUANTITY | 0.95+ |
One | QUANTITY | 0.95+ |
one reason | QUANTITY | 0.95+ |
IBM CDO Summit | EVENT | 0.95+ |
couple hundred thousand people | QUANTITY | 0.92+ |
IBM CDO Fall Summit 2018 | EVENT | 0.91+ |
Risk Insights | ORGANIZATION | 0.86+ |
90's | DATE | 0.86+ |
Hurricane Florence | EVENT | 0.86+ |
Hurricane Michael | EVENT | 0.85+ |
millions of dollars | QUANTITY | 0.84+ |
this morning | DATE | 0.83+ |
one place | QUANTITY | 0.82+ |
IBM Chief Data Officer Summit | EVENT | 0.81+ |
billions of dollars | QUANTITY | 0.8+ |
Inderpal | PERSON | 0.77+ |
Inderpal | ORGANIZATION | 0.75+ |
One of | QUANTITY | 0.71+ |
thousand of events | QUANTITY | 0.68+ |
Risk | ORGANIZATION | 0.68+ |
CDO | EVENT | 0.59+ |
questions | QUANTITY | 0.56+ |
waves | EVENT | 0.56+ |
theCUBE | ORGANIZATION | 0.34+ |
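The entity table above is the output of a named-entity recognition pass over the transcript, sorted by the tagger's confidence. As a rough sketch of how such a table is rendered (the tuples and the `entity_table` helper below are invented for illustration; this is not the actual pipeline behind these pages):

```python
# Render NER output as a markdown-style entity table like the one above.
# The (entity, category, confidence) tuples are hypothetical tagger output.
ents = [
    ("IBM", "ORGANIZATION", 0.99),
    ("Joe Selle", "PERSON", 0.99),
    ("Boston", "LOCATION", 0.99),
    ("theCUBE", "ORGANIZATION", 0.34),
]

def entity_table(entities):
    """Sort rows by confidence (descending) and format each as a table row."""
    rows = ["Entity | Category | Confidence |", "---|---|---|"]
    for name, cat, conf in sorted(entities, key=lambda e: -e[2]):
        rows.append(f"{name} | {cat} | {conf:.2f}+ |")
    return "\n".join(rows)

print(entity_table(ents))
```

Low-confidence rows (like theCUBE at 0.34 above) sort to the bottom, which matches the ordering in the generated tables.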
Joe Selle | IBM CDO Strategy Summit 2017
>> Announcer: Live from Fisherman's Wharf in San Francisco. It's theCUBE. Covering IBM Chief Data Officer Strategy Summit Spring 2017. Brought to you by IBM. >> Hey, welcome back everybody. Jeff Frick with theCUBE, along with Peter Burris from Wikibon. We are in Fisherman's Wharf in San Francisco at the IBM Chief Data Officer Strategy Summit Spring 2017. Coming to the end of a busy day, running out of steam. Blah, blah, blah. I need more water. But Joe's going to take us home. We're joined by Joe Selle. He is the global operations analytic solution lead for IBM. Joe, welcome. >> Thank you, thank you very much. It's great to be here. >> So you've been in sessions all day. I'm just curious to get kind of your general impressions of the event and any surprises or kind of validations that are coming out of these sessions. >> Well, the general impression is that everybody is thrilled to be here, and the participants, the speakers, the audience members all know that they're at the cusp of a moment in business history of great change. And that is as we graduate from regular analytics, which are descriptive and dashboarding, into the world of cognitive, which is taking the capabilities to a whole other level. Many levels, actually, advanced from the basic things. >> And you're in a really interesting position because IBM has accepted the charter of basically consuming your own champagne, drinking your own champagne, whatever expression you want to use. >> I'm so glad you said that 'cause most people say eating your dog food. >> Well, if we were in Germany we'd talk about beer, but you know, we'll stick with the champagne analogy. But really, trying not only to build and demonstrate the values that you're trying to sell to your customers within IBM, but then actually documenting it and delivering it, basically, it's called the blueprint, in October. We've already been told it's coming in October. So what a great opportunity.
Part of that is the fact that Ginni Rometty, our CEO, had her start in IBM in the consulting part of IBM, GBS, Global Business Services. She was all about consulting to clients and creating big change in other organizations. Then she went through a series of job roles and now she's CEO and she's driving two things. One is the internal transformation of IBM, which is where I am, part of my role is, I should say. Reporting to the chief data officer and the chief analytics officer, and their jobs are to accelerate the transformation of Big Blue into the cognitive era. And Ginni also talks about showcasing what we're doing internally for the rest of the world and the rest of the economy to see, because parts of this other companies can do. They can emulate our road map, the blueprint rather, sorry, that Inderpal introduced, which is going to be presented in the fall. That's our own blueprint for how we've been transforming ourselves, so some part of that blueprint is going to be valid and relevant for other companies. >> So you have a dual reporting relationship, you said. The chief data officer, which is this group, but also the chief analytics officer. What's the difference between the chief data officer and the chief analytics officer, and how does that combination drive your mission? >> Well, the difference really is the chief data officer is in charge of making some very long-term investments, including short-term investments, but let me talk about the long-term investment. Anything around an enterprise data lake would be considered a long-term investment. This is where you're creating an environment where users can go in, these would be internal to IBM or whatever client company we're talking about, where they can use some themes around self-service, get at this information, create analysis, everything's available to them. They can grab external data. They can grab internal data. They can observe Twitter feeds. They can look at Weather Company information.
In our case we get that because we're partnered with the Weather Company. That's the long-term vision of the chief data officer: to create a data lake environment that serves to democratize all of this for users within a company, within IBM. The chief analytics officer has the responsibility to deliver projects that are sort of the leading projects that prove out the value of analytics. So on that side of my dual relationship, we're forming projects that can deliver a result literally in a 10 or a 12 week time period. Or a half a year. Not a year and a half, but short term, and we're sprinting to the finish, we're delivering something. It's quite minimally scaled. The first project is always a minimally viable product or project. It's using as few data sources as we can and still getting a notable result. >> The chief analytics officer is at the vanguard of helping the business think about use cases, going after those use cases, framing problems the right way, finding data with effectiveness as well as efficiency, and leading the charge. And then the chief data officer is helping to accrete that experience and institutionalize it in the technology, the practices, the people, et cetera. So the business builds a capability over time. >> Yes, scalable. It's sort of an issue of whether it can scale. Once Inderpal and the chief data officer come into the equation, we're going to scale this thing massively. So, high volume, high speed, that's all coming from a data lake, and the early wins and the medium-term wins maybe will be more in the realm of the chief analytics officer. So on your first summary a second ago, you're right in that the chief analytics officer is going around, and the team that I'm working with is doing this, to each functional group of IBM. HR, Legal, Supply Chain, Finance, you name it, and we're engaging in cognitive discovery sessions with them. You know, what is your roadmap?
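Joe's minimally viable projects start from just a few data sources, for instance an internal operations table enriched with an external weather feed from the data lake. A toy sketch of that kind of small join (every table name, field, and figure here is made up for illustration, not IBM's actual data):

```python
# Minimal "few data sources" project: join internal shipment records with
# an external weather feed. All data below is invented for this sketch.
shipments = [
    {"day": "2017-03-01", "region": "Boston", "late": 4},
    {"day": "2017-03-02", "region": "Boston", "late": 11},
]
weather = {  # external feed keyed by (day, region)
    ("2017-03-01", "Boston"): {"snow_in": 0.0},
    ("2017-03-02", "Boston"): {"snow_in": 7.5},
}

def enrich(rows, feed):
    """Attach weather observations to each shipment record."""
    out = []
    for r in rows:
        obs = feed.get((r["day"], r["region"]), {})
        out.append({**r, **obs})
    return out

enriched = enrich(shipments, weather)
# A first, crude signal for the MVP: do late deliveries rise with snowfall?
print(enriched)
```

Even two sources joined this way can produce the "notable result" that justifies scaling the project up through the chief data officer's data lake.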
You're doing some dashboarding now, you're doing some first generation analytics or something, but what is your roadmap for getting cognitive? So we're helping to burst the boundaries of what their roadmap is, really build it out into something that was bigger than they had been conceiving of it. Adding the cognitive projects and then program managing this giant portfolio so that we're making some progress and milestones that we can report to various stakeholders like Ginni Rometty or Jim Kavanaugh, who are driving this from a senior executive standpoint. We need to be able to tell them, in one case, every couple of weeks, what have you gotten done. Which is a terrible cadence, by the way, it's too fast. >> So in many respects-- >> But we have to get there, every couple of weeks we've got to deliver another few nuggets. >> So in many respects, analytics becomes the capability and data becomes the asset. >> Yes, that's true. Analytics has assets as well though. >> Peter: Sure, of course. >> Because we have models and we have techniques, and we bake the models into a business process to make it real so people actually use it. It doesn't just sit over there as this really nifty science experiment. >> Right, but kind of where are we on the journey? It's really still early days, right? Because, you know, we hear all the time about machine learning and deep learning and AI and VR and all this stuff. >> We're patchy, every organization is patchy, even IBM, but I'm learning from being here, so this is end of day one, I'm learning. I'm getting a little more perspective on the fact that we at IBM are actually, 'cause we've been investing in this heavily for a number of years. I came up through the ranks in supply chain. We've been investing in these capabilities for six or seven years. We were some of the early adopters within IBM. But I would say that maybe 10% of the people at this conference are sort of in the category of I'm running fast and I'm doing things.
So that's 10%. Then there's maybe another 30% that are jogging or fast walking. And then there's the rest of them, so maybe 50%, if my math is right, it's been a long day, are kind of looking and saying, yeah, I've got to get that going at some point, and I have two or three initiatives, but I'm really looking forward to scaling it at some point. >> Right. >> I've just painted a picture to you of the fact that the industry in general is just starting this whole journey and the big potential is still in front of us. >> And then on the champagne. So you've got the cognitive, you've got the brute and then you've got the Watson. And you know, there's a lot of, from the outside looking in at IBM, there's a lot of messaging about Watson and a lot of messaging about cognitive. How do the two mesh, and do they mesh within some of the projects that you're working on? Or how should people think of the two of them? >> Well, people should know that Watson is a brand and there are many specific technologies under the Watson brand. So, and then, think of it more as capabilities instead of technologies. Things like being able to absorb unstructured information. So you've heard, if you've been to any conferences, whether they're analytics or data, any company, any industry, 80% of your data is unstructured and invisible, and you're probably working with 20% of your data on an active basis. So, do you want to go the 80%-- >> With 40% shrinking. >> As a percentage. >> That's true. >> As a percentage. >> Yeah, because the volumes are growing. >> Tripling in size but shrinking as a percentage. >> Right, right. So, just, you know, think about that. >> Is Watson really then kind of the packaging of cognitive, more specific application? Because there's Watson for Health or. >> I'll tell you, Watson is a mechanism and a tool to achieve the outcome of cognitive business. That's a good way to think of it. And the Watson capabilities that I was just about to get to are things like reading, if you will.
In Watson Health, it reads oncology articles, and once one of them has been read, it's never forgotten. And by the way, it can read 200 a week, and you can create the smartest doctor that there is on oncology. So a Watson capability is absorbing information, reading. It's, in an automated fashion, improving its abilities. So these are concepts around deep learning and machine learning. So the algorithms are either self-correcting or people are providing feedback to correct them. So there's two forms of learning in there. >> Right, right. >> But these are kind of capabilities all around Watson. I mean, there are so many more. Optical character recognition. >> Right. >> Retrieve and rank. >> Right. >> So giving me a strategy and telling me there's an 85% chance, Joe, that your best move right now, given all these factors, is to do x. And then I can say, well, x wouldn't work because of this other constraint which maybe the system didn't know about. >> Jeff: Right. >> Then the system will tell me, in that case, you should consider y, and it's still an 81% chance of success versus the first which was at 85. >> Jeff: Right. >> So retrieving and ranking, these are capabilities that we call Watson. >> Jeff: Okay. >> And we try to work those in to all the job roles. >> Jeff: Okay. >> So again, whether you're in HR, legal, intellectual property management, environmental compliance. You know, regulations around the globe are changing all the time. Trade compliance. And if you violate some of these rules and regs, then you're prohibited from doing business in a certain geography. >> Jeff: Right. >> It's devastating. The stakes are really high. So these are the kind of tools we want. >> So I'm just curious, from your perspective, you've got a corporate edict behind you at the highest level, and your customers, your internal customers, have that same edict to go execute quickly.
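Joe's retrieve-and-rank example, take the top recommendation with its confidence, then fall back to the next one when the user supplies a constraint the system didn't know about, can be sketched in a few lines (the actions and confidence scores below are invented; this is the 85%-to-81% conversation pattern, not Watson's actual ranking service):

```python
# Ranked recommendations with a user-supplied constraint, mirroring the
# 85% -> 81% exchange above. Actions and confidences are invented.
ranked = [("do x", 0.85), ("do y", 0.81), ("do z", 0.62)]

def best_action(recommendations, ruled_out=()):
    """Return the highest-confidence action the user hasn't excluded."""
    for action, conf in recommendations:  # list is already sorted by confidence
        if action not in ruled_out:
            return action, conf
    return None, 0.0

print(best_action(ranked))                      # the system's top pick
print(best_action(ranked, ruled_out={"do x"}))  # after the user objects to x
```

The point of the pattern is the dialogue: the system keeps its full ranked list, so a rejected option costs one lookup rather than a re-computation.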
So given that you're not in that kind of slow-moving or walking or observing half, what are the biggest challenges that you have to overcome, even given the fact that you've got the highest-level, most senior edict both behind you as well as your internal customers? >> Yeah, well, guess what, it comes down to data. Often, a lot of times, it comes down to data. We can put together an example of a solution that is a minimally viable solution which might have only three or four or five different pieces of data, and that's pretty neat, and we can deliver a good result. But if we want to scale it and really move the needle so that it's something that Ginni Rometty sees and cares about, or a shareholder, then we have to scale. Then we need a lot of data, so then we come back to Inderpal and the chief data officer role. So the constraint on many of the programs and projects is, if you want to get beyond the initial proof of concept, >> Jeff: Right. >> You need to access and be able to manipulate the big data, and then you need to train these cognitive systems. This is the other area that's taking a lot of time. And I think we're going to have some technology and innovation here, but you have to train a cognitive system. You don't program it. You do some painstaking back and forth. You take a room full of your best experts in whatever the process is and they interact with the system. They provide input, yes, no. They rank the efficacy of the recommendations coming out of the system and the system improves. But it takes months. >> That's even the starting point. >> Joe: That's a problem. >> And then you train it over, often, an extended period of time. >> Joe: A lot of it gets better over time. >> Exactly. >> As long as you use this thing, a corpus of information is built and then you can mine the corpus. >> But a lot of people seem to believe that you roll all this data, you run a bunch of algorithms and suddenly, boom, you've got this new way of doing things.
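The expert-in-the-loop training Joe describes, a room full of subject-matter experts answering yes/no and ranking the efficacy of recommendations until the system improves, amounts to a feedback loop. Here is a toy version (a simple score-nudging update invented for illustration; Watson's actual training procedure is considerably more involved):

```python
# Toy expert-feedback loop: each yes/no vote nudges a recommendation's
# score toward 1.0 or 0.0. Illustrative only, not Watson's training method.
scores = {"rec_a": 0.5, "rec_b": 0.5}

def apply_feedback(scores, rec, approved, rate=0.1):
    """Move the score a fraction of the way toward the expert's verdict."""
    target = 1.0 if approved else 0.0
    scores[rec] += rate * (target - scores[rec])
    return scores[rec]

# A panel of experts votes on rec_a over several review sessions.
for vote in [True, True, False, True]:
    apply_feedback(scores, "rec_a", vote)
print(round(scores["rec_a"], 3))
```

This is why it "takes months": the score only converges after many rounds of expert votes, and each round needs scarce expert time.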
And it is a very, very deep set of relationships between people who are being given recommendations, as you said, weighing them, voting on them, et cetera. This is a highly interactive process. >> Yeah, it is. If you're expecting lightning-fast results, you're really talking about a more deterministic kind of solution. You know, if/then. If this is, then that's the answer. But we're talking about systems that understand and they reason, and they tap you on the shoulder with a recommendation and tell you that there's an 85% chance that this is what you should do. And you can talk back to the system, like my story a minute ago, and you can say, well, it makes sense, but, or, great, thanks very much Watson, and then go ahead and do it. Those systems that are expert systems that have expertise just woven through them, you cannot just turn those on. But, as I was saying, one of the things we talked about on some of the panels today was there's new techniques around training. There's new techniques around working with these corpuses of information. Actually, I'm not sure what the plural of corpus is. Corpi? It's not Corpi. >> Jeff: I can look that up. >> Yeah, somebody look that up. >> It's not corpi. >> So anyway, I want to give you the last word, Joe. So you've been doing this for a while, what advice would you give to someone kind of in your role at another company who's trying to be the catalyst to get these things moving? What kind of tips and tricks would you share, you know, having gone through it and working on this for a while? >> Sure. I would, the first thing I would do is, in your first move, keep the projects tightly defined and small with a minimum of input, and contain your risk and your risk of failure, and make sure that if you do three projects, at least one of them is going to be a hands-down winner. And then once you have a winner, tout it through your organization.
A lot of folks get so enamored with the technology that they start talking more about the technology than the business impact. And what you should be touting and bragging about is not the fact that I was able to simultaneously read 5,000 procurement contracts with this tool; you should be saying, it used to take us three weeks in a conference room with a team of one dozen lawyers, and now we can do that whole thing in one week with six lawyers. That's what you should talk about, not the technology piece of it. >> Great, great. Well, thank you very much for sharing, and I'm glad to hear the conference is going so well. Thank you. >> And it's Corpa. >> Corpa? >> The answer to the question? Corpa. >> Peter: Not corpuses. >> With Joe, Peter, and Jeff, you're watching theCUBE. We'll be right back from the IBM Chief Data Officer Strategy Summit. Thanks for watching.
SUMMARY :
Brought to you by IBM. He is the global operations analytic solution lead for IBM. It's great to be here. of the event and any surprises or kind of validations the audience members all know that they're at the cusp because IBM has accepted the charter of basically I'm so glad you said that cause most people and demonstrate the values that you're trying to Part of that is the fact that Ginni Rometty, but also the chief analytics officer. that prove out the value of analytics. of helping the business think about use cases, Once Inderpal and the Chief data officer But we have to get there every couple of weeks So in many respects, analytics becomes the capability Yes, that's true. and we bake the models into a business process to make Because, you know, we hear all the time about I'm getting a little more perspective on the fact that we and the big potential is still in front of us. How the two mesh and do they mesh within some of the So, do you want to go the 80%-- So, just, you know, think about that. of cognitive, more specific application? And by the way, you can read 200 a week and you can create But these are kind of capabilities all around Watson. given all these factors is to do x. Then the system will tell me, in that case, you should these are capabilities that we call Watson. You know, regulations around the globe So these are the kind of tools we want. challenges that you have to overcome even given the fact and the chief data officer role. and the system improves. And then you trade it over often, and then you can mine the corpus. But a lot of people seem to believe that you that there's an 85% chance that this is what you should do. What kind of tips and tricks would you share, you know, and make sure that if you do three projects, the conference is going so well. The answer to the question? We'll be right back from the IBM chief data
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Joe | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Ginni Rometty | PERSON | 0.99+ |
Joe Selle | PERSON | 0.99+ |
GBS | ORGANIZATION | 0.99+ |
October | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
Jim Kavanaugh | PERSON | 0.99+ |
20% | QUANTITY | 0.99+ |
one week | QUANTITY | 0.99+ |
Peter | PERSON | 0.99+ |
three weeks | QUANTITY | 0.99+ |
Paul | PERSON | 0.99+ |
10% | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
85% | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
six lawyers | QUANTITY | 0.99+ |
six | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Germany | LOCATION | 0.99+ |
81% | QUANTITY | 0.99+ |
four | QUANTITY | 0.99+ |
Global Business Services | ORGANIZATION | 0.99+ |
12 week | QUANTITY | 0.99+ |
40% | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
two forms | QUANTITY | 0.99+ |
seven years | QUANTITY | 0.99+ |
three projects | QUANTITY | 0.99+ |
30% | QUANTITY | 0.99+ |
Ginni | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
one dozen lawyers | QUANTITY | 0.99+ |
one case | QUANTITY | 0.99+ |
85 | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
three | QUANTITY | 0.98+ |
two things | QUANTITY | 0.98+ |
a year | QUANTITY | 0.98+ |
5,000 procurement contracts | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
first project | QUANTITY | 0.98+ |
ORGANIZATION | 0.98+ | |
one | QUANTITY | 0.98+ |
Watson | PERSON | 0.98+ |
Corpa | ORGANIZATION | 0.98+ |
Fisherman's Wharf | LOCATION | 0.98+ |
200 a week | QUANTITY | 0.97+ |
three initiatives | QUANTITY | 0.97+ |
Watson | TITLE | 0.96+ |
five different pieces | QUANTITY | 0.96+ |
first summary | QUANTITY | 0.95+ |
Wikibon | ORGANIZATION | 0.93+ |