

Joe Selle & Tom Ward, IBM | IBM CDO Fall Summit 2018


 

>> Live from Boston, it's theCUBE! Covering IBM Chief Data Officer Summit, brought to you by IBM.
>> Welcome back everyone to the IBM CDO Summit and theCUBE's live coverage. I'm your host Rebecca Knight, along with my co-host Paul Gillin. We have Joe Selle joining us. He is the Cognitive Solution Lead at IBM. And Thomas Ward, Supply Chain Cloud Strategist at IBM. Thank you so much for coming on the show!
>> Thank you!
>> Our pleasure.
>> Pleasure to be here.
>> So, Tom, I want to start with you. You are the author of Risk Insights. Tell our viewers a little bit about Risk Insights.
>> So Risk Insights is an AI application. We've been working on it for a couple of years. What's really neat about it, it's the coolest project I've ever worked on. It gets a massive amount of data from The Weather Company, so we're one of the biggest consumers of data from The Weather Company. We take that and we visualize who's at risk from things like hurricanes and earthquakes, whether that's IBM sites and locations or suppliers. And we basically notify them in advance when those events are going to impact them, and it ties to both our data center operations activity as well as our supply chain operations.
>> So you reduce your risk, your supply chain risk, by being able to proactively detect potential outages.
>> Yeah, exactly. So we know in some cases two or three days in advance who's in harm's way, and we're already looking at it and trying to mitigate those risks if we need to, if it's going to be a really serious event. So Hurricane Michael, Hurricane Florence, we were right on top of it and said we've got to worry about these suppliers, these data center locations, and we're already working on that in advance.
>> That's very cool. So, I mean, how are clients and customers responding to, as you said, the coolest project you've ever worked on?
>> Yeah. So right now, we use it within IBM, right? And we use it to monitor some of IBM's client locations. And in the future we're actually, there was something called the Call for Code that happened recently within IBM, and this project was a semifinalist for that. So we're now working with some non-profit groups to see how they could also avail of it, looking at things like hospitals and airports and those types of things as well.
>> What other AI projects are you running?
>> Go ahead.
>> I can answer that one. I just wanted to say one thing about Risk Insights, which didn't come out in Tom's description, which is that one of the other really neat things about it is that it provides alerts, smart alerts, out to supply chain planners. An alert will go to a supply chain planner if there's an intersection of a supplier of IBM and the path of a hurricane. If the hurricane is vectored to go over that supplier, the supply chain planner that is responsible for those parts will get some forewarning to either start to look for another supplier or make some contingency plans. And the other nice thing about it is that it launches what we call a Resolution Room. The Resolution Room is a virtual meeting place where people all over the globe who are somehow impacted by this event can collaborate, share documents, and have a persistent place to resolve the issue. And then, after that's all done, we capture all the data from that issue and the resolution, we put that into a body of knowledge, and we mine that knowledge for a playbook the next time a similar event comes along. So it's a full--
>> It becomes machine learning.
>> It's a machine learning--
>> Sort of data source.
>> It's a full soup-to-nuts solution that gets smarter over time.
>> So you should be able to measure benefits, you should have measurable benefits by now, right? What are you seeing, fewer disruptions?
>> Yes. So in Risk Insights, we know that out of a thousand events that occurred, there were 25 in the last year that were really the ones we needed to identify and mitigate against. And out of those, we know there have been circumstances where, in the past, IBM's had millions of dollars of losses. By being more proactive, we're really minimizing that amount.
>> That's incredible. So you were going to talk about other kinds of AI that you run.
>> Right. So Tom gave an overview of Risk Insights, and we tied it to supply chain and to monitoring the uptime of our customer data centers and things like that. But our portfolio of AI is quite broad. It really covers most of the middle, back, and front office functions of IBM. So we have things in the sales domain, the finance domain, the HR domain, you name it. One of the ones that's particularly interesting to me of late is in the finance domain, monitoring accounts receivable and DSO, days sales outstanding. For a company like IBM, with multiple billions of dollars of revenue, a change of even one day of days sales outstanding provides a gigantic benefit to the bottom line. So we have been integrating disparate databases across the business units and geographies of IBM, pulling that customer and accounts receivable data into one place, where our CFO can look at an integrated approach towards our accounts receivable and we know where the problems are. And we're going to use AI and other advanced analytic techniques to determine the best treatment for that AR, for those customers who, according to our predictive models, are at risk of not making their payments on time or carry some sort of financial risk. So we can integrate a lot of external unstructured data with our own structured data around customers, around accounts, and pull together a story around AR that we've never been able to pull together before. That's very impactful.
>> So speaking of unstructured data, I understand that data lakes are part of your AI platform. How so?
>> For example, for Risk Insights, we're monitoring hundreds of trusted news sources at any given time. So we know not just where the event is and what locations are at risk, but also what's being reported about it. We monitor Twitter reports about it, we monitor trusted news sources like CNN or MSNBC, on a global basis, so it gives our risk analysts not just a view of where the event is, where it's located, but also what's being said, how severe it is, how big are those tidal waves, how big was the storm surge, how many people were affected. By applying some of the machine learning insights to these, now we can say, well, if there are a couple hundred thousand people without power, then it's very likely there is going to be multimillions of dollars of impact as a result. So we're now able to correlate those news reports with the magnitude of impact and potential financial impact to the businesses that we're supporting.
>> So the idea being that IBM is saying, look what we've done for our own business (laughs), imagine what we could do for you.
As Inderpal has said, it's really using IBM as its own test case and trying to figure this all out and learning as it goes. And he said, we're going to make some mistakes, we've already made some mistakes, but we're figuring it out so you don't have to make those mistakes.
>> Yeah, that's right. I mean, if you think about the long history of this, we've been investing in AI, really, depending on how you look at it, since the days of the '90s, when we were doing Deep Blue and we were trying to beat Garry Kasparov at chess. Then we did another big, huge push on the Jeopardy program, where we innovated around natural language understanding, speed and scale of processing, and the probability of correctness of answers. And then we kind of carried that right through to the current day, where we're now proliferating AI across all of the functions of IBM. And then, connecting to your comment, Inderpal's comment this morning was around, let's just use all of that for the benefit of other companies. It's not always an exact fit, it's never an exact fit, but there are a lot of pieces that can be replicated and borrowed, either people, process, or technology, from our experience, that would help to accelerate other companies down the same path.
>> One of the questions around AI, though, is, can you trust it? The insights that it derives, are they trustworthy?
>> I'll give a quick answer to that, and then Tom, it's probably something you want to chime in on. There's a lot of danger in AI, and it needs to be monitored closely. There's bias that can creep into the datasets, because the datasets are being enhanced with cognitive techniques. There's bias that can creep into the algorithms, and any kind of learning model can start to spin on its own axis and go in its own direction, and if you're not watching and monitoring and auditing, then it could start to deliver you crazy answers. Then the other part is, you need to build the trust of the users, because who wants to take an answer that's coming out of a black box? We've launched several AI projects where the answer just comes out naked, if you will, just sitting right there with no context around it, and the users never like that. So we've understood now that you have to put in the context, the underlying calculations, and the assessment of our own probability of being correct. So those are some of the things you can do to get over that. But Tom, do you have anything to add to that?
>> I'll just give an example. When we were early in analyzing Twitter tweets about a major storm, what we read about was, oh, some celebrity's dog was in danger. (Rebecca laughs) This isn't very helpful insight.
>> I'm going to guess, I probably know the celebrity's dog that was in danger. (laughs)
>> (laughs) Actually, stop saying that. So we learned how to filter those things out and say, what are the meaningful keywords that we need to extract and can really then draw conclusions from.
>> So is Kardashian a meaningful word, (all laughing) I guess that's the question.
>> Trending! (all laughing)
>> Trending now!
>> I want to follow up on that because, as an AI developer, what responsibility do developers have to show their work, to document how their models have worked?
>> Yes, so all of the information that we provide to the users draws back to, here's the original source, here's where the information was taken from, so we can draw back on that.
And that's an important part of having a cognitive enterprise data platform where all this information is stored, because then we can refer to that and go deeper as well, and we can analyze it further after the fact, right? You can't always respond in the moment, but once you have those records, that's how you can learn from it for the next time around.
>> I understand that in some cases, particularly in deep learning, it's very difficult to build reliable test models. Is that true, and what progress is being made there?
>> In our case, we're in the machine learning dimension; we're not all the way into deep learning yet in the project that I'm involved with right now. But one reason we're not there is because you need to have huge, huge, vast amounts of robust data and that trusted dataset from which to work. So we aspire towards and we're heading towards deep learning. We're not quite there yet, but we've started with machine learning insights and we'll progress from there.
>> And one of the interesting things about this AI movement overall is that it's filled with very energetic people, and there's kind of a hacker mindset to the whole thing. So people are grabbing and running with code, they're using a lot of open source, there's a lot of integration of black boxes from here, from there, and the other place, which all adds to the risk of the output. So that comes back to the original point, which is that you have to monitor, you have to make sure that you're comfortable with it. You can't just let it run on its own course without really testing it to see whether you agree with the output.
>> So what other best practices are there? There's the monitoring, but at the same time, that hacker culture, that's not all bad. You want people who are energized by it and are trying new things and experimenting. So how do you make sure you let them have sort of enough rein, but not free rein?
>> I would say, what comes to mind is, start with a business problem that's a real problem. Don't make this an experimental data thing. Start with the business problem. Develop a POC, a proof of concept. Small, and here's where the hackers come in. They're going to help you get it up and running in six weeks as opposed to six months. And then once you're at the end of that six-week period, maybe you design one more six-week iteration, and then you know enough to start scaling it, and you scale it big. So you've harnessed the hackers, the energy, the speed, but you're also testing, making sure that it's accurate, and then you're scaling it.
>> Excellent. Well, thank you, Tom and Joe, I really appreciate it. It's great to have you on the show.
>> Thank you!
>> Thank you, Rebecca, for the spot.
>> I'm Rebecca Knight for Paul Gillin, we will have more from the IBM CDO Summit just after this. (light music)
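The core alerting logic Joe Selle describes, flagging a supplier when the forecast path of a storm passes over or near its location and notifying the responsible planner, can be illustrated with a short sketch. This is a hypothetical example, not IBM's implementation: the site records, the 300 km proximity threshold, and all function names are assumptions made for illustration.

```python
# Illustrative sketch of the supplier-vs-storm-path alerting idea described in the interview.
# Not IBM's implementation: data structures, the 300 km threshold, and names are assumed.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt
from typing import List, Tuple

@dataclass
class Site:
    name: str
    lat: float
    lon: float
    planner_email: str  # who gets the forewarning

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def sites_at_risk(track: List[Tuple[float, float]], sites: List[Site],
                  radius_km: float = 300.0) -> List[Tuple[Site, float]]:
    """Return (site, closest_distance_km) for each site the forecast track passes near."""
    at_risk = []
    for site in sites:
        closest = min(haversine_km(site.lat, site.lon, lat, lon) for lat, lon in track)
        if closest <= radius_km:
            at_risk.append((site, closest))
    return at_risk

if __name__ == "__main__":
    # Hypothetical forecast track points (lat, lon) and supplier/data center sites.
    forecast_track = [(27.0, -83.0), (29.5, -85.5), (31.0, -84.5)]
    sites = [Site("Supplier A", 30.4, -84.3, "planner-a@example.com"),
             Site("Data center B", 40.7, -74.0, "planner-b@example.com")]
    for site, dist in sites_at_risk(forecast_track, sites):
        # In the system described above this would alert the planner and open a
        # Resolution Room; the print statement stands in for that step.
        print(f"ALERT: {site.name} is ~{dist:.0f} km from the forecast track -> notify {site.planner_email}")
```

In the system described in the interview, the forecast track would come from The Weather Company data feeds rather than being hard-coded, and an alert would also trigger the collaboration and knowledge-capture workflow the speakers mention.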

Published Date: Nov 15, 2018
