
John Thomas, IBM | IBM CDO Fall Summit 2018


 

>> Live from Boston, it's theCUBE, covering IBM Chief Data Officer Summit, brought to you by IBM.

>> Welcome back everyone to theCUBE's live coverage of the IBM CDO Summit here in Boston, Massachusetts. I'm your host, Rebecca Knight, and I'm joined by my cohost, Paul Gillan. We have a guest today, John Thomas. He is a Distinguished Engineer and Director at IBM. Thank you so much for returning to theCUBE. You're a CUBE veteran, a CUBE alum.

>> Oh, thank you Rebecca, thank you for having me on.

>> So tell our viewers a little bit about being a Distinguished Engineer. There are only 672 in all of IBM. What do you do? What is your role?

>> Well, that's a good question. Distinguished Engineer is kind of a technical executive role, which is a combination of applying technology skills and helping shape IBM strategy in a technical way, working with clients, et cetera. So it is a bit of a jack of all trades, but also deep skills in some specific areas, and I love what I do. (laughs lightly) I get to work with some very talented, brilliant people, in terms of shaping IBM technology and strategy. Product strategy is part of it. We also work very closely with clients, in terms of how to apply that technology in the context of the client's use cases.

>> We've heard a lot today about soft skills, the importance of organizational and people skills to being a successful Chief Data Officer. But there's still a technical component. How important is the technical side? What are the technical skills that CDOs need?

>> Well, this is a very good question, Paul. Absolutely, navigating the organizational structure is important. It's a soft skill. You are absolutely right. And being able to understand the business strategy for the company, and then aligning your data strategy to the business strategy, is important, right? But the underlying technical pieces need to be solid. For example, how do you deal with large volumes of different types of data spread across a company? How do you manage that data? How do you understand the data? How do you govern that data? How do you then leverage the value of that data in the context of your business? So a deep understanding of the technology of collecting, organizing, and analyzing that data is needed for you to be a successful CDO.

>> In terms of those skillsets you're looking for, one of the things Inderpal said earlier in his keynote is that it's a rare individual who truly understands how to collect, store, analyze, curate, and monetize the data, and who also has the soft skills to navigate the organization and be a change agent who inspires the rank and file. How do you recruit and retain talent? This seems to be a major challenge.

>> It is about getting the right expertise in place, and Inderpal talked about it in his keynote: the very first thing he did was bring in talent. Sometimes it comes from outside of your company. Maybe you have the kind of talent that has grown up in your company; maybe you have to go outside. But you've got to bring the right skills together. Form the team that understands the technology and the business side of things, and build that team; that is essential for you to be a successful CDO. And to some extent, that's what Inderpal has done. That's what the analytics CDO's office has done.
Seth Dobrin, my boss, is the analytics CDO, and he and the analytics CDO team hired people with different skills: data engineering skills, data science skills, visualization skills. They put together a team which understands how to collect, govern, curate, and analyze the data, and then apply it in specific situations.

>> There's been a lot of talk about AI at this conference, which seems to be finally happening. What do you see in the field, or perhaps in projects that you've worked on, as examples of AI that are really having a meaningful business impact?

>> Yeah, Paul, that is a very good question, because the term AI is overused a lot, as you can imagine; there is a lot of hype around it. But I think we are past that hype cycle, and people are looking at how to implement successful use cases. And I stress the word use case, right? In my experience, transforming my business in one big boil-the-ocean exercise does not work. But if you have a very specific, bounded use case that you can identify, the business tells you it is relevant, and the business tells you what the metrics for success are, and then you focus your efforts on that specific use case with the skills needed for it, then it's successful. Examples of use cases run across the industries, everything you can think of. Customer-facing examples, like: how do I read the customer's mind? If I'm a business and I interact with my customers, can I anticipate what the customer is looking for, maybe for a cross-sell opportunity, or to reduce the call handling time when a customer calls into my call center? Or trying to segment my customers so I can run a proper promotion or campaign for that customer. All of these are specific customer-facing examples. There are also examples of applying this internally to improve processes: capacity planning for your infrastructure. Can I predict when a system is likely to have an outage? Can I predict the traffic coming into my infrastructure and provision capacity for it on demand? So all of these are interesting applications of AI in the enterprise.

>> One of the things we keep hearing is that we need the data to tell a story. The data needs to be compelling enough that the data scientists get it, but also that the other kinds of business decision makers get it too.

>> Yep.

>> So what are the best practices that have emerged from your experience, in terms of getting your data to tell the story you want it to tell?

>> Yeah, well, if the pattern doesn't exist in the data, then no amount of fancy algorithms can help, you know? Sometimes it's like searching for a needle in a haystack. But the first step is, like I said: what is the use case? Once you have a clear understanding of your use case and the success metrics for it, do you have the data to support that use case? For example, if it's fraud detection, do you actually have the historical data to support the fraud use case? Sometimes you may have transactional data from your core enterprise systems, but that may not be enough. You may need to augment it with external data, third-party data, maybe unstructured data that goes along with your transaction data.
So the question is, can you identify the data that is needed to support the use case? And if so, is that data clean? Do you understand the lineage of the data, who has touched and modified the data, who owns the data? Then I can start building predictive models, machine learning and deep learning models, with that data. So: use case; do you have the data to support the use case; do you understand how that data reached you? Then comes the process of applying machine learning and deep learning algorithms against that data.

>> What are the risks of machine learning, and particularly deep learning? I think because it becomes kind of a black box, people can fall into the trap of just believing what comes back, regardless of whether the algorithms are really sound or the data is. What is the responsibility of data scientists to, sort of, show their work?

>> Yeah, Paul, this is a fascinating area, and not one that is completely solved, right? Bias detection; can I explain how my model behaved; can I ensure that the models are fair in their predictions? There is a lot of research, a lot of innovation happening in this space, and IBM is investing a lot in it. We call it trust and transparency: being able to explain a model. It's got multiple levels to it. You need some level of AI governance itself. Just as we talked about data governance, there is the notion of AI governance: which version of the model was used to make a prediction? What were the inputs that went into that model? What were the features that were used to make a certain prediction? What was the prediction? And how did that match up with ground truth? You need to be able to capture all that information. But beyond that, we have actual mechanisms that IBM Research is developing for bias detection. Pre-processing, during execution, and post-processing: can I look for bias in how my models behave, and do I have mechanisms to mitigate it? So one example is the open source Python library called AIF360, which comes from IBM Research and has been contributed to the open source community. It gives you mechanisms to look at bias, and to provide some level of bias mitigation, as part of your model building exercises. [A minimal sketch of this workflow appears after this exchange.]

>> And does the bias mitigation have to do with, and I'm going to use an IBM term of art here, the human in the loop? How much are you actually looking at the humans that are part of this process?

>> Yeah, at least at this point in time, humans are very much in the loop. This notion of pure AI, where humans are completely outside the loop: we're not there yet. Very much, the system can provide a set of recommendations and a set of explanations, and someone who understands the business can look at them and take corrective actions.

>> There has been, however, to Rebecca's point, some prominent people, including Bill Gates, who have speculated that AI could ultimately be a negative for humans. What is the responsibility of companies like IBM to ensure that humans are kept in the loop?

>> I think at least at this point, IBM's view is that humans are an essential part of AI. In fact, we don't even use the term artificial intelligence that much; we call it augmented intelligence, where the system is presenting a set of recommendations, expert advice, to the human, who can then make a decision.
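To make the AIF360 workflow described above concrete, here is a minimal sketch of a pre-processing bias check and one mitigation step in Python. The toy loan-approval data and column names are illustrative assumptions, not anything from the interview; BinaryLabelDataset, BinaryLabelDatasetMetric, and Reweighing are part of AIF360's public API.

```python
# Minimal sketch: detect and mitigate dataset bias with AIF360.
# Assumption: a toy loan dataset where 'sex' is the protected attribute
# (1 = privileged group) and label 1 is the favorable outcome (approved).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":    [1, 1, 1, 1, 0, 0, 0, 0],
    "income": [60, 80, 45, 90, 55, 70, 40, 85],
    "label":  [1, 1, 0, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Pre-processing bias check: compare favorable-outcome rates across groups.
metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=unprivileged,
    privileged_groups=privileged,
)
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())

# One mitigation option: reweigh training instances so the groups balance.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)
print("First few instance weights:", transformed.instance_weights[:4])
```

The instance weights produced by Reweighing can be passed as sample weights when training a downstream classifier; AIF360 also ships in-processing and post-processing algorithms, matching the three stages John mentions.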
For example, my team worked with a prominent health care provider on models for predicting patient death in the case of sepsis onset. We are talking literally life and death decisions being made, and this is not something you can just automate, throw into a magic black box, and have a decision be made. So this is absolutely a place where people with deep domain knowledge are supported, are augmented, with AI to make better decisions. That's where I think we are today. As to what will happen five years from now, I can't predict that yet.

>> Well, I actually want to bring this up to both of you. So you are helping doctors make these decisions; not just "this is what the computer program says about this patient's symptoms," but really helping the doctor make better decisions. What about the doctor's gut, his or her intuition, too? What is the role of that in the future?

>> I think it goes away. I mean, I think the intuition really will be trumped by data in the long term, because you can't argue with the facts. Some people do these days. (soft laughter) But I don't remember... (everyone laughing)

>> We had to take a break there for some laughter.

>> I'm interested in your perspective on that. Will there, should there, always be a human on the front line who is being supported by the back end? Or would you see a scenario where an AI is making decisions, customer-facing decisions, that really are life and death decisions?

>> So I think in the consumer space, I can definitely see AI making decisions on its own. If, let's say, a recommender system says, "John Thomas bought these last five things online; he's likely to buy this other thing, let's make an offer to him," you know, I don't need another human in the loop for that.

>> No harm, right?

>> Right.

>> It's pretty straightforward, and it's already happening in a big way. But when it comes to some of these...

>> Pre-approving a mortgage, how about that one?

>> Yeah.

>> Where bias creeps in a lot.

>> But that's one big decision.

>> Even that, I think, can be automated, if the threshold is set to what the business is comfortable with: where it says, okay, above this probability level I don't really need a human to look at this, but if it is below this level, I do want someone to look at it. That is relatively straightforward, right? [A sketch of this routing logic appears after the transcript.] But if it is a decision about a life or death situation, or something that affects the very fabric of the business you are in, then you probably want a domain expert to look at it. In most enterprises, cases will lean toward that category.

>> These are big questions. These are hard questions.

>> These are hard questions, yes.

>> Well, John, thank you so much for being on theCUBE...

>> Oh, absolutely, thank you.

>> We really had a great time with you.

>> No, thank you for having me.

>> I'm Rebecca Knight, for Paul Gillan. We will have more from theCUBE's live coverage of the IBM CDO Summit, here in Boston, just after this. (Upbeat Music)
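To make the thresholding John describes in the mortgage exchange concrete, here is a minimal sketch of that routing logic. The cutoff value, function, and type names are hypothetical, not from any IBM product: confident scores are automated, everything else goes to a domain expert.

```python
# Minimal sketch: confidence-threshold routing for automated decisions.
# Assumption: AUTO_APPROVE_THRESHOLD is a hypothetical cutoff the business
# has validated and is comfortable automating above.
from dataclasses import dataclass

AUTO_APPROVE_THRESHOLD = 0.90

@dataclass
class RoutedDecision:
    action: str   # "auto_approve" or "human_review"
    score: float  # the model's predicted approval probability

def route_application(approval_probability: float) -> RoutedDecision:
    """Automate confident decisions; escalate ambiguous ones to a human."""
    if approval_probability >= AUTO_APPROVE_THRESHOLD:
        return RoutedDecision("auto_approve", approval_probability)
    return RoutedDecision("human_review", approval_probability)

# A confident score is automated; an ambiguous one goes to a domain expert.
print(route_application(0.97))  # RoutedDecision(action='auto_approve', ...)
print(route_application(0.62))  # RoutedDecision(action='human_review', ...)
```

In practice the cutoff would be validated against historical outcomes and, per the bias discussion above, checked so that automation does not systematically disadvantage any group.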

Published Date: Nov 15, 2018
