Ilana Golbin, PwC | MIT CDOIQ 2018

>> Live from the MIT campus in Cambridge, Massachusetts, it's The Cube, covering the 12th annual MIT Chief Data Officer and Information Quality Symposium. Brought to you by Silicon Angle Media.

>> Welcome back to The Cube's coverage of MIT CDOIQ, here in Cambridge, Massachusetts. I'm your host, Rebecca Knight, along with my cohost, Peter Burris. We're joined by Ilana Golbin. She is a manager in the artificial intelligence accelerator at PwC...

>> Hi.

>> Based out of Los Angeles. Thanks so much for coming on the show!

>> Thank you for having me.

>> So I know you were on the main stage, giving a presentation, really talking about fears, unfounded or not, about how artificial intelligence will change the way companies do business. Lay out the problem for us. Tell our viewers a little bit about how you see the landscape right now.

>> Yeah, so I think we've really all experienced this: we're generating more data than we ever have in the past. So there's all this data coming in. A few years ago that was the hot topic: big data. Big data was coming, and how were we going to harness it? And big data, coupled with this increase in computing power, has really enabled us to build stronger models that can provide more predictive power for a variety of use cases. So this is a good thing. The problem is that we're seeing these really cool models come out that are black box: very difficult to understand how they're making decisions. And it's not just for us as end users, but also for developers. We don't really know 100% why some models are making the decisions that they are. That can be a problem for auditing. It can be a problem for regulation, if that comes into play. And it's a problem for us, as end users, to trust the model. It comes down to the use case, to why we're building these models. But ultimately we want to ensure that we're building models responsibly, so that the models are in line with our mission as a business and don't do any unintended harm. And so, because of that, we need some additional layers to protect ourselves. We need to build explainability into models and really understand what they're doing.

>> You said two really interesting things. Let's take one and then the other.

>> Of course.

>> We need to better understand how we build models, and we need to do a better job of articulating what those models are. Let's start with the building of models. What does it mean to do a better job of building models? Where are we in the adoption of better?

>> So I think right now we're at the point where we just have a lot of data, we're very excited about it, and we want to throw it into whatever models we can and see what gets the best performance. But we need to take a step back and look at the data that we're using. Is the data biased? Does the data match what we see in the real world? Do we have a variety of opinions in both the data-collection process and the model-design process? Diversity is not just important for opinions in a room; it's also important for models. So we need to take a step back and make sure that we have that covered. Once we're sure that we have data that's sufficient for our use case, and the bias isn't there, or is only there to the extent that we want it to be, then we can go forward and build these better models. So I think we're at the point where we're really excited and we're seeing what we can do, but businesses are starting to take a step back and see how they can do that better.
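As a rough illustration of the kind of pre-modeling data check described above, here is a minimal sketch in Python, assuming a pandas DataFrame. The dataset, the `approved` and `gender` column names, and the disparity threshold are all hypothetical, not taken from PwC's tooling:

```python
import pandas as pd

def outcome_rates_by_group(df: pd.DataFrame, outcome: str, group_col: str) -> pd.Series:
    """Positive-outcome rate per group, sorted ascending.

    A large gap between groups is a signal that the training data may
    encode a bias the model would simply reproduce.
    """
    return df.groupby(group_col)[outcome].mean().sort_values()

# Hypothetical usage with made-up column names:
# rates = outcome_rates_by_group(loans, outcome="approved", group_col="gender")
# print(rates)
# if rates.max() - rates.min() > 0.10:   # illustrative threshold, not a standard
#     print("Review the data-collection process before modeling.")
```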
>> Now, the one-B, the tooling: where is the tooling?

>> The tooling... if you follow any of the literature, you'll see new publications come out sometimes every minute on different applications for these really advanced models. Some of the hottest models on the market today are deep learning models and reinforcement learning models. They may not have an application for some businesses yet, but businesses are definitely building those types of applications, so the techniques themselves are continuing to advance, and I expect them to keep doing so, mostly because the data is there, the processing power is there, and there's so much investment coming in from various institutions and governments into these types of models.

>> And the way these things typically work is that the techniques, and the knowledge of the techniques, advance, and then we turn them into tools. So the tools are still lagging a little bit behind the techniques, but it's catching up. Would you agree?

>> I would agree with that. Commercial tools can't keep up with the pace of the academic environment, and we wouldn't really expect them to; once you've invested in a tool, you want to try to improve that tool rather than rework it around the best technique that came out yesterday. So there is an iteration that will continue to happen to make sure that our commercially available tools match what we see in the academic space.

>> So a second question is: now we've got the model, how do we declare the model? What is the state of the art in articulating metadata, what the model does, what its issues are? How are we doing a better job, and what can we do better, to characterize these models so they can be more applicable while maintaining the fidelity that was originally intended and embedded?

>> I think the first step is identifying your use case. The extent to which we want to explain a model really depends on the use case. For instance, if you have a model that is going to be navigating a self-driving car, you probably want a lot more rigor around how that model is developed than with a model that targets mailers. There's a lot of middle ground there, and most business applications fall into that middle ground, but there are still business risks that need to be considered. So the extent to which we can clearly articulate and define the use case for an AI application will help inform what level of explainability or interpretability we need out of our tool.

>> So how do we successfully define use cases? Do you have templates that you're using at PwC, or other approaches, to ensure that you get the rigor in the definition or the characterization of the model, so that it can be applied both to a lower-stakes question, you know, who are you mailing, and to a life-and-death situation, like whether the car is behaving the way it's expected to?

>> And yet with mailing we have the example, the very famous Target example, that outed a teenage girl who was pregnant before her family knew. So these can have real-life implications.

>> And they can, but that's a very rare instance, right? And you could also argue that it's not the same as missing a stop sign and potentially injuring someone in a car. So there are always going to be extremes, but usually when we think about use cases we think about criticality, which is the extent to which someone could be harmed, and vulnerability, which is the willingness of an end user to accept a model and the decision that it makes. A high-vulnerability use case could be...
A year or so ago I was talking to a professor at UCSD, the University of California San Diego, and he had been talking to a medical device company that manufactures devices for monitoring your blood sugar levels. This could be a high-vulnerability case: if you get an incorrect reading, someone's life could be in danger. The device was intended to read blood sugar levels by noninvasive means, just by scanning your skin. The metric it used to calculate blood sugar was correct; it just wasn't the one that end users were expecting. Because that didn't match, end users did not accept the device, even though it operated very well.

>> They abandoned it?

>> They abandoned it. It didn't sell. And what this comes down to is that this is a high-vulnerability case. People want to make sure that their lives, the lives of their kids, whoever's using these devices, are in good hands, and if they feel they can't trust it, they're not going to use it. So the use case, I do believe, is very important, and when we think about use cases, we think of them on those two metrics: vulnerability and criticality.

>> Vulnerability and criticality.

>> And we're always evolving our thinking on this, but this is our current thinking, yeah.
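One way to picture the two axes described above is a simple scoring rubric. The sketch below is purely illustrative: the 1-5 scales, thresholds, and review tiers are assumptions made for the example, not PwC's actual framework:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    criticality: int    # 1-5: extent to which someone could be harmed
    vulnerability: int  # 1-5: how much trust the end user must place in the decision

def review_tier(case: UseCase) -> str:
    """Map a use case to a review tier from its two risk scores.

    Thresholds are illustrative; a real framework would be calibrated
    to an organization's own risk appetite.
    """
    score = case.criticality * case.vulnerability
    if score >= 15:
        return "full model audit with a human in the loop"
    if score >= 6:
        return "documented explainability review"
    return "standard model validation"

print(review_tier(UseCase("direct-mail targeting", criticality=1, vulnerability=2)))
print(review_tier(UseCase("noninvasive glucose monitor", criticality=5, vulnerability=5)))
```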
>> Where are we, in terms of the way corporations are viewing this? From your perspective, do you believe that they have the right amount of trepidation, or are they too trepidatious when it comes to this? What is the mindset, speaking in general terms?

>> I think everybody's still trying to figure it out. What I've been seeing, personally, is businesses taking a step back and saying, "You know, we've been building all these proofs of concept, or deploying these pilots, but we haven't done anything enterprise-wide yet." Generally speaking. So what we're seeing is businesses coming back and saying, "Before we go any further, we need a comprehensive AI strategy. We need something central within our organization that defines how we're going to move forward and build these future tools, so that we're not moving backwards, and so that everything aligns." I think this is really the stage that businesses are in. Once they have a central AI strategy, I think it becomes much easier to evaluate regulatory risks and the like, just because it all reports to a central entity.

>> I want to build on that notion, because generally we agree. We're doing a good job in the technology world of talking about how we're distributing processing power. We're doing a good job of describing how we're distributing data. And we're even doing a good job of describing how we're distributing known process. We're not doing a particularly good job with what we call systems of agency: how we're distributing agency. In other words, the degree to which a model is made responsible for acting on behalf of the brand. Now, in some domains, medical devices for example, there is a very clear relationship between what the device says it's going to do and who is ultimately decided to be culpable. But in the software world, we use copyright law, and copyright law is a speech act. How do we ensure that we're distributing agency appropriately, so that when something is being done on behalf of the brand, there is a lineage of culpability, a lineage of obligations, associated with it? Where are we?

>> I think right now we're still... and I can't speak for most organizations, just from my personal experience, but in the companies or the instances I've seen, we're still really early on in that. Because AI is different from traditional software, but it still needs to be audited. So we're at the stage where we're taking a step back and saying, "We know we need a mechanism to monitor and audit our AI." We need controls around this. We need to accurately provide auditing and assurance around our AI applications. But we recognize it's different from traditional software, for a variety of reasons. AI is adaptive; it's not static like traditional software.

>> It's probabilistic and not categorical.

>> Exactly. So there are a lot of other externalities that need to be considered. And this is something that a lot of businesses are thinking about. One of the reasons having a central AI strategy is really important is that you can also define a central controls framework, some type of centralized assurance and auditing process that's mandated from a high level of the organization and that everybody will follow. That's really the best way to get AI widely adopted, because otherwise I think we'll be seeing a lot of challenges.

>> So I've got one more question: if you look out over the next three years, as someone who is working with customers and with academics, trying to match the need to the expertise, what is the next conversation that's going to pop to the top of the stack in this world, say within the next two years?

>> Yeah, what will we be talking about next year, or five years from now, at the next CDOIQ?

>> I think this topic of explainability will persist, because I don't think we will necessarily tick all the boxes in the next year. I think we'll uncover new challenges, and we'll have to think about new ways to explain how models are operating. Other than that, I think customers will want to see more transparency in the process itself. So not just the model and how it's making its decisions, but what data is feeding into that. How are you using my data to impact how a model is making decisions on my behalf? What is feeding into my credit score, and what can I do to improve it? Those are the types of conversations I think we'll be having in the next two years, for sure.

>> Great. Well, Ilana, thanks so much for coming on The Cube. It was great having you.

>> Thank you for having me.

>> I'm Rebecca Knight, for Peter Burris. We will have more from the MIT Chief Data Officer Symposium 2018 just after this. (upbeat electronic music)
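As a small, hypothetical illustration of the transparency discussed above, the sketch below uses permutation importance on a synthetic stand-in for a credit-scoring dataset to surface which inputs drive a model's decisions. The data, model choice, and feature names are assumptions for the example, not anything described in the interview:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a credit-scoring dataset; real inputs would be
# features like utilization, payment history, and account age.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=3, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each input hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```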

Published Date: Jul 19, 2018

