IBM THINK Thad Promo v1
>> Hi, I'm Thad [surname unintelligible], Vice President of Information Architecture at IBM. Listen, you know, THINK is undoubtedly going to be different this year. It's going to be a first in many ways. First of all, it's free for our clients, and it will be a virtual event. That said, there's some really dynamic content available at a time when people are able to consume that content digitally. We've provided some of the best content on data, AI, and the journey between the two across information architecture. You can learn about collecting, organizing, analyzing, and infusing data and AI into your business practices, the ways that other clients are doing it, and how to become most effective at that. Look, if you haven't done so already, register for THINK. We have some really great keynotes and in-depth agendas. The content is action-packed, and it will be a great two days. So the call to action for me would be: I encourage you to go register and get your agenda signed up.
SUMMARY :
IBM THINK will be different this year: free for clients and fully virtual, with dynamic digital content on data, AI, and the journey between the two across information architecture, covering how to collect, organize, analyze, and infuse data and AI into business practices. Viewers are encouraged to register and build their agendas.
SENTIMENT ANALYSIS :
Dr. Rumman Chowdhury, Accenture | Accenture Technology Vision Launch 2019
>> From the Salesforce Tower in downtown San Francisco, it's theCUBE, covering Accenture Tech Vision 2019. Brought to you by SiliconANGLE Media. (upbeat techno music)
>> Hey, welcome back everybody, Jeff Frick here with theCUBE. We are live in downtown San Francisco, at the Salesforce office in the brand new Accenture Innovation Hub. It's the grand opening, or like I say, the soft opening, but we had the ribbon cutting, we're presenting the Accenture Technology Vision 2019, and we're excited to have somebody who's not a technologist but who's very important to technology. She's Dr. Rumman Chowdhury, the Global Lead for Responsible AI at Accenture.
>> I am.
>> Great to see you.
>> Thank you for having me on your program.
>> Absolutely. So I was doing some background research on you, and I love that you introduce a lot of your talks with the fact that you're not a technologist; you come at this from a very, very different point of view.
>> I do. So I am a social scientist by background. I've been working as a data scientist in artificial intelligence for some years, but I'm not a computer scientist by trade. I come more from a stats background, which gives me a different perspective. So when I think of AI or data science, I literally think of it as information about people, meant to understand trends in human behavior.
>> So there are so many issues around responsible AI. We could talk, probably, about all of these and go on and on, you know.
>> Yeah.
>> We don't have too much... And the first one is really a lot in the news right now: that AI is often simply a codification of existing biases, unless you really take a very proactive stance to make sure you're not just codifying biases in software. What are you seeing?
>> Absolutely. So we really have to think about two kinds of bias. There's one that comes from our data, from our models. This can mean incomplete data, poorly trained models. But the second one to think about is that you can have great data and a perfect model, but we come from an imperfect world. We know that the world is not a fair place; some people just get a poor lot in life. We don't want to codify that into our systems and processes, so as we think about ethics and AI, it's not just about improving the technology, it's about improving the society behind the technology.
>> Right.
>> Yeah.
>> Another big topic I think is really important: if you're doing a project and you want to think through some of the ethical issues, should we be collecting this data, why are we collecting this data, why are we running these algorithms, and you make a decision that it's for a particular purpose and the value outweighs the cost. But I think where the challenge really comes in is the next people that use that data, or the next use that you don't necessarily have in mind. And I think we hear that a lot in the complaints about the current state of big tech, where everyone is doing their little piece.
>> Right.
>> But what happens over time as those get rolled into maybe bigger pieces that weren't necessarily what they were starting with in the first place?
>> Right.
>> Absolutely. It's something I call moral outsourcing. Because with what we build, we often feel like a cog in a machine; sometimes as technologists, people aren't willing to take responsibility for their actions, even though we should be. If we build something that is fundamentally unethical, we need to stop and ask ourselves: just because we can doesn't mean we should.
>> Right.
>> And think about the implications for society. Right now there's often not enough accountability, because everybody feels like they're contributing to this larger machine: who am I to question it, and the system will crush me anyway. So we need to empower people to be able to speak their minds and have an ethical conscience.
>> So I'm curious, in terms of the reception of your message when you're talking to clients, because clearly there's a lot of pressure to innovate fast. Everyone is telling everybody that data's the new oil and we've got to leverage these micro-experiences, et cetera, et cetera, et cetera. And they don't necessarily take a minute to step back and reflect.
>> Right.
>> Is this the right thing, is this the right way? Are we collecting more data than we really need to achieve the objective? So how receptive are companies to your message? Do they get it? Do they have
>> Yeah.
>> to get hit upside the head with some problem before they really understand the value?
>> So I'll give you a phrase that everybody understands, and then they get the point of ethics in AI: brakes help a car go faster. If we have the right kinds of guardrails, warning mechanisms, and systems to tell us if something is going to derail or get out of control, we feel more comfortable taking risks. So think about driving on the freeway. Because you know you can stop your car if the car in front of you stops abruptly, you feel comfortable driving 90 miles an hour. If you could not stop your car, nobody would go faster than 15. So I actually think of ethics in AI, or an ethical implementation of technology, as a way of helping companies be more innovative. It sounds contradictory, but it actually works very well. If I know where my safe space is, I'm more capable of making true innovations.
>> Right. So I want to get your take on another topic, which is really STEM education versus non-STEM, or ethics.
>> Right.
>> And it's interesting, there's a huge push on STEM; it's a very, very important thing that's going on now. But as you look not that far down the road, and this event's all about looking at the future, reinventing the future, as more and more of those engineering functions are taken over by the machines
>> Right.
>> it seems like where the void is, is really in talking about what the implications are, what deeper questions we should be asking, what the ethics and the moral questions are, before just building a better mousetrap.
>> Right. So you're raising a very hot-button issue in the ethics and AI space. Is it simply enough to say all technologists should take an ethics course? I think it is very important to have an interdisciplinary education, but no, I don't think one ethics course, taken out of context in college, will help you. So I think there are a few things to think about. One is that corporations need to have an ethical culture. It needs to be a good thing to be ethical, number one. Number two, we need interdisciplinary teams. Often technologists will say, and rightfully so, "How was I supposed to know thing X would happen?" It's something very specific to a neighborhood or a country or a socio-economic group. And that's absolutely true. So what you should do is bring in a local community, the ACLU, some sort of regional expert. So we do also need to move towards creating interdisciplinary teams.
>> Right. So you brought up another really cool thing, I think in one of your talks: FAITH. Fairness, Accountability, Transparency and Explainability.
>> Yes.
>> Which is, you know... nobody likes black-box algorithms.
>> Yep.
>> But fairness, specifically, is such an interesting concept. We all feel very slighted if we perceive things not to be fair.
>> Yes.
>> The reality is life is not fair; a lot of things are not fair. So as people try to incorporate some of these things into the way they do business, how can they do a better job? What are some of the things they should be thinking about
>> Yeah.
>> so they can have the faith?
>> Fairness is a very complicated, complex thing, and whenever someone asks, "What does it mean to be fair?" I point them towards this really great talk from the FAT* conference called "21 Definitions of Fairness." It's all these different ways in which we can quantify and measure the concept of fairness. Well, at Accenture we took that talk and some other papers and created something called the Fairness Tool. It's a tool to help guide discussion and show solutions on algorithmic bias and fairness. Now, the way we think about it is not as a decision maker but as a decision enabler. So how can you communicate as a data scientist to a non-technical person, to explain the potential flaws and problems, and then take collective action? The algorithm can help you make that decision, but it's not automating the decision for you. So what it does is help smooth the conversation and pinpoint where there might be bias or unfairness in your algorithm.
>> Right. Well, we don't have time tonight, but another time we're going to
>> Sure.
>> dig deeper into this, and all the biomechanics and bioengineering
>> Yes.
>> and a lot of great topics that you've covered in a number of your talks. So I really enjoyed getting to meet you; you do terrific work, really enjoyed it.
>> Thank you, thank you very much.
>> Alright, thank you. She's Rumman, I'm Jeff, you're watching theCUBE. We're at the Accenture Innovation Hub in downtown San Francisco. Thanks for watching, see you next time. (upbeat techno music)
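The point of the "21 Definitions of Fairness" talk, and of the Fairness Tool discussion above, is that fairness can be quantified in many mutually incompatible ways. Below is a minimal Python sketch of that idea using two standard metrics, demographic parity and equal opportunity; it is an illustration with toy data, not Accenture's actual tool, whose internals are not described here. The same set of predictions can look fair under one definition and unfair under another.

```python
# A minimal sketch of why "fairness" has many competing definitions.
# This is NOT Accenture's Fairness Tool (whose internals are not public);
# the data and metric choices here are illustrative assumptions.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between group 0 and group 1."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates between group 0 and group 1."""
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return tpr_0 - tpr_1

# Toy labels and predictions for eight people in two groups.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Both groups receive positive predictions at the same rate (difference 0.0),
# so the model is "fair" by demographic parity...
print(demographic_parity_diff(y_pred, group))        # 0.0

# ...yet qualified people in group 0 are approved only half as often as
# qualified people in group 1, so it is unfair by equal opportunity.
print(equal_opportunity_diff(y_true, y_pred, group)) # -0.5
```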
SUMMARY :
Jeff Frick of theCUBE interviews Dr. Rumman Chowdhury, Global Lead for Responsible AI at Accenture, at the Accenture Technology Vision 2019 launch in the new Accenture Innovation Hub in San Francisco. A social scientist by background, Chowdhury distinguishes bias that comes from data and models from bias inherited from an unfair world, and warns against "moral outsourcing," where technologists disclaim responsibility for what they build. She argues that ethical guardrails make companies more innovative ("brakes help a car go faster"), calls for ethical corporate cultures and interdisciplinary teams rather than one-off ethics courses, and describes Accenture's Fairness Tool, a decision enabler that helps data scientists and non-technical stakeholders pinpoint algorithmic bias.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Frick | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Rumman Chowdhury | PERSON | 0.99+ |
Rumman | PERSON | 0.99+ |
SiliconANGLE Media | ORGANIZATION | 0.99+ |
two kinds | QUANTITY | 0.99+ |
Accenture | ORGANIZATION | 0.99+ |
second one | QUANTITY | 0.99+ |
One | QUANTITY | 0.98+ |
Salesforce | ORGANIZATION | 0.98+ |
tonight | DATE | 0.97+ |
ACLU | ORGANIZATION | 0.97+ |
first | QUANTITY | 0.97+ |
90 miles an hour | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
first one | QUANTITY | 0.96+ |
Accenture Technology Vision 2019 | EVENT | 0.92+ |
San Francisco | LOCATION | 0.92+ |
ntown San Francisco | LOCATION | 0.9+ |
theCUBE | ORGANIZATION | 0.9+ |
21 | TITLE | 0.82+ |
Accenture Tech Vision 2019 | EVENT | 0.79+ |
15 | QUANTITY | 0.76+ |
Accenture Innovation Hub | ORGANIZATION | 0.72+ |
Accenture Technology | EVENT | 0.7+ |
Innovation | LOCATION | 0.68+ |
Tower | LOCATION | 0.64+ |
Hub | ORGANIZATION | 0.6+ |
Number two | QUANTITY | 0.57+ |
Vision Launch | EVENT | 0.56+ |
and Explainability | TITLE | 0.54+ |
2019 | DATE | 0.52+ |
Definitions of Fairness | EVENT | 0.51+ |
Fat | ORGANIZATION | 0.46+ |
Tool | OTHER | 0.44+ |
Star | TITLE | 0.4+ |
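The entities table above reads like raw output from an automated named-entity-recognition pass, which would explain fragment entries such as "ntown San Francisco" and "Star". The actual pipeline behind the table is not stated in the source; as a hedged sketch of how such entity/category/confidence rows can be produced, here is an example using the Hugging Face transformers NER pipeline. Note that its default model emits PER/ORG/LOC/MISC labels rather than the PERSON/ORGANIZATION categories shown above.

```python
# A hedged sketch of producing an entity/category/confidence table like the
# one above. Assumption: the Hugging Face "transformers" library and its
# default English NER model; the source does not say what tool was used.
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")

text = ("Jeff Frick here with theCUBE. We are live in downtown San Francisco "
        "in the brand new Accenture Innovation Hub.")

print("Entity | Category | Confidence |")
print("---|---|---|")
for ent in ner(text):
    # Each result is a dict with the grouped entity text ('word'), its
    # label ('entity_group': PER, ORG, LOC, or MISC), and a model score.
    print(f"{ent['word']} | {ent['entity_group']} | {ent['score']:.2f} |")
```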