
Empowerment Through Inclusion | Beyond.2020 Digital


 

>>Welcome back. I'm so excited to introduce our next session, Empowerment Through Inclusion: Reimagining Society and Technology. This is a topic that's personally very near and dear to my heart. Did you know that Latinas make up only 2% of the technology workforce? As a Latina, I know that there's so much more we could do collectively to close these gaps in diversity. At ThoughtSpot, diversity is considered a critical element across all levels of the organization. The data has shown countless times that a diverse and inclusive workforce ultimately drives innovation, better performance, and happier employees. That's why we're passionate about contributing to this conversation and also partnering with organizations that share our mission of improving diversity across our communities. At last year's Beyond, we hosted this session during a breakfast, and we packed the whole room. This year, we're bringing the conversation to the forefront to emphasize the importance of diversity in data and share the positive ramifications it has for your organization. Joining us for this session are ThoughtSpot's Chief Data Strategy Officer, Cindy Howson, and Ruha Benjamin, Associate Professor of African American Studies at Princeton University.

>>Thank you, Paola. So many of you have journeyed with me for years now on our efforts to improve diversity and inclusion in the data and analytics space. Over time, we cautiously started commiserating, eventually sharing best practices to make ourselves and our companies better. And I do consider it a milestone that last year, as Paola mentioned, half the room was filled with our male allies. But I remember one of our panelists, Natalie Longhurst from Vodafone, suggesting that we move it from a side hallway conversation, an early morning breakfast, to the main stage. And I think it was Bill Zang from AIG in Japan who said, yes, please. Everyone else agreed. But more than a main-stage topic, I want to ask you to think about inclusion beyond your role, beyond your company: how data and analytics can be used to impact inclusion and equity for society as a whole. Are we using data to reveal patterns, or to perpetuate problems, leading to bias at scale? You are the experts, the change agents, the leaders who can prevent this. I am thrilled to introduce you to the leading authority on this topic, Ruha Benjamin, Associate Professor of African American Studies at Princeton University and author of multiple books, the latest being Race After Technology. Ruha, welcome.

>>Thank you. Thank you so much for having me. I'm thrilled to be in conversation with you today, and I thought I would just kick things off with some opening reflections on this really important session theme, and then we can jump into discussion. So I'd like us to, as a starting point, wrestle with these buzzwords, empowerment and inclusion, so that we can have them be more than big platitudes and really have them reflected in our workplace cultures, in the things that we design, and in the technologies that we put out into the world. And to do that, I think we have to move beyond techno-determinism, and I'll explain what that means in just a minute. Techno-determinism comes in two forms. The first, on your left, is the idea that technology and automation, all of these emerging trends, are going to harm us, are going to necessarily harm humanity. They're going to take all the jobs; they're going to remove human agency.
This is what we might call the techno-dystopian version of the story, and it's what Hollywood loves to sell us in the form of movies like The Matrix or Terminator. The other version, on your right, is the techno-utopian story: that technologies, automation, the robots as a shorthand, are going to save humanity. They're going to make everything more efficient, more equitable. On the surface, these seem like opposing narratives, right? They're telling us different stories; at least they have different endpoints. But when you pull back the screen and look a little more closely, you see that they share an underlying logic: that technology is in the driver's seat, and that human beings, that society, can only respond to what's happening but don't really have a say in what technologies are designed. And so to move beyond techno-determinism, the notion that technology is in the driver's seat, we have to put the human agents and agencies back into the story as the protagonists, and think carefully about the human desires, worldviews, values, and assumptions that animate the production of technology. We have to put the humans behind the screen back into view.

That's a very first step, and when we do that, we see, as was already mentioned, that it's a very homogeneous group right now in terms of who gets the power and the resources to produce the digital and physical infrastructure that everyone else has to live with. So, as a first step, we need to think about how to create more participation among those who are working behind the scenes to design technology. Now, to dig a little deeper into this, I want to offer a kind of low-tech example before we get to the more high-tech ones. What you see in front of you here is a simple park bench, a public bench. It's located in Berkeley, California, which is where I went to graduate school. On this particular visit I was living in Boston, so I was back in California; it was February; it was freezing where I was coming from, and I wanted to take a few minutes in between meetings to just lay out in the sun and soak in some vitamin D. And I quickly realized that I actually couldn't lie down on this bench, because of the way it had been designed, with these armrests at intermittent intervals. And so here I thought, okay, the armrests have a functional reason for being there. You can literally rest your elbows on them, or they can create a little bit of privacy from someone sitting there that you don't know. When I was nine months pregnant, they could help me get up and down, and the same goes for the elderly. So they have a lot of functional reasons. But I also thought about the fact that they prevent people who are homeless from sleeping on the bench. And this is the Bay Area we're talking about, where, in fact, the tech boom has gone hand in hand with a housing crisis; those things have grown in tandem. So innovation has grown with inequity, because we haven't thought carefully about how to address the social context in which technology grows and blossoms. And so I thought, okay, this crisis is growing in this area, so perhaps this is a deliberate attempt to make sure that people don't sleep on the benches, by the way they're designed and where they're implemented. This is what we might call structural inequity: by the way something is designed, it has certain effects that exclude or harm different people.
And it may not necessarily be the intent, but that's the effect. I did a little digging, and I found that it's in fact a global phenomenon, this thing that architects call hostile architecture. I found single-occupancy benches in Helsinki, so only one booty at a time, no lying down there. I found caged benches in France. And in this particular town, what's interesting is that the mayor put these benches out in a little shopping plaza, and within 24 hours the people of the town rallied together and had them removed. So we see here that just because we have discriminatory design in our public space doesn't mean we have to live with it. We can actually work together to ensure that our public space reflects our better values. But I think my favorite example of all is the metered bench. In this case, the bench is designed with spikes in it, and to get the spikes to retreat into the bench you have to feed the meter; you have to put some coins in, and I think it buys you about 15 or 20 minutes; then the spikes come back up. You'll be happy to know that in this case it was designed by a German artist to get people to think critically about issues of design, not just the design of physical space but the design of all kinds of things, public policies included. And so we can think about how our public life in general is metered, how it serves those who can pay the price while others are excluded or harmed, whether we're talking about education or healthcare.

The metered bench also presents something interesting for those of us who care about technology: it creates a technical fix for a social problem. It started out as art, but some municipalities in different parts of the world have actually adopted it in their public spaces and parks in order to deter so-called loiterers from using them. By a technical fix, we mean something that creates a short-term effect, right? It gets people who may want to sleep on the bench out of sight; they're unable to use it. But it doesn't address the underlying problems that create the need to sleep outside in the first place. So, in addition to techno-determinism, we have to think critically about technical fixes that don't address the underlying issues a technology is meant to solve. This is part of a broader issue of discriminatory design, and we can apply the bench metaphor to all kinds of things that we work with or that we create. The question we have to continuously ask ourselves is: what values are we building into the physical and digital infrastructures around us? What are the spikes that we may unwittingly put into place? Or perhaps we didn't create the spikes; perhaps we started a new job or a new position, and someone hands us something: this is the way things have always been done. So we inherit the spiked bench. What is our responsibility when we notice that it's creating these kinds of harms or exclusions, or technical fixes that bypass the underlying problem? What is our responsibility? All of this came to a head in the context of financial technologies. I don't know how many of you remember these high-profile cases of tech insiders and CEOs who applied for the Apple Card. In one case, a husband and wife applied, and the husband received a much higher limit, almost 20 times the limit of his wife, even though they shared bank accounts and lived in a common-law state.
And so the question there was not only the fact that the husband was receiving a much better interest rate and limit, but also that there was no mechanism for the individuals involved to dispute what was happening. They didn't even know what factors they were being judged by, the factors creating this form of discrimination. So in terms of financial technologies, it's not simply the outcome that's the issue, or that can be discriminatory, but the process that black-boxes all of the decision-making, so that consumers and the general public have no way to question it, no way to understand how they're being judged adversely. It's the process, not only the product, that we have to care a lot about.

The case of the Apple Card is part of a much broader phenomenon of "racist and sexist robots." This is how the headlines framed it a few years ago, and I was so interested in this framing, because there was a first wave of stories that seemed shocked at the prospect that technology is not neutral. Then there was a second wave of stories that seemed less surprised: well, of course technology inherits its creators' biases. And now I think we've entered a phase of attempts to override and address the default settings of so-called racist and sexist robots, for better or worse. Here, "robots" is just a shorthand for the way people are talking about automation and emerging technologies more broadly. As I was encountering these headlines, I was thinking about how these are not problems brought on simply by machine learning or AI; they're not all brand new. And so I wanted to contribute to the conversation a larger context and a longer history, so that we can think carefully about the social dimensions of technology. I developed a concept called the New Jim Code, which plays on the phrase Jim Crow, the way that the regime of white supremacy and inequality in this country was defined in a previous era. I wanted us to think about how that legacy continues to haunt the present, how we might be coding bias into emerging technologies, the danger being that we imagine those technologies to be objective. This gives us a language to name this phenomenon so that we can address it and change it. Under the larger umbrella of the New Jim Code are four distinct ways this phenomenon takes shape, starting with the most obvious, engineered inequity: the kinds of tech-mediated inequalities that we can generally see coming. But as we go down the line, it becomes harder to detect. It's happening in our own backyards, it's happening around us, and we don't really have a view into the black box, and so it becomes more insidious. In the remaining couple of minutes, I'm just going to give you a taste of the last three of these and then move toward a conclusion so that we can start chatting.

When it comes to default discrimination, this is the way that social inequalities become embedded in emerging technologies because the designers of those technologies aren't thinking carefully about history and sociology. A great example of this came to the headlines last fall, when it was found that a widely used healthcare algorithm affecting millions of patients was discriminating against black patients. What's especially important to note here is that this healthcare algorithm does not explicitly take note of race. That is to say, it is race-neutral. But by using cost to predict healthcare needs, this digital triaging system unwittingly reproduces health disparities, because, on average, black people have incurred fewer costs for a variety of reasons, including structural inequality. So, in my review of this study by Obermeyer and colleagues, I want to draw attention to how indifference to social reality can be even more harmful than malicious intent. It doesn't have to be the intent of the designers to create this effect, and so we have to look carefully at how indifference is operating, and at how race neutrality can be a deadly force.
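[Editor's note: to make the cost-as-proxy mechanism concrete, here is a minimal simulation. It is a sketch, not a reconstruction of the algorithm Obermeyer and colleagues audited; the group labels, the 30% cost gap, and every other number are invented for illustration.]

```python
# Toy simulation of the cost-as-proxy problem: two groups with identical
# distributions of true health need, but group B incurs ~30% less cost per
# unit of need (e.g., barriers to access). Ranking patients by cost then
# under-selects group B for extra care. All numbers are illustrative.
import random

random.seed(0)

def make_patient(group):
    need = random.gauss(50, 15)            # true health need; same distribution for both groups
    access = 1.0 if group == "A" else 0.7  # group B incurs less cost for the same need
    cost = max(0.0, need * access + random.gauss(0, 5))
    return {"group": group, "need": need, "cost": cost}

patients = [make_patient(g) for g in ["A"] * 5000 + ["B"] * 5000]

# The algorithm's "risk score" is cost; the top 10% are referred to the care program.
patients.sort(key=lambda p: p["cost"], reverse=True)
top = patients[: len(patients) // 10]

share_b = sum(p["group"] == "B" for p in top) / len(top)
print(f"group B share of referrals: {share_b:.1%} (parity would be 50%)")
for g in ("A", "B"):
    needs = [p["need"] for p in top if p["group"] == g]
    if needs:
        print(f"mean true need of referred group-{g} patients: {sum(needs) / len(needs):.1f}")
```

Run as written, group B receives far less than half the referrals even though need is identical by construction, and the few group B patients who are referred are, on average, sicker than their group A counterparts: the same pattern the audit surfaced, produced by the choice of label rather than by any explicit use of race.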
When we move on to the next iteration of the New Jim Code, coded exposure, there's a tension, because on the one hand you see this image where the darker-skinned individual is not being detected by the facial recognition system, right, on the camera or on the computer. Coded exposure names this tension between wanting to be seen, included, and recognized, whether in facial recognition or in recommendation systems or in tailored advertising, and, on the other side, being over-included: being surveilled, being too centered. We should note that it's not simply being left out that's the problem; it's also being included in harmful ways. So I want us to think carefully about the rhetoric of inclusion and understand that inclusion is not simply an endpoint; it's a process, and it is possible to include people in harmful processes. We want to ensure the process is not harmful for it to really be effective.

The last iteration of the New Jim Code, the most insidious, let's say, is technologies that are touted as helping us address bias. They're not simply including people; they're actively working to address bias. In this case, there are a lot of different companies using AI to create hiring software and hiring algorithms, including this one, HireVue. The idea is that there's a lot that AI can keep track of that human beings might miss, so the software can make data-driven talent decisions. After all, the problem of employment discrimination is widespread and well documented. So the logic goes: wouldn't this be even more reason to outsource decisions to AI? Well, let's think about this carefully. This is the idea of techno-benevolence: trying to do good without fully reckoning with how technology can reproduce inequalities. Some colleagues of mine at Princeton tested a natural language processing algorithm, looking to see whether it exhibited the same tendencies that psychologists have documented among humans. And what they found was that, in fact, the algorithm associated black names with negative words and white names with pleasant-sounding words. This particular audit builds on a classic study done around 2003, before all of the emerging technologies were on the scene, in which two University of Chicago economists sent out thousands of resumes to employers in Boston and Chicago, and all they did was change the names on those resumes. All of the other work history and education were the same, and then they waited to see who would get called back. The fictional applicants with white-sounding names received 50% more callbacks than the black applicants. So if you're presented with that study, you might be tempted to say: well, let's let technology handle it, since humans are so biased.
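[Editor's note: the arithmetic behind that kind of paired-resume audit is simple enough to sketch. The counts below are illustrative stand-ins chosen so the gap matches the "50% more callbacks" figure quoted in the talk; they are not the study's published numbers.]

```python
# A two-proportion z-test on callback counts from a paired-resume audit:
# same resumes, only the names changed, so any rate gap is attributable
# to the name. Counts here are invented for illustration.
from math import sqrt

def callback_audit(calls_a, n_a, calls_b, n_b):
    """Return callback rates for groups A and B and the z-statistic
    testing whether the gap is larger than chance would produce."""
    p_a, p_b = calls_a / n_a, calls_b / n_b
    pooled = (calls_a + calls_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_a - p_b) / se

p_white, p_black, z = callback_audit(calls_a=235, n_a=2435, calls_b=157, n_b=2435)
print(f"white-sounding names: {p_white:.1%}  black-sounding names: {p_black:.1%}  z = {z:.1f}")
```

With counts of this size, the gap comes out around z = 4.1, far outside chance, which is why audit studies of this simple design remain a standard tool for detecting discrimination, whether the decision-maker is a person or a model.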
But my colleagues here in computer science found that this natural language processing algorithm actually reproduced those same associations with black and white names. So too with gender-coded words and names, as Amazon learned a couple of years ago when its own hiring algorithm was found to be discriminating against women. Nevertheless, it should be clear by now why technical fixes that claim to bypass human biases are so desirable. If only there were a way to slay centuries of racist and sexist demons with a social justice bot! Beyond desirable, more like magical. Magical for employers, perhaps, looking to streamline the grueling work of recruitment, but a curse for many job seekers; as this headline puts it, your next interview could be with a racist bot, bringing us back to the problem space we started with just a few minutes ago. It's worth noting that job seekers are already developing ways to subvert the system, by trading answers to employers' tests and creating fake applications as informal audits of their own. In terms of a more collective response, there's a federation of European trade unions called UNI Global that has developed a charter of digital rights for workers, which calls for automated and AI-based decisions to be included in bargaining agreements. This is one of many efforts to change the ecosystem, to change the context in which technology is being deployed, to ensure more protections and more rights for everyday people. In the US, there's the algorithmic accountability bill that's been introduced, one effort to create more protections around this ubiquity of automated decisions, and I think we should all be calling for more public accountability when it comes to the widespread use of automated decisions. Another development that keeps me somewhat hopeful is that tech workers themselves are increasingly speaking out against the most egregious forms of corporate collusion with state-sanctioned racism. To get a taste of that, I encourage you to check out the hashtag #TechWontBuildIt, among the other statements they have made while walking out and petitioning their companies. As one group said: as the people who build the technologies that Microsoft profits from, we refuse to be complicit.

In terms of education, which is my own ground zero, it's a place where we can grow a more historically and socially literate approach to tech design. And this is just one resource that you all can download, developed by some wonderful colleagues at the Data & Society Research Institute in New York. The goal of this intervention is threefold: to develop an intellectual understanding of how structural racism operates in algorithms, social media platforms, and technologies not yet developed; an emotional intelligence concerning how to resolve racially stressful situations within organizations; and a commitment to take action to reduce harms to communities of color. As a final way to think about why these things are so important, I want to offer a couple of last provocations. The first is for us to think anew about what deep learning actually is when it comes to computation. I want to suggest that computational depth, when it comes to AI systems, without historical or social depth, is actually superficial learning. We need a much more interdisciplinary, integrated approach to knowledge production, and to observing and understanding patterns, one that doesn't simply rely on a single discipline to map reality.
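[Editor's note: one everyday engineering habit that follows from that provocation is refusing the single aggregate accuracy number and disaggregating a system's errors by the groups it will touch, the same lens that exposed the facial recognition and healthcare disparities discussed above. A minimal sketch, with hypothetical labels and predictions:]

```python
# Disaggregated evaluation: report error rates per group, not one number.
# The demo records are hypothetical face-detection results:
# 1 = face present / detected, 0 = absent / not detected.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples.
    Returns per-group false-negative and false-positive rates."""
    counts = defaultdict(lambda: {"fn": 0, "fp": 0, "pos": 0, "neg": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["pos"] += 1
            if y_pred == 0:
                c["fn"] += 1
        else:
            c["neg"] += 1
            if y_pred == 1:
                c["fp"] += 1
    return {g: {"FNR": c["fn"] / max(1, c["pos"]),
                "FPR": c["fp"] / max(1, c["neg"])}
            for g, c in counts.items()}

demo = [("lighter", 1, 1), ("lighter", 1, 1), ("lighter", 0, 0),
        ("darker", 1, 0), ("darker", 1, 1), ("darker", 0, 0)]
print(error_rates_by_group(demo))
# A false-negative-rate gap across groups is the "coded exposure" failure:
# one group is simply not being seen by the system.
```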
The last provocation is this. If, as I suggested at the start, inequity is woven into the very fabric of our society, built into the design of our policies, our physical infrastructures, and now even our digital infrastructures, that means that each twist, coil, and code is a chance for us to weave new patterns, practices, and politics. The vastness of the problems we're up against will be their undoing once we accept that we are pattern makers. So what does that look like? It looks like refusing colorblindness as an antidote to tech-mediated discrimination; rather than refusing to see difference, let's take stock of how the training data and the models we're creating have built-in decisions from the past that have often been discriminatory. It means actually thinking about the underside of inclusion, which can be targeting, and asking how we create a more participatory rather than predatory form of inclusion. And ultimately, it also means owning our own power in these systems so that we can change the patterns of the past. If we inherit a spiked bench, that doesn't mean we need to continue using it. We can work together to design more just and equitable technologies. So with that, I look forward to our conversation.

>>Thank you, Ruha. That was... I expected it to be amazing, as I have been devouring your book these last few weeks, so I knew it would be impactful. I know we will never think about park benches, and how they're designed, the same way again. And you laid down the gauntlet. Oh, my goodness, that "tech won't build it." Well, I would say, if the ThoughtSpot team has any say, we absolutely will build it, and we will continue to educate ourselves. So you made the point that it doesn't matter if it was intentional or not; unintentional bias has as big an impact. How do we address that? Does it just start with awareness-building, or how do we address it?

>>Yeah, so it's important, I mean, it is important to have good intentions. By saying that intentions are not the end-all, be-all, I don't mean that we're throwing intentions out, but I am saying that so many things happen in the world unwittingly, without someone sitting down to make them good or bad. And this goes on both ends. The analogy I often use is: if I'm parked outside and I see someone breaking into my car, I don't run out there and ask, now, do you feel in your heart that you're a thief? Do you intend to be a thief? I don't grill their identity or their intention to harm me; I look at the effect of their actions. And so, in terms of the teams we work on, one of the things we can do, again, is to have a range of perspectives around the table that can think ahead, like chess, about how things might play out. But also, once we've created something and it has entered the world, we need regular audits and check-ins to see when it's going off track. Just because we intended to do good when we set it out, when it goes sideways we need formal mechanisms, built into the process, that can get it back on track or even remove it entirely if need be. We see that with different products, right, that get recalled. We need that to be formalized, rather than putting the burden on the people using these things to have to raise the awareness, or to have to come to us, like with the Apple Card, right, to say: this thing is not fair. Why don't we have that built into the process to begin with?
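[Editor's note: as a sketch of what such a formal mechanism might look like in code, here is a recurring production check that flags outcome disparities for human review. The four-fifths threshold is borrowed from US employment guidance and is an illustrative policy choice, not something prescribed in the talk; the scenario numbers are invented.]

```python
# A scheduled disparity check: compare favorable-outcome rates across
# groups and flag for human review when any group's rate falls below a
# fixed fraction of the best group's rate.
def disparity_check(outcomes, threshold=0.8):
    """outcomes: {group: (favorable_count, total_count)}.
    Uses the "four-fifths" heuristic as a tripwire, not a verdict."""
    rates = {g: fav / tot for g, (fav, tot) in outcomes.items()}
    best = max(rates.values())
    flagged = {g: round(r, 3) for g, r in rates.items() if r < threshold * best}
    return rates, flagged

# Hypothetical weekly snapshot of credit-limit increases granted:
rates, flagged = disparity_check({"men": (480, 1000), "women": (310, 1000)})
print("rates:", rates)
print("needs human review:", flagged or "none")
```

The point is the plumbing: the check runs on a schedule, and a flag triggers a review process, rather than waiting for customers to notice and complain.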
>>Yeah, so a couple of things. My dad used to say the road to hell is paved with good intentions.

>>Yes. And in fact, in the book I say the road to hell is paved with technical fixes, so me and your dad are on the same page.

>>And I love your point about bringing different perspectives. I often say this is why diversity is not just about business benefits; it's your best recipe for identifying the early biases in the data sets and in the way we build things. And yet it's such a thorny problem to address, bringing new people into tech. So in the absence of that, what do we do? Is it outside review boards? Or do you think regulation is the best bet, as you mentioned a few?

>>Yeah, we really need a combination of things. On the one hand, we need something like a "do no harm" ethos, like what we see in medicine, so that it becomes part of the fabric and culture of organizations that those social values have equal or more weight than the other kinds of economic imperatives. So we have to have a reckoning in-house, but we can't leave it to people who are designing, and who have a vested interest in getting things to market, to regulate themselves; we also need independent accountability. So we need a combination. And going back to your point about diversity on teams, one really cautionary example comes to mind from last fall, when Google's new Pixel 4 phone was about to come out, and it had a kind of facial recognition component so that you could open the phone with your face. Google had been following the research showing that facial recognition systems don't work as well on darker-skinned individuals, and they wanted to get a head start; they wanted to prevent that, right? So they had good intentions. They didn't want their phone to lock darker-skinned users out. And so, to diversify their training data, they hired contract workers and told them to engage black people: tell them to use the phone, play with some kind of app, take a selfie, so that their faces would populate the training set. But they did not tell the people what their faces were going to be used for; they withheld that information. They didn't say it was being used for the facial recognition system. And the contract workers went to the media and said, something's not right: why are we being told to withhold information? In fact, going back to the park bench example, they had been told to give people who are homeless $5 gift cards to play with the phone and get their images into the set. And so this all came to light, and Google withdrew the research and the process, because it was so in line with a long history of using the most marginalized and vulnerable people and populations to make technologies better, when those technologies are likely going to harm them, in terms of surveillance and other things. I bring this up to go back to our question of how the composition of teams might help address this. I think often about who was in the room making the decision to create this process with the contract workers and the selfies and so on.
Perhaps it was a racially homogeneous group, where people weren't really sensitive to how this could be experienced or seen. But maybe it was a racially diverse group, and perhaps they lacked the disciplinary knowledge of the history of harm when it comes to science and technology. So it could also be a function of what the people in the room knew, whether they could play that chess game in their heads and see that this was not going to play out very well. And the last thing is, maybe there was disciplinary diversity, maybe there was racial and ethnic diversity, but maybe the workplace culture made it so those people didn't feel they could speak up, right? You could have all the diversity in the world, but if you don't create a context in which people who have those insights feel they can speak up and be respected and heard, then you're basically sitting on a reservoir of resources and not tapping into it to do right by your company. So it's one of those cautionary tales, I think, that we can all learn from as we try to create environments where we can elicit those insights from our teams and coworkers.

>>Your point about culture, that's really inclusion, which is very different from just diversity of thought. So I'd like to end on a hopeful note, a prescriptive note. You have some of the most influential data and analytics leaders and experts attending virtually here. If you imagine the way we use data, housing is a great example; mortgage lending has not been equitable for African Americans in particular. But if you imagine the right way to use data, what does the future hold when we've gotten better at this, more aware of this?

>>Thank you for that question. So, you know, there are a few things that come to mind for me. One: I think the mortgage environment is really the perfect context in which to think through both where the problem lies and where the solutions may lie. One of the most powerful ways I see data being used by different organizations and groups is to shine a light on past and ongoing inequities. Oftentimes, when people see the bias, say with the hiring algorithm or the language audit, when they see the names associated with negative or positive words, it tends to have a bigger impact, because they think: wow, the technology is reflecting these biases; it really must be true. Never mind that people might have been raising the issues in other ways before. But I think one of the most powerful ways we can use data and technology is as a mirror onto existing forms of inequality, one that can then motivate us to address those things. The caution is that once we come to grips with the problem, the solution is not simply going to be a technical solution, so we have to understand both the promise of data and the limits of data. When it comes to, let's say, a hiring algorithm that is now trained to look for diversity as opposed to homogeneity: say I get hired through one of those algorithms into a new workplace. I can get through the door and be hired, but if nothing else about that workplace has changed, and on a day-to-day basis I'm still experiencing microaggressions, still experiencing all kinds of issues, then that technology just gave me access to a harmful environment, you see. And so this is the idea that we can't simply expect the technology to solve all of our problems. We have to do the hard work. I would encourage everyone listening both to accept the promise of these tools and, really crucially, to understand that the real kinds of changes we need to make are going to be messy; they're not going to be quick fixes. If you think about how long it took our society to create the kinds of inequities that we now live with, we should expect to do our part, do the work, and pass the baton. We're not going to magically, with fairy dust, create a wonderful algorithm that's going to help us bypass these issues. It can expose them, but then it's up to us to do the hard work of changing our social relations, changing the culture of not just our workplaces but our schools, our healthcare systems, our neighborhoods, so that they reflect our better values.
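[Editor's note: in the mortgage context, "data as a mirror" can be as plain as tabulating outcome rates from public filings; in the US, HMDA data is released for exactly this kind of scrutiny. The records below are invented placeholders; the method, not the numbers, is the point.]

```python
# Data as a mirror: surface approval-rate gaps by group from loan records.
def approval_rates(applications):
    """applications: iterable of {'group': ..., 'approved': bool}.
    Returns the approval rate per group."""
    totals, approved = {}, {}
    for app in applications:
        g = app["group"]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + int(app["approved"])
    return {g: approved[g] / totals[g] for g in totals}

# Invented placeholder records standing in for public mortgage filings:
sample = [
    {"group": "Black applicants", "approved": False},
    {"group": "Black applicants", "approved": True},
    {"group": "Black applicants", "approved": False},
    {"group": "White applicants", "approved": True},
    {"group": "White applicants", "approved": True},
    {"group": "White applicants", "approved": False},
]
print(approval_rates(sample))
```

A gap surfaced this way is the start of the inquiry, not the end of it; as the talk argues, the mirror motivates the messier social work that follows.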
>>Ruha, so beautifully said. I think all of us are willing to do the hard work, and I like your point about using data as a mirror. At ThoughtSpot, we like to say a fact-driven world is a better world; it can give us that transparency. So, on behalf of everyone, thank you so much for your passion, for your hard work, and for talking to us.

>>Thank you, Cindy. Thank you so much for inviting me. Paola, back to you.

>>Thank you, Cindy and Ruha, for this fascinating exploration of our society and technology. We're just about ready to move on to our final session of the day, so make sure to tune in for the customer case study session with executives from Sienna and Accenture on driving digital transformation with search and AI.

Published Date : Dec 10 2020



