
Cecilia Aragon, University of Washington | WiDS Worldwide Conference 2022


 

>>Hey, everyone. Welcome to theCUBE's coverage of Women in Data Science 2022. I'm Lisa Martin, and I'm here with one of this year's featured keynote speakers: Cecilia Aragon, professor in the Department of Human Centered Design and Engineering at the University of Washington. Cecilia, it's a pleasure to have you on theCUBE. >>Thank you so much, Lisa. It's a pleasure to be here as well. >>You have an amazing background that I want to share with the audience. You are a professor, a data scientist, an aerobatic pilot, and an author, with expertise in human-centered data science, visual analytics, aviation safety, and the analysis of extremely large and complex data sets. That's quite the background. >>Well, thank you so much. It's all very interesting and fun. >>And as a professor, you study how people make sense of vast data sets through a combination of computer science and art, which I love. And as an author, you write about interesting things: how to overcome fear, which is something everybody can benefit from, and how to expand your life until it becomes amazing. I need to take a page out of your book. You were also honored by President Obama a few years back. My goodness. >>Thank you so much. Yes, I've had quite a journey to get here, but I feel really fortunate to be here today. >>Talk about that journey. I'd love to understand whether you were always interested in STEM or whether it was something you got into later. I know that you are the co-founder of Latinas in Computing and a passionate advocate for girls and women in STEM. Were you always interested in STEM, or did you come to it by a more nonlinear path? >>I was always interested in it. When I was a young girl, I grew up in a small Midwestern town; my parents are both immigrants, and I was one of the few Latinas in a mostly white community. I loved math, but I also wanted to be an astronaut.
And I remember, when we were asked, I think it was in second grade, what we would like to be when we grew up, I said, oh, I want to be an astronaut. And my teacher said, oh, you can't do that, you're a girl, pick something else. So I picked math, and she was like, okay. So I always wanted to, well, maybe it would be better to say I never really lost my love of being up in the air, and potentially space. But I ended up working in math and science, and I loved it, because one of the great advantages of math is that it's kind of like a magic trick for young people, especially if you're a girl or from an underrepresented group: if you get the answers right on a math test, no one can mark you wrong. It doesn't matter what the color of your skin is or what your gender is. Math is powerful that way. And I will say there's nothing like standing in front of a room of people who think little of you and silencing them with your love of numbers. >>I love that. I never thought about math as power before, but it clearly is. I wish we had more time, because I would love to get into how you overcame that fear, and you write books about that. But being told you can't be an astronaut, you're a girl, and maybe being laughed at because you liked math, how did you overcome that and decide, never mind, I'm doing it anyway? >>Well, okay, the short answer is I had incredible imposter syndrome. I didn't believe that I was smart enough to get a PhD in math and computer science. What enabled me to do it was becoming a pilot: I learned how to fly small airplanes, and I learned how to fly them upside down and pointing straight at the ground. I know this might sound kind of extreme, so it's not what I recommend to everybody. But if you are brought up in a way where everybody thinks little of you, one of the best things you can possibly do is take on a challenge that's scary. I was afraid of everything, but learning to fly, and especially learning to fly loops and rolls, gave me the confidence to do everything else, because I thought, I've pointed an airplane at the ground at 250 miles an hour and waited; why am I afraid to get a PhD in computer science? >>Wow. How empowering is that? >>Yeah, it really was. So that's really how I overcame the fear. And I will say, I encountered situations getting my PhD in computer science where I didn't believe I was good enough to finish the degree, where I didn't believe I was smart enough. What I learned later on is that that was just emotional residue from my childhood, from people telling me what I couldn't achieve. >>And look what you've achieved so far. It's amazing. We're going to be talking about some of the books you've written, but I want to get into data science and AI and get your thoughts on this. Why is it necessary to think about human issues in data science, and what are your thoughts there? >>So there's been a lot of work in data science recently looking at societal impacts. If you treat data science as a purely technical field and don't think about unintended consequences, you can end up with tremendous injustices and harms, to society and to individuals. I think any of us who has dealt with an inflexible algorithm, even just calling customer service and being told, press five for this, press four for that, when you don't fit into any of those categories, or having the system hang up on you after an hour, will understand that any algorithmic approach, especially over very large data sets, carries the risk of impacting people, particularly people from low-income or marginalized groups; but really, any of us can be impacted in a negative way.
>>And so, as a developer of algorithms that work over very large data sets, I've always found it really important to consider the humans on the other end of the algorithm. That's why I believe that all data science is truly human-centered, or should be human-centered, and that it involves social issues as well as technical ones. So, one example: many of us who started working in data science, and I have to admit this included me when I started out, assume that data is unbiased, that it's scrubbed of human influence, that it is pure in some way. However, that's really not true, as I've found working with data sets, and as is generally known in the field: data sets are touched by humans everywhere. As a matter of fact, in our recent book, Human-Centered Data Science, we talk about five important points where humans touch data, no matter how scrubbed of human influence it's supposed to be. The first one is discovery: when a human encounters a data set and starts to use it, that's a human decision. Then there's capture, the process of searching for a data set; any data has to be selected and chosen by an individual. Once the data is brought in, there's curation: a human has to select among data sets and decide which is the proper one to use, making judgments all the time. Perhaps one of the most important ways data is shaped by humans is what we call the design of data: whenever you bring in a data set, you have to categorize it. For example, suppose you are a geologist classifying soil data. You don't just take whatever the description of the soil data is; you may fit it into a previously established taxonomy, and you're making human judgments in doing that. So even though you might think, oh, geology data, that's just rocks, that's soil, it has nothing to do with people, it really does. And finally, people label the data that they have. This is especially critical when humans are making subjective judgments, such as what race the person in a data set is. They may judge it by looking at the individual's skin color, or they may try to apply an algorithm, but we all have very different skin colors, and categorizing us into race boxes diminishes us, makes us less than we truly are. So it's very important to realize that humans touch the data and interpret the data; it is not scrubbed of bias. And when we make algorithmic decisions, even the very fact of having an algorithm that makes a judgment, say, on whether a prisoner is likely to reoffend, affects the judge: even if the algorithm only makes a recommendation, the judge is influenced by it, and that obviously has an impact on that human's life. So we consider all of this. >>So you've just given five solid reasons why data science and AI are, or should be, inevitably human-centered. But in the past, what led to the separation between data science and humans? >>Well, I think a lot of it simply has to do with incorrect mental models. Many of us grew up thinking that humans have biases but computers don't, so that if we just take decision-making out of people's hands and put it into the hands of an algorithm, we will get less biased results. However, recent work in data science and artificial intelligence has shown that that's simply not true: algorithms reinforce human biases, and they amplify them. So algorithmic biases can be much worse than human biases and can have greater impact. >>So how do we pull ethics into all of this, data science and AI and that ethical component, which seems like it needs to be foundational?
It absolutely has to be foundational. This is why we believe, and what we teach at the University of Washington in our data science courses, that ethical and human-centered approaches and ideas have to be brought in at the very beginning of the algorithm. It's not something you slap on at the end, or say, well, I'll wait for the ethicists to weigh in on this. We are all human, we can all make human decisions, and we can all think about the unintended consequences of our algorithms as we develop them; we should do that at the very beginning. All algorithm designers really need to spend some time thinking about the impact their algorithm may have. >>Do you find that people are still in need of convincing, or is it generally moving in that direction of understanding that we need to bring ethics in from the beginning? >>It's moving in that direction, but there are still people who haven't modified their mental models yet, so we're working on it. And we hope that our book will be used as a supplemental textbook in the many data science courses that are focused exclusively on the algorithms, and that it can open up the idea of considering human-centered approaches from the beginning, alongside the algorithms and the mathematical and statistical techniques, so that the next generation of data scientists and artificial intelligence developers will be able to mitigate some of the potentially harmful effects. We're very excited about this. This is why I'm a professor: I want to teach the next generation of data scientists and artificial intelligence experts how to make sure their work really achieves what they intend, which is to make the world a better place, not a worse one; to enable humans to do better; to mitigate biases; and really to lead us into this century in a positive way.
>>So the book, Human-Centered Data Science, which you can see there over Cecilia's right shoulder: when does it come out, and how can folks get a copy? >>It came out March 1st, and it's available in bookstores everywhere. It was published by MIT Press, and you can order it online, from your local independent bookstore, or from your university bookstore as well. >>Excellent. I've got to get my hands on a copy and dig into it, because it sounds so interesting, and so thoughtful, and you describe it so clearly, along with all the opportunities that AI, data science, and humans are going to unlock for the world, for jobs, and for great things like that. I'm sure there's lots of great information there. Last question: I mentioned you are keynoting at this year's conference. Talk to me about the top three takeaways the audience is going to get from your keynote. >>So I'm very excited to have been invited to WiDS this year, which of course is a wonderful conference supporting women in data science, and I've been a big fan of it since it was first developed here at Stanford. The first of the three top takeaways is to really consider that data science can be rigorous and mathematical and human-centered and ethical. It's not a trade-off; it's both at the same time. That's the number one idea I hope the keynote will bring to the entire audience. Secondly, I hope it will encourage women, and anyone who's been told, maybe you're not a science person, or this isn't for you, or you're not good at math, to disbelieve those views. And third, to realize that if you, as a member of any type of underrepresented group, have ever felt, oh, I'm not good enough for this, I'm not smart enough, it's not for me, that you will reconsider, because I firmly believe that everyone can be good at math. It's a matter of having the information presented to you in a way that honors the background you have. When I started out, my high school didn't have AP classes, and I needed to learn in a somewhat different way than the people around me. What I tell young people today is, if you are struggling in a class, don't think it's because you're not good enough. It might just be that the teacher is not presenting the material in a way that is best for someone with your particular background. That doesn't mean they're a bad teacher, and it doesn't mean you're unintelligent. It just means that maybe you need to find someone else who can explain it to you simply and clearly, or maybe you need some scaffolding, that is, extra classes that help you learn. Not necessarily remedial classes; I believe very strongly, as a teacher, in giving students very challenging classes, but then giving them the scaffolding so they can master that difficult material. I have longer stories on that, but I think I've already talked a bit too long. >>I love that, the scaffolding. I think one of the high-level takeaways we're all going to get from your keynote is inspiration. Thank you so much for sharing your path to STEM, how you got here, and why humans, data science, and AI have to be foundationally human-centered. We're looking forward to the keynote. Cecilia Aragon, thank you so much for spending time with me today. >>Thank you so much, Lisa. It's been a pleasure. >>Likewise. For Cecilia Aragon, I'm Lisa Martin. You're watching theCUBE's coverage of Women in Data Science 2022.

Published Date : Feb 1 2022



Vidya Setlur, Tableau | WiDS 2022


 

(bright music) >> Hi, everyone. Welcome to theCUBE's coverage of WiDS 2022. I'm Lisa Martin, very happy to be covering this conference. I've got Vidya Setlur here with me, the director of Tableau Research. Vidya, welcome to the program. >> Thanks, Lisa. It's great to be here. >> So this is one of my favorite events. You're a keynote speaker this year, and you're going to be talking about what makes intelligent visual analytics tools really intelligent. Talk to me a little bit about some of the key takeaways that the audience is going to glean from your conversation. >> Yeah, definitely. I think we've reached a point where everybody understands that data is important, and trying to understand that data is equally important. We're also getting to the point where technology and AI are really picking up: algorithms are getting better, computers are getting faster. So there's a lot of dialogue and conversation around how AI can help with visual analysis to make our jobs easier and help us glean insights. I thought it was a really timely point where we can actually talk about it, and distill the specifics of how these tools can be intelligent, beyond just the general buzz of AI. >> And that's a great point that you bring up. There's been a lot of buzz around AI for a long time. Organizations talk about it, software vendors talk about it being integrated into their technologies, but how can AI really help to make visual analytics interpretable in a way that makes sense for the data enthusiast and the business? >> Yeah, so my point of view, which tends to be the general agreement among the research community, is that AI is getting better, and there are certain types of tasks, especially repetitive ones, where it does well. We see this with even Instagram, right? You put a picture on Instagram, there are filters that can maybe make the image look better, some fun backgrounds. And those, generally speaking, are AI algorithms at work.
So there are these simple, either fun ways or tasks that reduce friction where AI can play a role, and they tend to be really good with these repetitive tasks, right? If I had to upload a picture and constantly edit the background manually, that's a pain. So AI algorithms are really good at figuring out where people tend to do a particular task often, and that's a good place for these algorithms to come into play. But that being said, I think fundamentally speaking, there are going to be tasks where AI can't simply replace a human. Humans have a really strong visual system. We have a very highly cognitive system where we can glean insights and takeaways beyond just the pixels, or just the text. And so how do we actually design systems where algorithms augment a human, where a human can stay in the driver's seat, stay creative, but defer all these mundane or repetitive tasks that simply add friction to the computer? And that's what the keynote is about. >> And talk to me about when you're talking with organizations, where are they in terms of appetite to understand the benefits that natural language processing, AI and humans together, can have on visual analytics, and being able to interpret that data? >> Yeah. So I would say it's really moving fast. So three years ago, organizations were like, AI, it's a great buzzword, but we're wary, because when the rubber hits the road, it's really hard to put that into action. But now we're slowly seeing places where it can actually work. So organizations are really thirsty to figure out, how do we actually add customer value? How do we actually build products where AI can move from a simple, cute proof of concept working in a lab to actual production? And that is where organizations are right now. And we've already seen that with various types of examples, like machine translation. You open up a Google page in Spanish, and you can hit auto-translate and it will convert it into English. Now, is it perfect? No, but is it good enough?
Yes. And I think that's where AI algorithms are heading, and organizations are really trying to figure out what's in it for us, and what's in it for our customers. >> What are some of the cultural, anytime we talk about AI, we always talk about ethics. But what are some of the cultural, or the language-specific, challenges with respect to natural language techniques that organizations need to be aware of? >> Yeah, that's a great question, a common question, and a really important one. So as I've said, these AI algorithms are only as good as the data they're trained on. And so, in addition to incorporating those cultural aspects into the techniques, it's really important to figure out what sorts of biases come into play, right? A simple example is sarcasm in language, which different cultures interpret in different ways. There are subtleties in language, jokes. My kids have a certain type of language when they're talking with each other that I may not understand. So there's a whole complexity around cultures and generations, where language constantly evolves, as well as biases. For example, we've had conversations in the news where AI algorithms are trained on a particular data set for detecting crime, and there are hidden biases that come into play with that sort of data. So it's really important to acknowledge where the data comes from, and what sorts of cultural biases come into play. Simple language translation is already more or less a solved problem, but beyond that, we also have to account for language subtleties as well. >> Right, and the subtleties can be very dramatic. When you're talking with organizations that are really looking to become data driven: everybody talks about being data driven, we hear it on the news all the time, it's mainstream.
But what that actually really means, and how an organization actually delivers on it, are two different things. When you're talking with customers who say, okay, we've got to talk about ethics, we know that there are biases in data, how do you help them get around that so they can actually adopt the technology, and make it useful and impactful to the business? >> Yeah. So just as important as figuring out how AI algorithms can help an organization's business, it's equally important for an organization to be data literate about the data that feeds into these algorithms. So making data a first-class citizen, and figuring out: are there hidden biases? Is the data comprehensive enough? Acknowledging where there are limitations in the data, being completely transparent about that, and sharing it with customers, I think, is really key. And coming back to humans being in the driver's seat: if these experiences are designed so that humans are, in fact, in the driver's seat, then a human can intervene and correct and repair the system if they see certain types of oddities come into play with these algorithms. >> Going to ask you in our final few minutes here, I know that you have a PhD in computer graphics from Northwestern, is it? >> Yep. >> Northwestern. >> Go Wildcats, yep. >> Were you always interested in STEM and data? Talk to me a little bit about your background. >> Yeah. I grew up in a family full of academics, and female academics. And now, yes, I have boys, including my dog; everybody's male. But I have a really strong vested interest in supporting women in STEM, and I actually would go further and say STEAM; I think arts and science are both equally important. In fact, I would say that on our research team, there's a good representation of minorities and women. And data analysis and visual analysis in particular is a field that is very conducive for women, because women tend to be naturally meticulous.
They're very good at distilling what they're seeing. So I would argue that there are a host of disciplines in this space that make it equally exciting and conducive for women to jump in. >> I'm glad that you said that. That's actually quite exciting, and that's a real positive thing that's going on in the industry, and what you're seeing. So I'm looking forward to your keynote, and I'm sure the audience is as well. Vidya, it was a pleasure to have you on the program talking about intelligent visual analytics tools, and the opportunities that they bring to organizations. Thanks for your time. >> Thanks, Lisa. >> For Vidya Setlur, I'm Lisa Martin. You're watching theCUBE's coverage of WiDS conference 2022. Stick around, more great content coming up next. (bright music)

Published Date : Feb 28 2022

