John Hennessy, Knight-Hennessy Scholars | ACG SV Grow! Awards 2019
(upbeat techno music) >> From Mountain View, California, it's the Cube, covering the 15th Annual Grow Awards. Brought to you by ACG SV. >> Hi, Lisa Martin with the Cube on the ground at the Computer History Museum for the 15th annual ACG SV Awards in Mountain View, California, excited to welcome to the Cube for the first time John Hennessy, the chairman of Alphabet and the co-founder of the Knight-Hennessy Scholars Program at Stanford. John, it's truly a pleasure to have you on the Cube today. >> Well, delighted to be here, Lisa. >> So I was doing some research on you, and I see Marc Andreessen has called you the godfather of Silicon Valley. >> Marc is very generous. (laughs) >> So I thought it was pretty cool that I'm going to sit down with the godfather tonight. (laughs) >> I have not done that yet. So you are keynoting the 15th Annual ACG SV Awards tonight. Talk to us a little bit about the takeaways that the audience is going to hear from you tonight. >> Well, they're going to hear some things about leadership, the importance of leadership, and obviously the importance of innovation. We're in the middle of Silicon Valley; innovation is a big thing. And the role that technology plays in our lives, how we should be thinking about that, and how do we ensure the technology is something that serves the public good. >> Definitely. So there are, I think, over 230 attendees expected tonight, over 100 C-levels. ACG SV is much more than a networking organization; there are a lot of opportunities for collaboration, for community. Tell me a little bit about your experience with that from a collaboration standpoint. >> Well, I think collaboration is a critical ingredient. I mean, for so many years, you look at the collaboration that has gone on. Just take the collaboration between the universities, my own Stanford, and Silicon Valley, and how that collaboration has developed over time and led to the founding of great companies, but also collaboration within the Valley. This is the place to be a technology person; in the whole world it's the best place, partly because of this collaboration and this innovative spirit that really is a core part of what we are as a place. >> I agree. The innovative spirit is one of the things that I enjoy about not only being in technology, but also living in Silicon Valley. You can't go to a Starbucks without hearing a conversation or many conversations about new startups or cloud technology. So the innovative spirit is pervasive here. And it's also one that I find in an environment like ACG SV. You just hear a lot of inspiring stories, and I was doing some research on them: in the last 18 months, five CEO positions have been seated and materialized through ACG SV, a number of venture deals initiated, several board positions. So a lot of opportunity in this group here tonight. >> Right, well, I think that's important, because so much of the leadership has got to come by recruiting new young people. And with the increased concern about diversity in our leadership corps and our boards, I think building that network out and trying to stretch it a little bit from perhaps the old boys' network of an earlier time in the Valley is absolutely crucial. >> Couldn't agree more. So let's now talk a little bit about the Knight-Hennessy Scholars Program at Stanford. Tell us a little bit about it. When was it founded? >> So we are in our very first year, actually, this year, our first year of scholars; we founded it in 2016.
The motivation was, I think, an increasing gap we perceived in terms of the need for great leadership and what was available. And it was in government, it was in the nonprofit world, it was in the for-profit world. So I, being a lifelong educator, said, what can we do about this? Let's try to recruit and develop a corps of younger people who show that they're committed to the greater good and who are excellent, who are innovative, who are creative, and prepare them for leadership roles in the future. >> So you're looking for, are these undergraduate students? >> They are graduate students, so they've completed their undergraduate degrees. It's a little hard to tell, when somebody's coming out of high school, what their civic commitment is, what their ability to lead is. But coming out of the undergraduate experience, and often a few years of work experience, we can tell a lot more about whether somebody has the potential to be a future leader. >> So you said, founded just in 2016. And one of the things I saw that was very interesting is projecting that in the next 50 years there are going to be 5,000 Knight-Hennessy scholars at various stages of their careers in government organizations, NGOs, as you mentioned. So looking out 50 years, you have a strong vision there, but you really expect this organization to be able to make a lasting impact. >> That's what our goal is: lasting impact over decades, because people who go into leadership positions often take a decade or two to rise to that position. But that's what our investment is; our investment is in the future. And when I went to Phil Knight, who's my co-founder and donor, my lead donor to the program, he was enthusiastic. His view was that we had a major gap in leadership, and we needed to begin training; we need to do multiple things. We need to do things like we're doing tonight, but we also need to think about that next younger generation that is up and coming. >> In terms of inspiring the next generation of innovative, diverse thinkers, talk to me about some of the things that this program is aimed at, in addition to just, you know, some of the knowledge about leadership, but really helping them understand this diverse nature in which we now all find ourselves living. >> So one of the things we do is we try to bring in leaders from all different walks of life to meet and have a conversation with our scholars. This morning, we had the UN High Commissioner for Human Rights in town, Michelle Bachelet, and she sat down and talked about how she thought about her role in addressing human rights, how to move things forward in the very complex situations we face around the world, with the collapse of many governments and many human rights violations. And how do you make that forward progress on a difficult problem? So that kind of exposure to leaders who are grappling with really difficult problems is a critical part of our program. >> And they're really seeing and experiencing real-world situations? >> Absolutely. They're seeing them up close as they're really occurring. They see the challenges. We had Governor Brown, just before he went out of office here in California, to talk about criminal justice reform, a major issue in California and around the country, and how do we make progress on that particular challenge. >> So you mentioned a couple of other leaders whom the students have had the opportunity to learn from and engage with, but you yourself are quite the established leader.
You went to Stanford as a professor in 1977. You are President Emeritus; you were president of Stanford from 2000 to 2016. So these students also get the opportunity to learn from all that you have experienced as a professor of Computer Science, as well as in one of your current roles as chairman of Alphabet. Talk to us a little bit about just the massive changes that you have seen, not just in Silicon Valley, but in technology and innovation over the last 40-plus years. >> Well, it is simply amazing. When I arrived at Stanford, there was no internet. The ARPANET was in its young days; email was something that a bunch of engineers and scientists used to communicate, nobody else did. I still remember going and seeing the first demonstration of what would become Yahoo, while David Filo and Jerry Yang had it set up in their office. And the thing that immediately convinced me, Lisa, was they showed me that their favorite pizza parlor would now allow orders to go online. And when I saw that, I said the World Wide Web is not just about a bunch of scientists and engineers exchanging information. It's going to change our lives, and it did. And we've seen wave after wave of that, with Google and Facebook and the rise of social media. And now the rise of AI. I mean, this is a transformative technology as big as anything I think we've ever seen in terms of its potential impact. >> It is. AI is so transformative. I was in Hawaii recently on vacation, and Barracuda Networks was actually advertising about AI in Hawaii, and I thought that's interesting: the people who are coming to Hawaii on vacation, presumably people of, you know, many generations who now have AI as a common household word, may not understand the massive implications and opportunities that it provides. But it is becoming pervasive at every event we're at with the Cube, and there's a lot of opportunity there. It's a very exciting subject. Last question for you. You mentioned that the Knight-Hennessy Scholars Program is really aimed towards graduate students. What is your advice to those STEM kids in high school right now who are watching this and saying, oh, John, how do you advise me to be able to eventually get into a program like this? >> Well, I think it begins by really finding your passion, finding something you're really dedicated to, pushing yourself, challenging yourself, showing that you can do great things with it. And then thinking about the bigger role you want to have with technology. After all, technology is not an end in itself. It's a tool to make human lives better, and that's the sort of person we're looking for in the Knight-Hennessy Scholars Program. >> Best advice you've ever gotten? >> Best advice I've ever gotten is to remember that leadership is about service to the people in the institution you lead. >> That's fantastic: not about yourself, but really about service to those you lead. >> About service to others. >> John, it's been a pleasure having you on the Cube tonight. We wish you the best of luck in your keynote at the 15th annual ACG SV Awards, and we thank you for your time. >> Thank you, Lisa. I've enjoyed it. >> I'm Lisa Martin; you're watching the Cube on the ground. Thanks for watching. (upbeat tech music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Marc Andreessen | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
Michelle Bachelet | PERSON | 0.99+ |
John Hennessy | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Hawaii | LOCATION | 0.99+ |
California | LOCATION | 0.99+ |
2000 | DATE | 0.99+ |
1977 | DATE | 0.99+ |
John | PERSON | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Alphabet | ORGANIZATION | 0.99+ |
Jerry Yang | PERSON | 0.99+ |
David Filo | PERSON | 0.99+ |
first year | QUANTITY | 0.99+ |
Lisa | PERSON | 0.99+ |
ACG SV | ORGANIZATION | 0.99+ |
Yahoo | ORGANIZATION | 0.99+ |
50 years | QUANTITY | 0.99+ |
Phil Knight | PERSON | 0.99+ |
Stanford | ORGANIZATION | 0.99+ |
Barracuda Networks | ORGANIZATION | 0.99+ |
Starbucks | ORGANIZATION | 0.99+ |
this year | DATE | 0.99+ |
Governor | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
JOHN | PERSON | 0.99+ |
Marc | PERSON | 0.99+ |
tonight | DATE | 0.98+ |
first time | QUANTITY | 0.98+ |
15th Annual ACG SV Awards | EVENT | 0.98+ |
Mountain View California | LOCATION | 0.98+ |
15th Annual Grow Awards | EVENT | 0.98+ |
This morning | DATE | 0.98+ |
one | QUANTITY | 0.97+ |
Five CEO | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
a decade | QUANTITY | 0.96+ |
ACG SV | EVENT | 0.96+ |
over 230 attendees | QUANTITY | 0.95+ |
ACG SV Grow! Awards 2019 | EVENT | 0.95+ |
over 100 sea levels | QUANTITY | 0.95+ |
5000 | QUANTITY | 0.95+ |
ASG SV | ORGANIZATION | 0.94+ |
first demonstration | QUANTITY | 0.93+ |
Knight-Hennessy Scholars | ORGANIZATION | 0.92+ |
President | PERSON | 0.92+ |
15th annual ACG SV Awards | EVENT | 0.91+ |
UN | ORGANIZATION | 0.9+ |
last 40 plus years | DATE | 0.9+ |
last 18 months | DATE | 0.9+ |
Cube | COMMERCIAL_ITEM | 0.85+ |
ARPANET | ORGANIZATION | 0.85+ |
knight-Hennessy Scholars Program | TITLE | 0.85+ |
High Commissioner for Human Rights | PERSON | 0.84+ |
Knight-Hennessy Scholars Program | ORGANIZATION | 0.83+ |
Knight-Hennessy Scholars Program | TITLE | 0.81+ |
over decades | QUANTITY | 0.81+ |
Computer History Museum | LOCATION | 0.75+ |
Pizza Parlor | ORGANIZATION | 0.73+ |
Emeritus | PERSON | 0.7+ |
wave after | EVENT | 0.69+ |
wave | EVENT | 0.66+ |
Cube | ORGANIZATION | 0.65+ |
Stanford | LOCATION | 0.64+ |
Brown | PERSON | 0.63+ |
Garry Kasparov | Machine Learning Everywhere 2018
>> [Narrator] Live from New York, it's theCube, covering Machine Learning Everywhere: Build Your Ladder to AI, brought to you by IBM. >> Welcome back here to New York City as we continue at IBM's Machine Learning Everywhere, Build Your Ladder to AI. Along with Dave Vellante, I'm John Walls. It is now a great honor of ours to have, I think arguably, the greatest chess player of all time; Garry Kasparov now joins us. He's currently the chairman of the Human Rights Foundation, and was a political activist in Russia as well some time ago. Thank you for joining us; we really appreciate the time, sir. >> Thank you for inviting me. >> We've been looking forward to this. Let's just, if you would, set the stage for us. Artificial intelligence is obviously quite a hot topic, and the maybe-not-conflicting, complementary nature of human intelligence. There are people on both sides of the camp, but you see them as being very complementary to one another. >> I think that's a natural development in this industry, that it will bring together humans and machines, because this collaboration will produce the best results. Our abilities are complementary. The humans will bring creativity and intuition and other typical human qualities like human judgment and strategic vision, while machines will add calculation, memory, and many other abilities that they have been acquiring quickly. >> So there's room for both, right? >> Yes, I think it's inevitable, because no machine will ever reach 100% perfection. Machines will be coming closer and closer, 90%, 92, 94, 95. But there's still room for humans, because at the end of the day, even with this massive power, you have to guide it. You have to evaluate the results, and at the end of the day the machine will never understand when it reaches the territory of diminishing returns. It's very important for humans actually to identify: so what is the task? I think it's a mistake that is made by many pundits that they automatically transfer the machine's expertise from closed systems into open-ended systems. Because in every closed system, whether it's the game of chess, the game of Go, video games like Dota, or anything else where humans already define the parameters of the problem, machines will perform phenomenally. But if it's an open-ended system, then a machine will never identify what is sort of the right question to be asked. >> Don't hate me for this question, but it's been reported, now I don't know if it's true or not, that at one point you said that you would never lose to a machine. My question is how capable can we make machines? First of all, is that true? Did you maybe underestimate the power of computers? How capable do you think we can actually make machines? >> Look, in the 80s, when the question was asked, I was much more optimistic, because we saw very little at that time from machines that could make me, world champion at the time, worry about machines' capability of defeating me in a real chess game. I underestimated the pace at which it was developing. I could see something was happening, was cooking, but I thought it would take longer for machines to catch up. As I said in my talk here, we should simply recognize the fact that everything we do while knowing how we do it, machines will do better. In any particular task that humans perform, machines will eventually surpass us. >> What I love about your story, I was telling you off-camera about when we had Erik Brynjolfsson and Andrew McAfee on, you're the opposite of Samuel P. Langley to me.
You know who Samuel P. Langley is? >> No, please. >> Samuel P. Langley, do you know who Samuel P. Langley is? He was the gentleman that, you guys will love this, the government paid, I think it was $50,000 at the time, to create a flying machine. But the Wright Brothers beat him to it, so what did Samuel P. Langley do after the Wright Brothers succeeded? He quit. But after you lost to the machine, you said, you know what, I can beat the machine with other humans, and created what is now the best chess player in the world, is my understanding. It's not a machine, but it's a combination of machines and humans. Is that accurate? >> Yes, in chess actually, we could demonstrate how the collaboration can work. Now in many areas people rely on the lessons that have been revealed, learned from what I call advanced chess: that in this team, human plus machine, the most important element of success is not the strength of the human expert. It's not the speed of the machine, but it's a process. It's an interface, so how you actually make them work together. In the future I think that will be the key to success, because we have very powerful machines, those AIs, intelligent algorithms. All of them will require very special treatment. That's why I also use this analogy of the right fuel for a Ferrari. We will have expert operators, I call them the shepherds, that will have to know exactly what the requirements of this machine or that machine, or that group of algorithms, are, to guarantee that we'll be able by our human input to compensate for their deficiencies. Not the other way around. >> What led you to that response? Was it your competitiveness? Was it your vision of machines and humans working together? >> I thought I could last longer as the undefeated world champion. Ironically, in 1997, when you just look at the games and the quality of the games and try to evaluate Deep Blue's real strength, I think, objectively, I was stronger. Because today you can analyze these games with much more powerful computers, I mean any chess app on your laptop; you cannot really compare with Deep Blue. That's natural progress. But as I said, it's not about solving the game, it's not about objective strengths. It's about your ability to actually perform at the board. I just realized that while we could compete with machines for a few more years, and that's great, it did take place. I played two more matches in 2003 with a German program, not as publicized as the IBM match. Both ended in a tie, and I think they were probably stronger than Deep Blue, but I knew it would just be over, maybe in a decade. How can we make chess relevant? For me it was very natural. I could see this immense power of calculations, brute force. On the other side I could see us having qualities that machines will never acquire. How about bringing them together and using chess as a laboratory to find the most productive ways for human-machine collaboration? >> What was the difference in, I guess, processing power basically, or processing capabilities? You played the match, this is 1997. You played the match on standard time controls, which allow you or a player a certain amount of time. How much time did Deep Blue, did the machine take? Or did it take its full time to make considerations as opposed to what you exercised? >> Well, it's the standard time control. I think you should explain to your audience that at that time it was a seven-hour game. It's what we call classical chess. We have rapid chess that is under one hour.
Then you have blitz chess, which is five to ten minutes. That was the normal time control. It's worth mentioning that other computers were beating human players, myself included, in blitz chess, in the very fast chess. We still thought that with more time we could have sort of a bigger comfort zone just to contemplate the machine's plans and actually to create real problems that the machine would not be able to solve. Again, more time helps humans, but at the end of the day it's still about your ability not to crack under pressure, because there are so many things that could take you off balance, and the machine doesn't care about it. At the end of the day the machine has a steady hand, and the steady hand wins. >> Emotion doesn't come into play. >> It's not about absolute strength, but it's about guaranteeing that it will play at a certain level for the entire game. While the human game maybe at one point could go a bit higher, at the end of the day when you look at the average it's still lower. I played many world championship matches and I analyzed the games, games played at the highest level. I can tell you that even the best games played by humans at the highest level, they include not necessarily big mistakes, but inaccuracies that are irrelevant when humans are facing humans, because if I make a mistake, a tiny mistake, then I can expect you to return the favor. Against the machine, that's just it. Humans cannot play at the same level throughout the whole game. That concentration, that vigilance are not required when humans face humans. Psychologically, when you have a strong machine, a machine that's good enough to play with a steady hand, the game's over. >> I want to point out too, just so we get the record straight for people who might not be intimately familiar with your record, you were ranked number one in the world from 1986 to 2005, for all but three months. Three months, that's three decades. >> Two decades. >> Well, 80s, 90s, and noughts, I'll give you that. (laughing) That's unheard of, that's phenomenal. >> Just going back to your previous question about why I looked for some new form of chess. It's one of the key lessons I learned from my childhood, thanks to my mother, who spent her life just helping me to become who I am, who I was after my father died when I was seven. It's about always trying to make the difference. It's not just about winning, it's about making a difference. It led me to kind of a new motto in my professional life. That is, it's all about the quality of my own game. As long as I'm challenging my own excellence, I will never be short of opponents. For me the defeat was just a kick, a push. So let's come up with something new. Let's find a new challenge. Let's find a way to turn this defeat, the lessons from this defeat, into something more practical. >> Love it. I mean, I think in your book, was it John Henry, the famous example? (all men speaking at once) >> He won, but he lost. >> Motivation wasn't competition, it was advancing society and creativity, so I love it. Another thing I just want, a quick aside, you mentioned performing under pressure. I think it was in the 1980s, it might have been in the opening of your book. You talked about playing multiple computers. >> [Garry] Yeah, in 1985. >> In 1985, and you were winning all of them. There was one close match, but the computer's name was Kasparov, and you said I've got to beat this one because people will think that it's rigged or I'm getting paid to do this. So well done.
>> I always mention this exhibition I played in 1985 against 32 chess-playing computers, because the importance of this event was not just that I won all the games, but that nobody was surprised. I have to admit that the fact that I could win all the games against these 32 chess-playing computers, and they were only chess-playing machines, so they did nothing else, probably boosted my confidence that I would never be defeated even by more powerful machines. >> Well, I love it; that's why I asked the question, how far can we take machines? We don't know, like you said. >> Why should we bother? I see so many new challenges that we will be able to take on, and challenges that we abandoned, like space exploration or deep ocean exploration, because they were too risky. We couldn't actually calculate all the odds. Great, now we have AI. It's all about increasing our risk, because we could actually measure against this phenomenal power of AI that will help us to find the right path. >> I want to follow up on some other commentary. Brynjolfsson and McAfee basically put forth the premise, look, machines have always replaced humans. But this is the first time in history that they have replaced humans in terms of cognitive tasks. They also posited, look, there's no question that it's affecting jobs. But they put forth the prescription, which I think as an optimist you would agree with, that it's about finding new opportunities. It's about bringing creativity in, complementing the machines and creating new value. As an optimist, I presume you would agree with that. >> Absolutely. I'm always saying jobs do not disappear, they evolve. It's an inevitable part of technological progress. We come up with new ideas, and every disruptive technology destroys some industries but creates new jobs. So basically we see jobs shifting from one industry to another, like from agriculture to manufacturing, from manufacturing to other sectors, cognitive tasks. But now there will be something else. I think the market will change; the job market will change quite dramatically. Again, I believe that we will have to look for riskier jobs. We will have to start doing things that we abandoned 30, 40 years ago because we thought they were too risky. >> Back to the book you were talking about, Deep Thinking, where machine intelligence ends and human creativity begins, you talked about courage. We need fail-safes in place, but you also need that human element of courage, like you said, to accept risk and take risk. >> Now it probably will be easier, but also, as I said, the machines will force a lot of talent actually to move into other areas that were not as attractive because there were other opportunities. There are so many of what I call raw cognitive tasks that are still financially attractive. I hope AI will close many loops. We'll see talent moving into areas where we just have to open new horizons. I think it's very important just to remember that technological progress, especially when you're talking about disruptive technology, is more about unintended consequences. The flight to the moon was, psychologically, important, the Space Race, the Cold War. But it was also about GPS, about so many side effects that in the 60s were not yet appreciated but eventually created the world we have now. I don't know what the consequences of us flying to Mars will be. Maybe something will happen; maybe on one of the asteroids we will just find sort of a new substance that will replace fossil fuel.
What I know is that it will happen, because when you look at human history, there's all this great exploration. It ended up with unintended consequences as the main result, not what was originally planned as the number one goal. >> We've been talking about where innovation comes from today. It's a combination, a by-product out there: a combination of data plus being able to apply artificial intelligence. And of course there's cloud economics as well. Essentially, well, is that reasonable? I think about something you said, I believe, in the past: that you didn't have the advantage of seeing Deep Blue's moves, but it had the advantage of studying your moves. You didn't have all the data; it had the data. How does data fit into the future? >> Data is vital; data is fuel. That's why I think we need to find some of the most effective ways of collaboration between humans and machines. Machines can mine the data. For instance, there has been a breakthrough in instantly mining data and human language. Now we could see even more effective tools to help us to mine the data. But at the end of the day, it's: why are we doing that? What's the purpose? What matters to us, so why do we want to mine this data? Why do we want to do it here and not there? It seems at first sight that the human responsibilities are shrinking. I think it's the opposite. We don't have to move too much, but a tiny shift, just, you know, a percentage of a degree of an angle, could actually make a huge difference when the bullet reaches the target. The same with AI. More power actually offers opportunities to start just making tiny adjustments that could have massive consequences. >> It opens up a big opportunity; that's why you like augmented intelligence. >> I think artificial is sci-fi. >> What's artificial about it? I don't understand. >> Artificial, it's an easy sell because it's sci-fi. But augmented is what it is, because our intelligent machines are making us smarter, the same way technology in the past made us stronger and faster. >> It's not artificial horsepower. >> It's created from something. >> Exactly, it's created from something. Even if the machines can adjust their own code, fine. It still will be confined within the parameters of the tasks. They cannot go beyond that, because again, they can only answer questions. They can only give you answers. We provide the questions, so it's very important to recognize that it is we who will be in the leading role. That's why I use the term shepherds. >> How do you spend your time these days? You're obviously writing, you're speaking. >> Writing, speaking, traveling around the world, because I have to show up at many conferences; AI now is a very hot topic. Also, as you mentioned, I'm the Chairman of the Human Rights Foundation. My responsibility is to help people, dissidents around the world, who are fighting for their principles and for freedom. Our organization runs the largest dissident gathering in the world. It's called the Freedom Forum. We have the tenth anniversary, the tenth event, this May. >> It has been a pleasure. Garry Kasparov, live on theCube. Back with more from New York City right after this. (lively instrumental music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Samuel P. Langley | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
Human Rights Foundation | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
$50,000 | QUANTITY | 0.99+ |
Kasparov | PERSON | 0.99+ |
Russia | LOCATION | 0.99+ |
five | QUANTITY | 0.99+ |
Garry Kasparov | PERSON | 0.99+ |
2003 | DATE | 0.99+ |
2005 | DATE | 0.99+ |
1986 | DATE | 0.99+ |
Andrew McAfee | PERSON | 0.99+ |
seven hours | QUANTITY | 0.99+ |
90% | QUANTITY | 0.99+ |
1985 | DATE | 0.99+ |
100% | QUANTITY | 0.99+ |
Ferrari | ORGANIZATION | 0.99+ |
1997 | DATE | 0.99+ |
New York | LOCATION | 0.99+ |
New York City | LOCATION | 0.99+ |
1980s | DATE | 0.99+ |
92 | QUANTITY | 0.99+ |
Mars | LOCATION | 0.99+ |
John Henry | PERSON | 0.99+ |
Space Race | EVENT | 0.99+ |
Three months | QUANTITY | 0.99+ |
seven | QUANTITY | 0.99+ |
three months | QUANTITY | 0.99+ |
94 | QUANTITY | 0.99+ |
Both | QUANTITY | 0.99+ |
both sides | QUANTITY | 0.99+ |
ten minutes | QUANTITY | 0.99+ |
Deep Blue | TITLE | 0.99+ |
one | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
95 | QUANTITY | 0.99+ |
Cold War | EVENT | 0.99+ |
under one hour | QUANTITY | 0.99+ |
tenth event | QUANTITY | 0.99+ |
two more matches | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
Erik Brynjolfsson | PERSON | 0.98+ |
Garry | PERSON | 0.98+ |
one close match | QUANTITY | 0.98+ |
tenth anniversary | QUANTITY | 0.98+ |
30 | DATE | 0.97+ |
Two decades | QUANTITY | 0.97+ |
32 chess | QUANTITY | 0.97+ |
80s | DATE | 0.97+ |
three decades | QUANTITY | 0.97+ |
today | DATE | 0.96+ |
one point | QUANTITY | 0.95+ |
Wright Brothers | PERSON | 0.95+ |
first sight | QUANTITY | 0.94+ |
Freedom Forum | ORGANIZATION | 0.93+ |
First | QUANTITY | 0.92+ |
one industry | QUANTITY | 0.92+ |
a decade | QUANTITY | 0.92+ |
60s | DATE | 0.92+ |
2018 | DATE | 0.91+ |
this May | DATE | 0.87+ |
McAfee | ORGANIZATION | 0.83+ |
90s | DATE | 0.78+ |
40 years ago | DATE | 0.75+ |
German | OTHER | 0.74+ |
Brynjolfsson | ORGANIZATION | 0.63+ |
more years | QUANTITY | 0.61+ |
theCube | ORGANIZATION | 0.6+ |
80s | QUANTITY | 0.59+ |
number | QUANTITY | 0.57+ |
Learning | ORGANIZATION | 0.35+ |
Blue | OTHER | 0.35+ |
Everywhere | TITLE | 0.32+ |
Megan Price, Human Rights Data Analysis Group - Women in Data Science 2017 - #WiDS2017 - #theCUBE
(upbeat music) >> Voiceover: Live from Stanford University, it's the Cube, covering the Women in Data Science Conference, 2017. >> Hi, welcome back to the Cube. I'm Lisa Martin, and we are at the second annual Women in Data Science Conference at Stanford University. Such an inspiring day that we've had so far, and right now we're joined by Megan Price, the executive director of the Human Rights Data Analysis Group. Megan, welcome to the Cube. >> Thank you. >> It's so exciting to have you here. Megan, your background is statistics. You have a PhD as a statistician. The Human Rights Data Analysis Group, HRDAG, is focused on statistical analysis of mass violence. Talk to us about sort of the merger of your biostatistician, or your statistician, background with human rights. Was that something that you were always interested in? >> Sure. It was, and I have to say I was really lucky. I got my Bachelor's and my Master's in statistics from a very technical engineering school in Ohio, where honestly, a lot of people would sort of pat me on the head and say, "That's nice, that you're interested in human rights. You'll outgrow that." And fortunately I had one very thoughtful mentor, who said to me, "You know, I really think public health school is the direction you should go in," and so I got my PhD in biostatistics from public health school, and it was really there that I was exposed to people who kind of said, "Yeah, social justice, human rights, do that as a day job. Get on it," and so that was really great, that I was exposed to that as something I could move into as a career. >> Exposed to them, but also you had the confidence. You obviously had a mentor that was very influential, but that takes some courage and some guts to go, you know what, yeah, this is needed. >> It's true, yeah. (laughs) >> So talk to us about some of the ... The HRDAG, we talked about it a little bit before we went live. The evolution. Share with our viewers how it's evolved to what it is today. >> Sure. So the organization, the name and work, started with work that my colleague, Dr. Patrick Ball, started doing in El Salvador and in Guatemala in the 90s. And at the time, he was working ... He'd formed a team to do the work at the American Association for the Advancement of Science. And so that was about 25 years ago. And then the work evolved, and the team just kept kind of moving to where the right home was to get that work done, and so in the early 2000s, they moved out here to Palo Alto, just up the street, to Benetech, another technical non-profit. And they provided us a really nice home for our work for nine years. And then in 2013, the time had really come for Patrick and me to spin out HRDAG as its own non-profit organization. We're fiscally sponsored right now, but we're our own institution, which we're really excited about. >> So you mentioned some of the projects that Patrick was working on. What are some of the things that were really compelling to you, specifically within human rights, that really are catalysts for the work that you're doing today? >> Sure. I think that there are a lot of quantitative questions that get raised in looking at these questions about widespread patterns of violence, and asking questions about accountability and responsibility for violence. And to answer those questions, you have to look at statistical patterns, and so you need to bring a deep understanding of the data that are available and the appropriate way to analyze and answer those questions.
From an accuracy perspective, and I understand that that's incredibly vital, especially where these important issues are concerned, how does HRDAG eliminate, or mitigate, inaccuracy issues with respect to data? >> Yeah, well, we're always thinking about each of our projects as taking place in an adversarial environment, because we ultimately assume that at the end of the day our results are going to be subjected either to the kind of deep scrutiny that comes along with any kind of socially and politically sensitive topic, or to the kind of scrutiny that happens in a courtroom. And so that's really what motivates the level of rigor that we require in our work. And we maintain that by maintaining our relationships with mostly academicians, who are really pushing these methods forward, and by staying on top of what is the most cutting-edge approach to this problem and how we can really know that we're being as transparent as possible in the way the data were collected, the way they were analyzed, the way they were processed, and the limitations of those analyses. You know, the uncertainty present in any estimates that we put out. >> Give us an example of some of the types of data sources that you're evaluating, say for the conflict in Syria. >> So in the case of Syria, we have relationships with four organizations that are all collecting information about victims who've been killed in the ongoing conflict in Syria. Those groups are the Syrian Center for Statistics and Research, the Syrian Network for Human Rights, the Damascus Center for Human Rights Studies, and the Violations Documentation Center. And those are all citizen-led groups that are maintaining networks collecting that information to the best of their ability. And they share with us largely Excel spreadsheets that contain names of victims and any other information they were able to collect about those victims. >> You mentioned university collaboration a minute ago, from a methodology standpoint. Give me an insight into ... You're getting data from these various sources, largely Excel, where we know with Excel comes humans, comes sometimes, "Oops." How are you working with universities to help evaluate the data, or what are some of the methodologies that they're recommending, given the data sources and the tools that you have? >> So there are really two stages that the data go through, and the first one is within the groups themselves, who do that first layer of verification, and that is the human verification prior to, kind of, all the risks of data entry problems. And so they're doing that on the ground, making sure that they've collected and confirmed that information. But then you're absolutely right, we get this data that's been hand-entered, with all of the risks and potential downsides of hand-entered data, and so primarily what we do is fairly conventional data processing and data cleaning, to just check for things like outliers and contradictory information. We'll do that using Python and using R. And then where our friends and colleagues in academia are really helping us out is, because there are these multiple sources collecting names of individual victims, what we have is a record linkage problem. And so we have multiple records that refer to the same individual. >> Okay. >> And so we work a lot with our academic partners to stay on top of the latest ways to de-duplicate databases that might have multiple entries that refer to the same person. And so that's been really great lately. >> Okay.
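[Editor's note: a minimal, hypothetical sketch of the kind of cleaning and de-duplication pass Price describes, written in Python since she mentions Python and R. The sample records, column names, blocking rule, and similarity threshold are all invented for illustration; HRDAG's actual record linkage relies on far more rigorous, trained matching models rather than simple string similarity.]

```python
import pandas as pd
from difflib import SequenceMatcher

# Hypothetical victim records from two documentation groups (all names and
# dates are invented; real source spreadsheets differ in structure and language).
source_a = pd.DataFrame({
    "name": ["Ahmad Khalil ", "Sara  Haddad", "Omar Nasser"],
    "date_of_death": ["2013-05-01", "2013-05-02", "2013-05-02"],
})
source_b = pd.DataFrame({
    "name": ["ahmad khalil", "Omar Naser", "Lina Aziz"],
    "date_of_death": ["2013-05-01", "2013-05-02", "2013-05-03"],
})

def normalize(df):
    """Conventional hand-entry cleanup: trim whitespace, collapse case and spacing."""
    out = df.copy()
    out["name"] = (out["name"].str.strip().str.lower()
                   .str.replace(r"\s+", " ", regex=True))
    return out

def similar(a, b, threshold=0.85):
    """Crude string similarity as a stand-in for a trained record-linkage model."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

a, b = normalize(source_a), normalize(source_b)

# Block on date of death so only plausible pairs are compared, then flag
# candidate links -- pairs of records likely referring to the same person.
candidate_links = [
    (row_a["name"], row_b["name"])
    for _, row_a in a.iterrows()
    for _, row_b in b.iterrows()
    if row_a["date_of_death"] == row_b["date_of_death"]
    and similar(row_a["name"], row_b["name"])
]
print(candidate_links)  # e.g. [('ahmad khalil', 'ahmad khalil'), ('omar nasser', 'omar naser')]
```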
What are some of the methods that you've used in Syria to quantify mass violence, and what have some of the outcomes been to date? >> So we rely primarily on methods from record linkage, and that gets us to what we know and can observe. And then from there we need to build an estimate of what we don't know and what we can't observe, because inevitably in conflict violence, some of that violence is hidden. Some of those victims have not been identified, or their stories have not been told yet. And it's our job as data scientists to use the tools at our disposal to estimate how much we don't know. And so for that step we use a class of statistical tools called multiple systems estimation. And essentially what that does is it builds on the patterns of data as they're collected by these multiple sources to model what the underlying population must have been to generate what we were able to see. >> Okay. >> And so that's been the primary analysis we've done in Syria. And what we found from that analysis is that, as valuable and important as the documented data are, they often are overwhelmed, for example when violence peaks. It may be too dangerous, and it may be impossible, to accurately record how many people have been killed. >> Okay. >> And so we need a statistical model that can help us identify when the data we observe seem to plateau, but perhaps our estimates tell us, no, in fact that was a very violent period. And then we can dig in with field experts and interpret: was that a time when we know that territorial control was in contention, or was that a time when we know that there were clashes between certain groups? And so then we can infer further from that about responsibility for violence. >> So applying some additional attributes, things that are attributing to this. What are some of the differences that you think this has made so far? >> What I hope this has done so far is simply to raise awareness about the scale of the violence that's happening in Syria. And what I hope ultimately is that it helps to attribute accountability to those who are responsible for this violence. >> You've also got some projects going on in Guatemala. Can you share a little bit about that? >> We do. Yeah, we have a couple of projects in Guatemala. The one that I've worked on most closely is looking at the historic archive of the national police in Guatemala. And that's actually the project that I started working on when I joined HRDAG. Guatemala suffered an armed internal conflict from 1960 to 1996. And during that time period, many witnesses came forward and said that the national police force participated in the violence, but at the time that the UN, the United Nations, brokered peace treaties, they weren't able to find any documentary evidence of the role the police played. And then in 2005, quite by accident, this archive, this cache of the police force's bureaucratic documents, was discovered. And so we've been studying it since then. And it's been this really fascinating problem: if you have this building full of millions and millions and millions of pieces of paper that are not really organized in any way, how do you go about studying that? And so we partnered with other experts from the American Statistical Association to design a random sample of the archive, so that we could learn about it as quickly as possible. >> What are some of the learnings that you've discovered so far?
What we've discovered so far is just the sheer magnitude of the archive, and in particular the amount of documents that were generated during the conflict. And then the other thing that we have discovered is the communication flow, the pattern of documents being sent to and from the leadership of the National Police Force. And specifically, Patrick Ball testified about that communication flow to help establish command responsibility for the former chief of police, for a kidnapping that occurred in 1984. >> Wow, incredibly impactful work. But you've got some things on the domestic frontier. Tell us a little bit about what you're working on stateside. >> We do, yeah. In the past year, we've started our first US-based project, which we're really excited about. And it's looking at the algorithms that are being used both in predictive policing and in criminal justice risk assessment. So decisions like whether or not someone should get bail, or pre-trial hearings, things like that. And we've been working with partners, primarily lawyers, to help assess, sort of, how those algorithms are working, what the underlying data are that are being fed into those algorithms, and what the ways are in which those data are biased. And so the algorithms are replicating the bias that exists in the data. >> Tell me, how does that conversation go, as a statistician with a lawyer, who is, you know, a business person? What sort of educating do you need to do with them about the impact that this data can make and how imperative it is that it be accurate? >> Yeah, well, those conversations are really interesting, because there's so much education going in both directions. We are helping them to turn their substantive question into an analytical question and sort of develop it in a way that we can do an analysis to get at that question, but then they're also helping us to understand what's the way in which this information needs to be conveyed, so that it holds up in court, and so that it establishes some sort of precedent, so that they can make policy change. >> It makes me think of sort of the topic, or the skill, of communication. A number of our guests this morning on the program, and those that we've heard speaking today, talk about the traditional data scientist skills: you know, hybrid, hacker, someone that has statistics, mathematical skills, but now really looking at somebody who also has to have other behavioral skills, be able to be creative, interpretive, but also to communicate it. I'd love to get your perspective, as you've seen data science evolve in your own career. How have you maybe trained your team on the importance of communicating this information, so that it has value and it has impact? >> Absolutely. I think creativity and communication are probably the two most important skills for a data scientist to have these days, and that's definitely something that, on our team, you know, it's always a painful process, but every time we give a talk, if we're fortunate enough that it's been videoed, we always have to go back and watch that. And I recommend to my teammates to do it quietly at home alone, maybe with their preferred beverage of choice, but that's the way that you learn and you discover, oh, I could have said that differently, or I could have said that another way, or I could have thought about a different way to present that, because I do think that that's absolutely vital.
I'm just curious what your perspective is from a curriculum standpoint; we've got a lot of students here, we've got some professors here. Is there something that you would recommend as part of ... Look back to your education. Would you think, you know what, being able to understand statistics is one thing, I need to be able to communicate it. Was that something that was part of your curriculum, or something that you think, you know what, that's a vital component of this? >> It's absolutely a vital component. It was not part of my formal curriculum, but it was something that I got out of graduate school, because I was very lucky that I got to teach essentially statistics 101 to introductory public health students. So they were graduate students, but there were a lot of students who maybe hadn't had a math class in a decade and were fairly math-phobic. >> Lisa: Sounds like me. (both laughing) >> We could, you know, hold hands and get through it together. >> Okay, oh good. Beverage of my choice, awesome. (laughs) >> Exactly. And I really feel like that was what improved my communication skills: the experience with those students, thinking about how to convey the information to that class, and going in day after day and designing that curriculum and really thinking about how to teach that class, is really the way that I learned my communication skills. >> Oh, that's fab. That real-world experience, there's nothing that beats that. What are some of the things that have excited you about participating in (mumbles) this year? >> Oh my gosh, it is so much fun to be in an audience, and to speak to an audience, that is so predominantly female. I mean, of course, that's not something that we get to do very often. And so young; I mean, this audience is really full of very energetic, ready-to-go-tackle-the-world's-problems women, and it's very invigorating for me. It helps me to kind of go back and think, all right, how can we do more and do bigger and create more opportunities for these folks to fill? >> It's a very symbiotic relationship, I think. They learn so much from you, and you're learning so much from them. It's really nice. You can feel it. Right, you can feel it here in this environment. >> Absolutely. >> Well, Megan, thank you so much for joining us on the program today. We wish you the best of luck with HRDAG and your impending new little girl. >> Thank you. (laughs) I appreciate that. >> Absolutely. Well, we thank you for watching the Cube. Again, we're live at the Women in Data Science Conference at Stanford University, second annual event. Stick around; we'll be right back. (upbeat music)
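[Editor's note: as a companion to the multiple systems estimation Price describes for the Syria analysis above, here is a minimal, hypothetical two-list capture-recapture sketch in Python. Real multiple systems estimation combines three or more documentation sources and uses log-linear or Bayesian models to handle dependence between lists; the Chapman estimator and the counts below are only the simplest possible illustration, with invented numbers.]

```python
def chapman_estimate(n1, n2, m):
    """
    Two-list capture-recapture (Chapman) estimate of a total population size.

    n1, n2 -- victims documented by list 1 and list 2
    m      -- victims found on both lists after record linkage

    This is the simplest relative of multiple systems estimation; real
    analyses use three or more lists and model dependence between them.
    """
    if m < 0 or m > min(n1, n2):
        raise ValueError("overlap cannot be negative or exceed either list")
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Invented counts: two groups documented 900 and 700 victims,
# with 300 victims matched across both datasets.
n1, n2, m = 900, 700, 300
estimated_total = chapman_estimate(n1, n2, m)
documented_unique = n1 + n2 - m
print(f"Documented (unique) victims: {documented_unique}")
print(f"Estimated total victims:     {estimated_total:.0f}")
print(f"Estimated undocumented:      {estimated_total - documented_unique:.0f}")
```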
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Megan | PERSON | 0.99+ |
Patrick Ball | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Patrick | PERSON | 0.99+ |
2005 | DATE | 0.99+ |
Ohio | LOCATION | 0.99+ |
Guatemala | LOCATION | 0.99+ |
American Statistical Association | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
El Salvador | LOCATION | 0.99+ |
Patrick Ball | PERSON | 0.99+ |
1984 | DATE | 0.99+ |
Megan Price | PERSON | 0.99+ |
National Police Force | ORGANIZATION | 0.99+ |
Syria | LOCATION | 0.99+ |
American Association for the Advancement of Science | ORGANIZATION | 0.99+ |
Syrian Network for Human Rights | ORGANIZATION | 0.99+ |
2013 | DATE | 0.99+ |
Violations Documentation Center | ORGANIZATION | 0.99+ |
United Nations | ORGANIZATION | 0.99+ |
Damascus Center for Human Rights Studies | ORGANIZATION | 0.99+ |
Excel | TITLE | 0.99+ |
Syrian Center for Statistics and Research | ORGANIZATION | 0.99+ |
1960 | DATE | 0.99+ |
HRDAG | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
nine years | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
1996 | DATE | 0.99+ |
Python | TITLE | 0.99+ |
US | LOCATION | 0.99+ |
each | QUANTITY | 0.99+ |
Stanford University | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Human Rights Data and Analysis Group | ORGANIZATION | 0.99+ |
millions | QUANTITY | 0.98+ |
UN | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.97+ |
Women in Data Science Conference | EVENT | 0.97+ |
today | DATE | 0.97+ |
past year | DATE | 0.97+ |
Women and Data Science Conference | EVENT | 0.97+ |
#WiDS2017 | EVENT | 0.97+ |
90s | DATE | 0.96+ |
Women in Data Science Conference | EVENT | 0.96+ |
two stages | QUANTITY | 0.96+ |
first one | QUANTITY | 0.95+ |
first layer | QUANTITY | 0.94+ |
both directions | QUANTITY | 0.94+ |
this morning | DATE | 0.93+ |
Stanford University | LOCATION | 0.93+ |
millions of pieces | QUANTITY | 0.91+ |
Benetech | ORGANIZATION | 0.91+ |
Public Health school | ORGANIZATION | 0.9+ |
Women in Data Science 2017 | EVENT | 0.9+ |
this year | DATE | 0.88+ |
2017 | DATE | 0.86+ |
about 25 years ago | DATE | 0.85+ |
Human Rights Data Analysis Group | ORGANIZATION | 0.81+ |
second annual | QUANTITY | 0.81+ |
Public Health school | ORGANIZATION | 0.81+ |
HRDAG | PERSON | 0.8+ |
101 | QUANTITY | 0.78+ |
human rights | ORGANIZATION | 0.77+ |
one thing | QUANTITY | 0.76+ |
Cube | ORGANIZATION | 0.74+ |
Paul Walter | LOCATION | 0.73+ |
2000s | DATE | 0.72+ |
couple | QUANTITY | 0.68+ |
paper | QUANTITY | 0.65+ |
a minute | DATE | 0.64+ |
analysis | ORGANIZATION | 0.55+ |
Dr. | PERSON | 0.53+ |