

Around theCUBE, Unpacking AI Panel, Part 2 | CUBEConversation, October 2019


 

(upbeat music)

>> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation.

>> Welcome, everyone, to this special CUBE Conversation, Around the CUBE segment, Unpacking AI, number two, sponsored by Juniper Networks. We've got a great lineup here to go around the CUBE and unpack AI. We have Ken Jennings, all-time Jeopardy champion, with us. Celebrity, great story there, we'll dig into that. John Hinson, director of AI at Evotek, and Charna Parkey, applied scientist at Textio. Thanks for joining us here for Around the CUBE, Unpacking AI, appreciate it. First question I want to get to: Ken, you're notable for being beaten by a machine on Jeopardy. Everyone knows that story, but it really brings out the question of AI and the role AI is playing in society around obsolescence. We've been hearing gloom and doom around AI replacing people's jobs, and it's not really that way. What's your take on AI and replacing people's jobs?

>> You know, I'm not an economist, so I can't speak to how easy it's going to be to retrain and re-skill tens of millions of people once these clerical and food-prep and driving and whatever jobs go away, but I can definitely speak to the personal feeling of being in that situation: kind of watching the machine take your job on the assembly line and realizing that the thing you thought made you special no longer exists. If IBM throws enough money at it, your skill essentially is now obsolete. And it was kind of a disconcerting feeling. I think that what people need is to feel like they matter, and that went away for me very quickly when I realized that a black rectangle can now beat me at a game show.

>> Okay, John, what's your take on AI replacing jobs? What's your view on this?

>> I think, look, we're all going to have to adapt. There's a lot of changes coming. There's changes coming socially, economically, politically. I think it's a disservice to us all to get too indulgent around the idea that these things are going to change. We have to absorb these things, we have to be really smart about how we approach them. We have to be very open-minded about how these things are going to actually change us all. But ultimately, I think it's going to be positive at the end of the day. It's definitely going to be a little rough for a couple of years as we make all these adjustments, but I think what AI brings to the table is heads above kind of where we are today.

>> Charna, your take around this, because the roles of humans versus machines are pretty significant; they help each other. But is AI going to dominate over humans?

>> Yeah, absolutely. I think there's a thing that we see over and over again in every bubble and collapse where, you know, in the automotive industry we certainly saw a bunch of jobs were lost, but a bunch of jobs were gained. And so we're just now actually getting into the phase where people are realizing that AI isn't just replacement, it has to be augmentation, right? We can't simply use images to replace recognition of people, we can't just use black boxes to give our FICO credit scores; it has to be inspectable. So there's a new field coming up now called explainable AI. That's actually where we're moving towards, and it's actually going to help society and create jobs.
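[Editor's note: as a rough illustration of what Charna means by "inspectable," here is a minimal sketch using scikit-learn's permutation importance on a synthetic hiring-style dataset. The feature names and data are hypothetical, invented for this example; they are not from Textio or anything discussed on the panel.]

```python
# A minimal sketch of "inspectable" AI: train a model, then measure how
# much each input feature actually drives its predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical resume features; the third is a proxy attribute that
# a fair model should ignore.
years_experience = rng.normal(5, 2, n)
skills_score = rng.normal(0, 1, n)
proxy_attribute = rng.integers(0, 2, n).astype(float)
X = np.column_stack([years_experience, skills_score, proxy_attribute])
# In this toy setup, the true outcome depends only on the first two.
y = (0.5 * years_experience + skills_score + rng.normal(0, 1, n) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time and measure the accuracy drop; a large
# drop on proxy_attribute would be the red flag described above.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in zip(["years_experience", "skills_score", "proxy_attribute"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```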
>> All right, so let's stay on that for the next round: explainable AI. This points to a golden age. There's a debate around whether we're in a bubble or a golden age. A lot of people are negative right now on tech. You can see all the tech backlash. Amazon, the big tech companies like Apple and Facebook, there's a huge backlash around this so-called tech for society. Is this an indicator of a golden age coming?

>> I think so, absolutely. We can take two examples of this. One would be, you remember when Amazon built a hiring algorithm based upon their own resume data, and they found that it was discriminating against women, because they had only had men apply. Now with Textio we're building augmented writing across the whole audience, and not from a single company, and so companies like Johnson and Johnson are increasing the pipeline by more than nine percent, which converts to 90,000 more women applying for their jobs. And so part of the difference there is that one is explainable and one isn't, and one is using the right data set, representing the audience that is consuming it and not a single company's hiring. So I think we're absolutely headed into more of a golden age, and I think these are some of the signs that people are starting to use it in the right way.

>> John, what's your take? Obviously a golden age doesn't look likely to us right now. You see Facebook approving lies as ads, Twitter banning political ads. AI was supposed to solve all these problems. Is there light at the end of this dark tunnel we're in?

>> Yeah, golden age for sure. I'm definitely a big believer in that. I think there's a new era amongst us in how we handle data in general. I think the most important thing we have here, though, is education around what this stuff is, how it works, how it's affecting our lives individually and at the corporate level. This is a new era of informing and augmenting literally everything we do. I see nothing but positives coming out of this. We have to be obviously very careful in our approach to all the biases that already exist today, which are only going to be magnified with these types of algorithms at mass scale. But ultimately, if we can get over that hurdle, which I believe collectively we all need to do together, I think we'd live in a much better, less wasteful world just by approaching the data that's already at hand.

>> Ken, what's your take on this? It's like a Daily Double question. Is it going to be a golden age?

>> (laughs)

>> It's going to come sooner or later. Do we have to have catastrophe, do we have to have reality hit us in the face, before we realize that tech is good and start shaping it? It's pretty ugly right now in some of the situations out there, especially in the political scene with the election in the US. You're seeing some negative things happening. What's your take on this?
>> I'm much more skeptical than John and Charna. I feel like that kind of blinkered "it's going to be great" is something you have to actually be in the tech industry, hearing it all day, to believe. I remember seeing the lay person's exposure to Watson when Watson was on Jeopardy, hearing the questions reporters would ask and seeing the memes that would appear, and everyone's immediate reaction, to something as innocuous as an AI algorithm playing on a game show, was to ask: is this Skynet from Terminator 2? Is this the computer from The Matrix? Is this HAL pushing us out of the airlock? Everybody immediately first goes to "the tech is going to kill us." That's everybody's first reaction, and it's weird. I don't know, you might say it's just because Hollywood has trained us to expect that plot development, but I almost think it's the other way around. That's a story we tell because we're deeply worried about our own meaning and obsolescence, when we see how little these skills might be valued in 10, 20, 30 years.

>> I can't tell you how much, by the way, Star Trek, Star Wars and Terminator probably affected the nomenclature of the technology. Everyone references Skynet: oh my God, we're going to be taken over and killed by aliens and machines. This is a real fear. I think it's an initial reaction. You felt that, Ken, so I've got to ask you: where do you think the crossover point is for people to internalize the benefits of, say, AI? Because people will say, hey, look back at life before the iPhone, look at life before these tools were out there. Some will say society's gotten better, but yet there's this surveillance culture, and on and on. So what do you guys think the crossover point is for the reaction to change from "oh my God, it's Skynet, gloom and doom" to "this actually could be good"?

>> It's incredibly tricky, because as we've seen, the perception of AI both in and out of the industry changes as AI advances. As soon as machine learning can actually do a task, there's a tendency to say, there's this no-true-Scotsman problem, where we say, well, that clearly can't be AI, because I see how the trick worked. And yeah, humans lose at chess now. So when these small advances happen, the reaction is often, oh, that's not really AI. And by the same token, it's not a game-changer when your email client can start to auto-complete your emails; that's a minor convenience to you, but you don't think, oh, maybe Skynet is good. I really do think it's going to have to be, maybe the inflection point is when it starts to become so disruptive that actually public policy has to change, so we get serious about...

>> And public policy has started changing.

>> ...whatever their reactions are.

>> Charna, your thoughts.

>> The public policy has started changing, though. We just saw, I think it was in September, where California banned the use of AI in body cameras, both real-time and after the fact. So I think that's part of the pivot point that we're actually seeing: public policy is changing. The state of Washington currently has a task force for AI that's making a set of recommendations for policy starting in December. But I think part of what we're missing is that we don't have enough digital natives in office to even attempt, to your point, Ken, to predict what we're even going to be able to do with it, right? There is this fear because of misunderstanding, but a lot of our digital natives also don't have respect for our political climate right now, and they need to be there, making this policy.

>> John, weigh in on this, because you're a director of AI; you're seeing the positive, and you have to deal with the uncertainty as well, the growth of machine learning. And just this week Google announced more TensorFlow for everybody. You're seeing open source. So there's a tech push, almost a democratization, going on with AI. So I think this crossover point might be sooner in front of us than people think. What's your thoughts?

>> Yeah, it's here right now. All these things can essentially be put into an environment. You can see these in products, or making business decisions or political decisions. These are all available right now. They're available today, and it's within 10 to 15 lines of code. It's all about the data sets, so you have to be really good stewards of the data that you're using to train your models.
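[Editor's note: for readers wondering what "within 10 to 15 lines of code" looks like in practice, here is the standard TensorFlow/Keras starter example, a complete image classifier in roughly that many lines. The dataset (MNIST) and architecture are illustrative defaults, not anything discussed on the panel.]

```python
# A complete, trainable image classifier in roughly a dozen lines of
# Keras. The code is the easy part; as noted above, the data stewardship
# is where the real work lives.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),  # one logit per digit class
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)
```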
But I think the most important thing, back to the Skynet and all this science fiction side: we have to collectively start telling the right stories. We need better stories than just "robots are going to take us over and destroy all of our jobs." I think more interesting stories really revolve around: what about public defenders who can have an information-augmentation algorithm that's going to help them get their job done? What about tailor-made medicine that's going to tell me exactly what the conditions are based off of a particular treatment plan, instead of guessing? What about tailored education that's going to look at all of my strengths and weaknesses and present a plan for me? These are things that AI can do. Charna's exactly right: if we don't get this into the right political atmosphere, one that helps balance the capitalist side with the social side, we're going to be in trouble. So that's got to be embedded in every layer of enterprise, as well as society in general. It's here, it's now, and it's real.

>> Ken, before we move on to the ethics question, I want to get your thoughts on this, because we have an Alexa at home. We had an Alexa at home; my wife made me get rid of it. We had an Apple device, what are they called... the HomePods; that's gone. I bought a Portal from Facebook, because I always buy the earliest stuff; that's gone. We don't want listening devices in our house, because in order to get that AI, you have to give up listening, and this has been an issue. What do you have to give to get? This has been a big question. What's your thoughts on all this?

>> I was at an Amazon event where they were trumpeting how no technology had ever caught on faster than these personal digital assistants, and yet every time I'm in a use case, a household that's trying to use them, something goes terribly wrong. My friend had to rename his because the neighbor kids kept telling Alexa to do awful things. He renamed it Computer, and now every time we use the word computer, the wall tells us something we don't want to know.

>> (laughs)

>> This is just anecdata, but maybe it speaks to something deeper: the fact that we don't necessarily like the feeling of being surveilled. IBM was always trying to push Watson as the Star Trek computer that helpfully tells you exactly what you need to know in the right moment, but that's got downsides too. I feel like, if nothing else, we may start to value individual learning and knowledge less when we feel like a voice from the ceiling can deliver unto us the fact that we need. I think decision-making might suffer in that kind of a world.

>> All right, this brings up ethics. I bring up Amazon and the voice stuff because this is the new interface people want to have with machines. I didn't mention phones, Android and Apple; they need to listen in order to make decisions. This brings up the ethics question around who sets the laws and what society should do about this, because we want the benefits of AI. John, you pointed out some of them. You've got to give to get. Where are we on ethics? What's the opinion, what's the current view on this? John, we'll start with you, on your ethics view of what needs to change now to move the ball faster.

>> Data is gold. Data is gold at an exponential rate when you're talking about AI. There should be no situation where these companies get to collect data at no cost or no benefit to the end consumer.
So ultimately we should have the option to opt out of any of these products and any of this type of surveillance wherever we can. Public safety is a little bit of a different situation, but on the commercial side, there are more expensive and even more difficult ways to train these models with a data set that isn't just basically grabbing everything out of your personal lives. I think that should be an option for consumers, and that's one of those ethical check marks. Again, ethics in general, the way that data's trained, the way that data's handled, the way models actually work, has to be a primary consideration in how you actually go about developing and delivering AI. That said, we cannot get so over-indulgent in our fear of the ethical outcomes that we decide we can't do it at all. We have to find some middle ground, and we have to find it quickly and collectively.

>> Charna, what's your take on this? Ethics is super important to set the agenda for society to take advantage of all this.

>> Yeah. I think we've got three ethical components here. We certainly have, as John mentioned, the data sets. However, it's also about what behavior we're trying to change. So I believe the industry could benefit from a lot more behavioral science, so that we can understand whether or not the algorithms that we're building are changing behaviors that we actually want to change, right? And if we aren't, that's unethical. There is an entire field of ethics that needs to start getting put into our companies. We need an ethics board internally. A few companies are doing this already, actually. I know a lot of the military companies do. I used to be in the defense industry, and so they've got a board of ethics before you can do things. The challenge, though, is also that as we're democratizing the algorithms themselves, people don't understand that you can't just grab a set of data that mirrors the population. This is true of image processing: if we only used 100 images of a black woman and 1,000 images of a white man, because that was the distribution in our population, then the algorithm couldn't detect the differences between skin tones for people of color, and we'd end up in a police-state situation where you put in an image of one black woman, it matches ten others, and you can't distinguish between them. And yet the humans' confidence is actually higher, because they now have a machine backing their decision. And so they stop questioning, to your point, Ken, the decision they're making; they think, I'm so confident, this data told me so. So you need some expert in the loop, but you also can't just have experts, because then you end up with Cambridge Analytica and all of the political things that happened there, not just in the US but across 200 different elections in 30 different countries. And we're upset because it happened in the US, but this has been happening for years. So it's just this ethical challenge of behavior change. It's not even AI, and we do it all the time. It's why the cigarette industry is regulated. (laughs)
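[Editor's note: a toy sketch of the 1,000-versus-100 skew Charna describes. A model fit on the pooled data can look accurate overall while doing much worse on the under-represented group; up-weighting that group's samples is one crude mitigation, though the real fix is representative data. All numbers and features here are synthetic.]

```python
# Synthetic illustration of group imbalance: 1,000 samples from group A,
# 100 from group B, with slightly different distributions per group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X_a = rng.normal(0.0, 1.0, (1000, 5))
X_b = rng.normal(0.5, 1.5, (100, 5))
y_a = (X_a.sum(axis=1) > 0.0).astype(int)
y_b = (X_b.sum(axis=1) > 1.0).astype(int)  # group B follows a different rule

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])

# Naive fit: group B barely influences the loss, so errors there are cheap.
naive = LogisticRegression().fit(X, y)

# Up-weight group B tenfold so both groups contribute equally to the loss.
weights = np.concatenate([np.full(1000, 1.0), np.full(100, 10.0)])
reweighted = LogisticRegression().fit(X, y, sample_weight=weights)

for name, model in [("naive", naive), ("reweighted", reweighted)]:
    print(name,
          "| group A acc:", round(accuracy_score(y_a, model.predict(X_a)), 3),
          "| group B acc:", round(accuracy_score(y_b, model.predict(X_b)), 3))
```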
>> So Ken, what's your take on this? Obviously society needs to have ethics. Who runs that? Companies? The lawmakers? Someone's got to be responsible.

>> I'm honestly a little pessimistic that the general public will even demand this the way we're maybe hoping they will. When I think about an example like Facebook, people being willing to give away insane amounts of data through social media companies for the smallest of benefits: keeping in touch with people from high school they don't like. I mean, it really shows how little we value not being the product in this kind of situation. But I would like to see these kinds of ethical decisions being made at the company level. I feel like Google kind of surreptitiously moved away from its little "don't be evil" mantra, with the subtext that, eh, maybe we'll be a little evil now. It just reminds me of Manhattan Project-era thinking, where you could've gone to any of these nuclear scientists and said, you're working on a real interesting puzzle here, it might advance the field, but like 200,000 civilians might die this summer. And I feel like they would've just looked at you and thought, that's not really my bailiwick; I'm just trying to solve the fission problem. I would like to see these 10 companies actually having that kind of thinking internally: not being so busy thinking about whether they can do something that they don't wonder if they should.

>> That's a great point. This brings up the question of who is responsible. Almost as if, who is less evil than the other? Google, they don't do evil, but they're less evil than Amazon and Facebook and others. Who is responsible? The companies or the lawmakers? Because if you watch some of the hearings in Washington, D.C., some of the lawmakers we see up there don't know how the internet works, and it's pretty obvious that this is a problem.

>> Yeah, well, that's why Jack Dorsey of Twitter posted yesterday that he banned not just political ads but also issue ads. This isn't something that they're making him do, but he understands that when you're using AI to target people, it's not okay. At some point, while Mark is sitting on (laughs) this committee and giving his testimony, he's essentially asking to be regulated, because he can't regulate himself. He's like, well, everyone's doing it, so I'm going to do it too. That's not an okay excuse. We see this in the labor market, though, actually, where there are existing laws that prevent discrimination. It's actually the company's responsibility to make sure that the products they purchase from any vendor aren't introducing discrimination into that process. So it's not even the vendor that's held responsible; it's the company and their use of it. We saw in the NYPD, actually, that one of those image recognition systems came up, and someone said, well, he looked like, I forget the name of the actor, but some actor's name is what the perpetrator looked like, and so they used an image of the actor to try and find the person who actually assaulted someone else. And that, it's also the user problem that I'm super concerned about.

>> So John, what's your take on this? Because these companies are in business to make money, for profit; they're not the government. And what's the role, what should the government do? AI has to move forward.

>> Yeah, we're all responsible. The companies are responsible. The companies that we work with, I have yet to interact with customers, or with our customers here, that have some insidious goal, that they're trying to outsmart their customers. They're not. Everyone's looking to do the best and deliver the most relevant products in the marketplace. The government, they absolutely...
The political structure we have, it has to be really intelligent, and it's got to get up-skilled in this space, and it needs to do it quickly, both at the economy level as well as for our defense. But the individuals, all of us as individuals, we are already subjected to this type of artificial intelligence in our everyday lives. Look at streaming, streaming media. Right now every single one of us goes out through a streaming source, and we're getting recommendations on what we should watch next. And we're already adapting to these things; I am. I'm like, stop showing me all the stuff you know I want to watch; that's not interesting to me. I want to find something I don't know I want to watch, right? So we all have to get educated; we're all responsible for these things. And again, I see a much more positive side of this. I'm not trying to get into the fear-mongering side of all the things that could go wrong; I want to focus on the good stories, the positive stories. If I'm in a courtroom and I lose a court case because I couldn't afford the best attorney and I have the bias of a judge, I would certainly like artificial intelligence to make a determination that allows me to drive an appeal, as one example. Things like that are the really creative things we need to do in the world. Tamping down this wild speculation we have in the markets. I mean, we are all victims of really bad data decisions right now, almost the worst data decisions. For me, I see this as a way to actually improve all those things. Fraud fees will be reduced. That helps everybody, right? Less speculation and fewer of these wild swings; these are all helpful things.

>> Well, Ken, John and Charna, thank- (audio feedback)

>> Go ahead, finish. Get that word in.

>> Sorry. I think the point you were making, though, John, is that we are still a capitalist society, but we're no longer a shareholder capitalist society; we are a stakeholder capitalist society, and the stakeholder is society itself. It is us, it's what we want to see. And so yes, I still want money. Obviously there are things that I want to buy, but I also care about well-being. I think it's that little shift that we're seeing that is actually you and I holding our own teams accountable for what they do.

>> Yeah, culture-first is a whole new shift going on in these companies, for-profit but mission-based. Ken, John, Charna, thanks for coming on Around the CUBE, Unpacking AI. Let's go around the CUBE, Ken, John and Charna, in that order, and just real quickly on unpacking AI, what's your final word?

>> (laughs) I really... I'm interested in John's take that there's a democratization coming, provided these tools will be available to everyone. I would certainly love to believe that. It seems like in the past we've seen, no, access to these kinds of powerful, paradigm-changing tools tends to be concentrated among a very small group of people, and the benefits accrue to a very small group of people. But I hope that doesn't happen here. You know, I'm optimistic as well. I like the utopian side, where we all have this amazing access to information and so many new problems can get solved with amazing amounts of data that we never could've touched before. Though, you know, I think about that. I try to let that help me sleep at night, and not the fact that, you know... every public figure I see on TV is kind of out of touch about technology, and only one candidate suggests universal basic income, and it's kind of a crackpot idea. Those are the kind of things that keep me up at night.
>> All right, John, final word.

>> I think it's beautiful; AI's beautiful. We're on the cusp of a whole new world, and it's nothing but positivity I see. We have to be careful. We're all nervous about it. None of us knows how to approach these things, but as human beings, we've been here before. We're here all the time. And I believe that we can all collectively get better lives for ourselves, for the environment, for everything that's out there. It's here, it's now, it's definitely real. I encourage everyone to hurry up on their own education, and every company, every layer of government, to start really embracing these things and start paying attention. It's catching us all a little bit by surprise, but once you see it in production, you see it real, you'll be impressed.

>> Okay, Charna, final word.

>> I think one thing I want to leave people with is: what we incentivize is what we end up optimizing for. This is the same for human behavior. You're training a new employee, you put incentives on the way that they sell, and they game the system. AIs specifically find the optimum route; that is their job. So if we don't understand more complex cost functions, more complex, representative ways of training, we're going to end up in a space, before we know it, that we can't get out of, especially if we're using uninspectable AI. We really need to move towards augmentation. There are some companies implementing this now that you may not even know about. Zillow, for example, is using AI to give you a cost for your home just from the photos and the words you describe it with, but they're also purchasing houses without a human in the loop in certain markets, based upon an inspection later by a human. And so there are these big bets being made within these massive corporations, but if you're going to do it as an individual, take a Coursera class on AI and take a Coursera class on ethics, so that you can understand what the pitfalls are going to be, because that cost function is incredibly important.

>> Okay, that's a wrap. Looks like we have a winner here. Charna, you got 18, John 16. Ken came in with 12, beaten again! (both laugh) Okay, Ken, seriously, great to have you guys on, a pleasure to meet everyone. Thanks for sharing on Around the CUBE, Unpacking AI, panel number two. Thank you.

>> Thanks a lot.

>> Thank you.

>> Thanks. I've been defeated by artificial intelligence again! (all laugh)

(upbeat music)
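[Editor's note: Charna's closing point, that an optimizer finds whatever the cost function actually rewards, is easy to demonstrate. Below is a toy sketch in which a proxy metric (say, raw clicks) is optimized in place of the true goal (say, long-term satisfaction); both functions are invented for illustration.]

```python
# A toy misspecified cost function: the optimizer happily maximizes the
# proxy metric and lands far from what we actually wanted.
from scipy.optimize import minimize_scalar

def true_value(x):
    # What we actually care about: peaks at x = 3, falls off past it.
    return -(x - 3.0) ** 2 + 9.0

def proxy_metric(x):
    # What we measured and rewarded: grows without bound, so pushing x
    # higher always "looks" better to the optimizer.
    return 2.0 * x

# Maximize the proxy over the allowed range [0, 10].
res = minimize_scalar(lambda x: -proxy_metric(x), bounds=(0, 10), method="bounded")
x_star = res.x

print(f"proxy-optimal x:       {x_star:.2f}")              # ~10.0
print(f"true value there:      {true_value(x_star):.2f}")  # deeply negative: gamed
print(f"true value at x = 3.0: {true_value(3.0):.2f}")     # what we actually wanted
```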

Published Date: Oct 31, 2019

