Manoj Suvarna, Deloitte LLP & Arte Merritt, AWS | Amazon re:MARS 2022
(upbeat music) >> Welcome back, everyone. It's theCUBE's coverage here in Las Vegas. I'm John Furrier, your host of theCUBE with re:MARS. Amazon re:MARS stands for machine learning, automation, robotics, and space. A lot of great content and accomplishments. AI meets robotics and space, industrial IoT, all things data. And we've got two great guests here to unpack the AI side of it. Manoj Suvarna, Managing Director of AI Ecosystem at Deloitte, and Arte Merritt, Conversational AI Lead at AWS. Manoj, it's great to see you, CUBE alumni. Arte, welcome to theCUBE. >> Thanks for having me. I appreciate it. >> So AI's the big theme. Actually, the big disconnect in the industry has been the industrial OT versus IT, and that's happening. Now you've got space and robotics meets what we know is machine learning and AI, which we've been covering. This is the confluence of the new IoT market. >> It absolutely is. >> What's your opinion on that? >> Yeah, so actually it's taking IoT beyond the art of the possible. One area that we have been working very closely with AWS. We're in a strategic alliance with them. And for the past six years, we have been investing a lot in transformations. Transformation as it relates to the cloud, transformation as it relates to data modernization. The new edge is essentially on AI and machine learning. And just this week, we announced a new solution which is more focused around enhancing contact center intelligence. So think about the edge of the contact center, where we all have experiences around dealing with customer service, and how to really take that to the next level, challenges that clients are facing in every part of that business. So clearly. >> Well, Conversational AI is a good topic. Talk about the relationship with Deloitte and Amazon for a second around AI, because you guys have some great projects going on right now that are well ahead of the curve on solving the scale problem, 'cause there's a practical problem and then a scale problem.
What's the relationship with Amazon and Deloitte? >> We have a great alliance and relationship. Deloitte brings that expertise to help folks build high-quality, highly effective conversational AI, and enterprises are implementing these solutions to really try to improve the overall customer experience. So they want to help agents improve productivity, gain insights into the reasons why folks are calling, but it's really to provide that better user experience, being available 24/7 on the channels users prefer to interact on. And the solutions that Deloitte is building are highly advanced, super exciting. Like, when we show demos of them to potential customers, their eyes light up and they want those solutions. >> John: Give an example when their eyes light up. What are you showing there? >> One solution, it's called multimodal interfaces. So what this is, is when you call into, like, a voice IVR, Deloitte's solution will send the folks, say, a mobile app or a website. So the person can interact with both the phone, touching on the screen, and the voice, and it's all kept in sync. So imagine you call the doctor's office, or say I was calling an airline and I want to change my flight, or sorry, change the seat. If they were to say, "Seat 20D is available," well, I don't know what that means, but if you see the map while you're talking, you can say, "Oh, 20D is the aisle. I'm going to select that." So Deloitte's doing those kinds of experiences. It's incredible. >> Manoj, this is where the magic comes into play, when you bring data together and you have integration like this. Asynchronously or synchronously, it's all coming together. You have different platforms, phone, voice, siloed databases potentially, the old way. Now, the new way is integrating. What makes it all work? What's the key to success? >> Yeah, it's certainly not a trivial feat, bringing together all of these ecosystems of relationships and technologies, all put together. We cannot do it alone.
This is where we partner with AWS and with some of our other partners like Salesforce and OneReach, really trying to bring a symphony of some of these solutions to bear. When you think about, going back to the example of the contact center, the challenge that the pandemic posed in the last couple of years was the fact that there was a humongous rise in the volume of calls. You can imagine people calling in asking for all kinds of different things, whether it's airlines, whether it's a doctor's office, or retail. And then coupled with that is the fact that there's the labor shortage. And how do you train agents to get them to be productive enough to be able to address hundreds or thousands of these calls? And so that's where we have invested in those solutions, bringing those technologies together to address real client problems, not just slideware but actual production environments. And that's where we launched this solution called TrueServe as of this week, which is really a multimodal solution that is built with pre-built technologies and libraries, where we can then be industry agnostic and be able to deliver those experiences to our clients based on whatever vertical or industry they're in. >> Take me through the client engagement here, because I can imagine they want to get a practical solution. They're going to want to have it up and running, not just a chatbot, but a completely integrated system. What's the challenge, and what's the first set of milestones that you see them hit? Do they just get the data together? Are they deploying a software solution? What are the use cases? >> There's a couple different use cases. We see there's the self-service component that we're talking about with the chatbots or voice IVR solutions. There's also use cases for helping the agents, so real-time agent assist.
So you call into a contact center, it's transcribed in real time, run through some sort of knowledge base to give the agents possible answers to help the user out, tying in, say, the Salesforce data, CRM data, to know more about the user. Like, if I was to call the airline, it's going to say, "Are you calling about your flight to San Francisco tomorrow?" It knows who I am. It leverages that stuff. And then the key piece is the analytics, knowing why folks are calling, not just your metrics around length of calls or deflections, but what were the reasons people were calling in, because you can use that data to improve your underlying products or services. These are the things that enterprises are looking for, and this is where someone like Deloitte comes in, brings that expertise, speeds up the time to market, and really helps the customers. >> Manoj, what was the solution you mentioned that you guys announced? >> Yeah, so this is called Deloitte TrueServe. And essentially, it's a combination of multiple different solutions from AWS, from Salesforce, from OneReach, all put together with our joint engineering and really delivering that capability. Enhancing that is the analytics component, which is really critical, especially because when you think about the average contact center, less than 10% of the data gets analyzed today, and how do you then extract value out of that data and be able to deliver business outcomes? >> I was just talking to someone the other day about Zoom. Everyone records their Zoom meetings, and no one watches them. I mean, who's going to wade through that? Call centers are even more high volume. We're talking about massive data. And so will you guys automate that? Do you go through every single piece of data, every call, and bring it down? Is that how it works? >> Go ahead. >> These are just some of the things you can do.
Analyze the calls for common themes, like figuring out, like, topic modeling, what are the reasons people are calling in. Summarizing that stuff so you can see what those underlying issues are. And so that could be, like I was mentioning, improving the product or service. It could also be for helping train the agents. So here's how to answer that question. And it could even be reinforcing positive experiences. Maybe an agent had a particularly great call, and that could be a reference for other folks. >> Yeah, and also during the conversation, when you think about, within 60 to 90 seconds, how do you identify the intonation, the sentiment of the client or customer calling in, and be able to respond in real time to the challenges that they might be facing, and the ability to authenticate the customer and at the same time be able to respond to them. I think that is the advancement that we are seeing in the market. >> I think also your point about the data having residual value is also excellent, because there is a long tail of value in this data, like for predictions and stuff. So NASA was just on before you guys came on, talking about the Artemis project and all the missions, and they have to run massive amounts of simulations. And this is where I've kind of seen the dots connect here. You can run with AI, run all the heavy lifting without a human touching it to get that first ingestion or analysis, and then iterate on the data based upon what else happens. >> Manoj: Absolutely. >> This is now the new normal, right? Is this? >> It is. And it traverses across multiple domains. So the example we gave you was around Conversational AI. We're now looking at that for doing predictive analytics. Those are some examples that we are doing jointly with AWS SageMaker. We are working on things like computer vision with some of the capabilities and what computer vision has to offer.
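The real-time sentiment detection Manoj describes, gauging a caller's mood within the first 60 to 90 seconds and deciding whether to escalate, can be sketched in a few lines. This is a minimal illustration only: the keyword lexicon and the escalation threshold below are made-up assumptions, not what a production contact center would use (a real deployment would call a trained model or a managed NLP service rather than match keywords). The sketch just shows the shape of the scoring and escalation logic.

```python
# Illustrative lexicons -- assumptions for this sketch, not a real sentiment model.
NEGATIVE = {"angry", "cancel", "refund", "terrible", "waiting", "frustrated"}
POSITIVE = {"thanks", "great", "perfect", "helpful", "resolved"}


def sentiment_score(utterance: str) -> float:
    """Return a score in [-1, 1]; negative suggests an upset caller."""
    words = utterance.lower().split()
    if not words:
        return 0.0
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    return (pos - neg) / len(words)


def should_escalate(transcript_window: list[str], threshold: float = -0.05) -> bool:
    """Flag the call for a human supervisor if the rolling window trends negative.

    `transcript_window` is the last minute or so of transcribed utterances;
    the threshold value is an arbitrary assumption for the sketch.
    """
    scores = [sentiment_score(u) for u in transcript_window]
    return sum(scores) / len(scores) < threshold
```

In a live system the transcript window would be fed continuously from streaming speech-to-text, and the escalation flag would surface in the agent-assist console rather than being returned to the caller.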
And so when you think about the continuum of possibilities of what we can bring together from a tools, technology, and services perspective, really the sky is the limit in terms of delivering these real experiences to our clients. >> So take me through a customer. Pretend I'm a customer; I get it. I've got to do this. It's a competitive advantage. What are the outcomes that they are envisioning? What are some of the patterns you're seeing with customers? What outcomes are they expecting, and what kind of high-level upside do you see them envisioning coming out of the data? >> So when you think about the CxOs today and the board, a lot of them are thinking about, okay, how do you build more efficiency into those systems? How do you enable a technology or solution for them to not only increase their top line but also their bottom line? How do you enhance the customer experience? Which in this case is spot on, because when you think about it, when customers come back to a vendor, it's based on quality, it's based on price. Customer experience is now topping that, where your first experience, whether it's through a chat or a virtual assistant or a phone call, is going to determine the longevity of that customer with you as a vendor. And so clearly, when you think about how clients are becoming AI-fueled, this is where we are bringing in new technologies, new solutions, to really push the art to the limit and the art of the possible. >> You got a playbook, too, to do this? >> Yeah, yeah, absolutely. We have done that. And in fact, we are now taking that to the next level up. So something that I've mentioned before, which is how do you trust an AI system as it's building up. >> Hold on, I need to plug in. >> Yeah, absolutely. >> I put this here for a reason, to remind me. No, but also trust is a big thing. Just put that, trustworthy. This is an AI ethics question. >> Arte: It's a big one. >> Let's get into it. This is huge. Data's data.
Data can be biased coming in. >> Part of it, there are concerns; you have to look at the bias in the data. It's also how you communicate through these automated channels, being empathetic, building trust with the customer, being concise in the answers, and being accessible to all sorts of different folks and how they might communicate. So it's definitely a big area. >> I mean, you think about just normal life. We've all lived situations where we got a text message from a friend or someone close to us where, what the hell, what are you saying? And they had no bad feelings about it, or, well, there are misunderstandings 'cause the context isn't there, 'cause you're rapid-firing them on the subway. I'm riding my bike. I stop and text, okay, I'm okay. A terse response could mean I'm busy or I'm angry. Like, this is now, what you said about empathy, this is now a new dynamic in here. >> Oh, the empathy is huge, especially if you're, say, a financial institution, building that trust with folks and being empathetic. If someone's reaching out to a contact center, there's a good chance they're upset about something. So you have to take that. >> John: Calm them down first. >> Yeah, and not being, like, false, platitude kind of things, like really being empathetic, being inclusive in the language. Those are things that you have conversation designers and linguistics folks really look into. That's why having domain expertise from folks like Deloitte come in helps with that, 'cause maybe if you're just building the chat on your own, you might not think of those things. But the folks with the domain expertise will say, like, "Hey, this is how you script it." It's the power of words, getting that message across clearly. >> The linguistics matter? >> Yeah, yeah. >> It does. >> By vertical too. I mean, you could pick any tribe, whatever orientation and age, demographics, genders. >> All of those things that we take for granted as humans.
When you think about trust, when you think about bias, when you think about ethics, it just gets amplified, because now you're dealing with millions and millions of data points that may or may not be in the right direction in terms of somebody calling in, depending on what age group they're in. Some questions might not be relevant for that age group. Now, a human can determine that, but a bot cannot. And so how do you make sure that when you look at this data coming in, how do you build models that are ethically aware of the contextual algorithms and the alignment with it, and also enable that experience to be much enhanced rather than taking it backwards? And that's really. >> I can imagine it getting better as people get scaled up a bit, 'cause then you're going to have to start having AI watch the AI at some point, as they say. Where are we in the progress in the industry right now? Because I know there's been a lot of news stories around ethics and AI and bias, and it's a moving train actually, but still, problems are going to be solved. Are we at the tipping point yet? Are we still walking before we crawl, or crawling before we walk, I should say? I mean, where are we? >> I think we are in between a crawling and a walking phase. And the reason for that is because it varies depending on whether you're a regulated industry or unregulated. In a regulated industry, there are compliance regulations and requirements, whether it's government, whether it's banking, financial institutions, where they have to meet Sarbanes-Oxley and all kinds of compliance requirements, whereas in an unregulated industry like retail and consumer, it is anybody's game. And so the reality of it is that there is more of an awareness now. And that's one of the reasons why we've been promoting this jointly with AWS. We have a framework that we have established where there are multiple pillars of trust, bias, privacy, and security that companies and organizations need to think about.
Our data scientists and ML engineers need to be familiar with it, because while they're super great in terms of model building and development, when it comes to the business, when it comes to the client or a customer, it is super important for them to trust this platform, this algorithm. And that is where we are trying to build that momentum, bring that awareness. One of my colleagues has written this book, "Trustworthy AI". We're trying to take the message out to the market to say, there is a framework. We can help you get there. And certainly that's what we are doing. >> Just call Deloitte up, and you're going to take care of them. >> Manoj: Yeah. >> On the Amazon side, Amazon Web Services. I always interview Swami every year at re:Invent and always get the updates. He's been bullish on this for a long time, on this Conversational AI. What's the update on the AWS side? Where are you guys at? What are the current trends that you're riding? What wave are you riding right now? >> So some of the trends we see in customer interest, there's a couple of things. One is the multimodal interfaces we were just chatting about, where the voice IVR is synced with, like, a web or mobile experience, so you take full advantage of the device. The other is adding additional AI into the Conversational AI. So one example is a customer that included intelligent document processing as part of the chatbot. So instead of typing your name and address, take a photo of your driver's license. It was an insurance onboarding chatbot, so you could take a photo of your existing insurance policy, and it'll extract that information to build the new insurance policy. So folks get excited about that. And the third area we see interest in is what's called multi-bot orchestration. And this is where you can have one main chatbot marshal users across different sub-chatbots based on the use case, persona, or even language.
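The multi-bot orchestration pattern Arte describes, one main chatbot marshaling users to sub-chatbots, can be sketched as a simple router. The sub-bot names and keyword vocabularies below are illustrative assumptions using the banking example from the conversation; a real deployment would use trained intent classifiers behind each sub-bot rather than keyword matching. The sketch only shows the dispatch logic of the main bot.

```python
# Illustrative sub-bot vocabularies -- assumptions for this sketch, not a real
# intent model. Each sub-bot "owns" a set of words it is likely to handle.
SUB_BOTS = {
    "consumer": {"checking", "savings", "debit", "account"},
    "merchant": {"merchant", "payments", "terminal", "settlement"},
    "investment": {"portfolio", "brokerage", "trade", "invest"},
}


def route(utterance: str, default: str = "consumer") -> str:
    """Pick the sub-bot whose vocabulary best overlaps the utterance.

    Falls back to `default` when nothing matches, which is where a real
    orchestrator would instead ask a clarifying question ("Which account
    do you mean?").
    """
    words = set(utterance.lower().split())
    scores = {bot: len(words & vocab) for bot, vocab in SUB_BOTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default
```

The ambiguous case from the transcript ("I want to open an account") is exactly where keyword overlap ties across sub-bots, which is why production orchestrators carry session context and can hand the user back to the main bot for disambiguation.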
So those things get people really excited, and then AWS is launching all sorts of new features. I don't know which ones are coming out. >> I know something's coming out tomorrow. He's right around the corner. He's got a big smile on his face. He wouldn't tell me. It's good. >> For folks like us with the closer alliance relationships, we're able to get previews. So there's a preview of all the new stuff. And I don't know what I can say, but it's pretty exciting stuff. >> You'll get in trouble if you spill the beans here. Don't, be careful. I'll watch you. We'll talk off camera. All exciting stuff. >> Yeah, yeah. I think the orchestrator bot is interesting. Having the ability to orchestrate across different contextual datasets is interesting. >> One of the areas where it's particularly interesting is in financial services. Imagine a bank could have consumer accounts, merchant accounts, investment banking accounts. So if you were to chat with the chatbot and say, "I want to open an account," well, which account do you mean? And so it's able to figure out that context to navigate folks to those sub-chatbots behind the scenes. And so it's pretty interesting. >> Awesome. Manoj, while we're here, take a minute to quickly give a plug for Deloitte. What's your program about? What should customers expect if they work with you guys on this project? Give a quick commercial for Deloitte. >> Yeah, no, absolutely. I mean, Deloitte has been continuing to lead the AI field organization effort across our client base. If you think about all the Fortune 100, Fortune 500, Fortune 2000 clients, we certainly have them where they are in advanced stages of multiple deployments for AI. And we look at it all the way from strategy to implementation to operational models. So clients don't have to do it alone.
And we are continuing to build our ecosystem of relationships, partnerships like the alliances that we have with AWS, building the ecosystem of relationships with other emerging startups, to your point about how do you continue to innovate and bring those technologies to your clients in a trustworthy environment, so that we can deliver at production scale. That is essentially what we're driving. >> Well, Arte, that's a great conversation, and the AI will take over from here as we end the segment. I see a bot coming on theCUBE later, and theCUBE might be replaced with robots. >> Right, right, right, exactly. >> I'm John Furrier, calling from Palo Alto. >> Someday, CUBE bot. >> You can just say, "Alexa, do my demo for me," or whatever it is. >> Or a digital twin for John. >> We're going to have a robot on to do a CUBE interview, and that's Dave Vellante. He'd just pipe his voice in, and it'd be fun. Well, thanks for coming on, great conversation. >> Thank you. Thanks for having us. >> CUBE coverage here at re:MARS in Las Vegas. Back to the event circuit. We're back in the line. Got re:Inforce, and don't forget re:Invent at the end of the year. CUBE coverage of this exciting show here. Machine learning, automation, robotics, space. That's MARS; it's re:MARS. I'm John Furrier. Thanks for watching. (gentle music)
Around theCUBE, Unpacking AI Panel, Part 3 | CUBEConversation, October 2019
(upbeat music) >> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE conversation. >> Hello, and welcome to theCUBE Studios here in Palo Alto, California. We have a special Around theCUBE segment, Unpacking AI. This is a Get Smart Series. We have three great guests. Rajen Sheth, VP of AI and Product Management at Google. He knows well the AI development for Google Cloud. Dr. Kate Darling, research specialist at MIT media lab. And Professor Barry O'Sullivan, Director SFI Centre for Training AI, University of College Cork in Ireland. Thanks for coming on, everyone. Let's get right to it. Ethics in AI as AI becomes mainstream, moves out to the labs and computer science world to mainstream impact. The conversations are about ethics. And this is a huge conversation, but first thing people want to know is, what is AI? What is the definition of AI? How should people look at AI? What is the definition? We'll start there, Rajen. >> So I think the way I would define AI is any way that you can make a computer intelligent, to be able to do tasks that typically people used to do. And what's interesting is that AI is something, of course, that's been around for a very long time in many different forms. Everything from expert systems in the '90s, all the way through to neural networks now. And things like machine learning, for example. People often get confused between AI and machine learning. I would think of it almost the way you would think of physics and calculus. Machine learning is the current best way to use AI in the industry. >> Kate, your definition of AI, do you have one? >> Well, I find it interesting that there's no really good universal definition. And also, I would agree with Rajen that right now, we're using kind of a narrow definition when we talk about AI, but the proper definition is probably much more broad than that. So probably something like a computer system that can make decisions independent of human input. 
>> Professor Barry, your take on the definition of AI, is there one? What's a good definition? >> Well, you know, so I think AI has been around for 70 years, and we still haven't agreed the definition for it, as Kate said. I think that's one of those very interesting things. I suppose it's really about making machines act and behave rationally in the world, ideally autonomously, so without human intervention. But I suppose these days, AI is really focused on achieving human level performance in very narrowly defined tasks, you know, so game playing, recommender systems, planning. So we do those in isolation. We don't tend to put them together to create the fabled artificial general intelligence. I think that's something that we don't tend to focus on at all, actually if that made sense. >> Okay the question is that AI is kind of elusive, it's changing, it's evolving. It's been around for awhile, as you guys pointed out, but now that it's on everyone's mind, we see it in the news every day from Facebook being a technology program with billions of people. AI was supposed to solve the problem there. We see new workloads being developed with cloud computing where AI is a critical software component of all this. But that's a geeky world. But the real world, as an ethical conversation, is not a lot of computer scientists have taken ethics classes. So who decides what's ethical with AI? Professor Barry, let's start with you. Where do we start with ethics? >> Yeah, sure, so one of the things I do is I'm the Vice-Chair of the European Commission's High-Level Expert Group on Artificial Intelligence, and this year we published the Ethics Guidelines for Trustworthy AI in Europe, which is all about, you know, setting an ethical standard for what AI is. 
You're right, computer scientists don't take ethics classes, but I suppose what we are faced with here is a technology that's so pervasive in our lives that we really do need to think carefully about the impact of that technology on, you know, human agency and human well-being, on societal well-being. So I think it's right and proper that we're talking about ethics at this moment in time. But, of course, we do need to realize that ethics is not a panacea, right? So it's certainly something we need to talk about, but it's not going to solve, it's not going to rid us of all of the detrimental applications or usages of AI that we might see today. >> Kate, your take on ethics. Start all over, throw out everything, build on it, what do we do? >> Well, what we do is we get more interdisciplinary, right? I mean, because you asked, "Who decides?" Until now, it has been the people building the technology who have had to make some calls on ethics. And it's not, you know, it's not necessarily the way of thinking that they are trained in, and so it makes a lot of sense to have projects like the one that Barry is involved in, where you bring together people from different areas of expert... >> I think we lost Kate there. Rajen, why don't you jump in, talk about-- >> (muffled speaking) you decide issues of responsibility for harm. We have to look at algorithmic bias. We have to look at supplementing versus replacing human labor. We have to look at privacy and data security. We have to look at the things that I'm interested in, like the ways that people anthropomorphize the technology and use it in a way that's perhaps different than intended. So, depending on what issue we're looking at, we need to draw from a variety of disciplines. And fortunately, we're seeing more support for this within companies and within universities as well. >> Rajen, your take on this.
So, I think one thing that's interesting is to step back and understand why this moment is so compelling and why it's so important for us to understand this right now. And the reason for that is that this is the moment where AI is starting to have an impact on the everyday person. Anytime I present, I put up a slide of the Mosaic browser from 1994, and my point is that that's where AI is today. It's at the very beginning stages of how we can impact people, even though it's been around for 70 years. And what's interesting about ethics is we have an opportunity to do that right from the beginning, right now. I think that there's a lot that you can bring in from the way that we think about ethics overall. For example, in our company, can you all hear me? >> Yep. >> Mm-hmm. >> Okay, we've hired an ethicist within our company, from a university, to actually bring in the general principles of ethics and bring that into AI. But I also do think that things are different. So, for example, bias is an ethical problem. However, bias can be encoded, and actually given more legitimacy, when it's encoded in an algorithm. So, those are things that we really need to watch out for, where I think it is a little bit different and a little bit more interesting. >> This is a great point-- >> Let me just-- >> Oh, go ahead. >> Yeah, just one interesting thing to bear in mind, and I think Kate said this, and I just want to echo it, is that AI is becoming extremely multidisciplinary. And I think it's no longer just a technical issue. Obviously there are massive technical challenges, but it's now become as much an opportunity for people in the social sciences, the humanities, ethics people. Legal people, I think, need to understand AI. And in fact, I gave a talk recently at a legal symposium, and the idea of a parallel track of people who have legal expertise and AI expertise, I think that's a really fantastic opportunity that we need to bear in mind.
So, unfortunately us nerds, we don't own AI anymore. You know, it's something we need to interact with the real world on a significant basis. >> You know, I want to ask a question, because you know, the algorithms, everyone talks about the algorithms and the bias and all that stuff. It's totally relevant, great points on interdisciplinary, but there's a human component here. As AI starts to infiltrate the culture and hit everyday life, the reaction to AI sometimes can be, "Whoa, my job's going to get automated away." So, I got to ask you guys, as we deal with AI, is that a reflection on how we deal with it to our own humanity? Because how we deal with AI from an ethics standpoint ultimately is a reflection on our own humanity. Your thoughts on this. Rajen, we'll start with you. >> I mean it is, oh, sorry, Rajen? >> So, I think it is. And I think that there are three big issues that I see that I think are reflective of ethics in general, but then also really are particular to AI. So, there's bias. And bias is an overall ethical issue that I think this is particular here. There's what you mentioned, future of work, you know, what does the workforce look like 10 years from now. And that changes quite a bit over time. If you look at the workforce now versus 30 years ago, it's quite a bit different. And AI will change that radically over the next 10 years. The other thing is what is good use of AI, and what's bad use of AI? And I think one thing we've discovered is that there's probably 10% of things that are clearly bad, and 10% of things that are clearly good, and 80% of things that are in that gray area in between where it's up to kind of your personal view. And that's the really really tough part about all this. >> Kate, you were going to weigh in. >> Well, I think that, I'm actually going to push back a little, not on Rajen, but on the question. 
Because I think that one of the fallacies that we are constantly engaging in is we are comparing artificial intelligence to human intelligence, and robots to people, and we're failing to acknowledge sufficiently that AI has a very different skillset than a person. So, I think it makes more sense to look at different analogies. For example, how have we used and integrated animals in the past to help us with work? And that lets us see that the answer to questions like, "Will AI disrupt the labor market?" "Will it change infrastructures and efficiencies?" The answer to that is yes. But will it be a one-to-one replacement of people? No. That said, I do think that AI is a really interesting mirror that we're holding up to ourselves to answer certain questions like, "What is our definition of fairness?" for example. We want algorithms to be fair. We want to program ethics into machines. And what it's really showing us is that we don't have good definitions of what these things are even though we thought we did. >> All right, Professor Barry, your thoughts? >> Yeah, I think there's many points one could make here. I suppose the first thing is that we should be seeing AI, not as a replacement technology, but as an assistive technology. It's here to help us in all sorts of ways to make us more productive, and to make us more accurate in how we carry out certain tasks. I think, absolutely the labor force will be transformed in the future, but there isn't going to be massive job loss. You know, the technology has always changed how we work and play and interact with each other. You know, look at the smart phone. The smart phone is 12 years old. We never imagined in 2007 that our world would be the way it is today. So technology transforms very subtly over long periods of time, and that's how it should be. I think we shouldn't fear AI. I think the thing we should fear most, in fact, is not Artificial Intelligence, but is actual stupidity. 
So I would encourage people not to think negatively. It's very easy to talk negatively and think negatively about AI because it is such an impactful and promising technology, but I think we need to keep it real a little bit, right? So there's a lot of hype around AI that we need to sort of see through and understand what's real and what's not. And that's really some of the challenges we have to face. And also, one of the big challenges we have is how do we educate the ordinary person on the street to understand what AI is, what it's capable of, when it can be trusted, and when it cannot be trusted. Ethics gets us some of the way there, but it doesn't get us all of the way there. We need good old-fashioned engineering to make people trust in the system. >> That was a great point. Ethics is kind of a reflection of that mirror, I love that. Kate, I want to get to that animal comment about domesticating technology, but I want to stay in this culture question for a minute. If you look at some of the major tech companies like Microsoft and others, the employees are revolting around their use of AI in certain use cases. It's a knee-jerk reaction around, "Oh my God, we're using AI, we're harming the world." So, we live in a culture now where it's becoming more mission driven. There's a cultural impact, and to your point about not fearing AI, are people having a certain knee-jerk reaction to AI because you're seeing cultures inside tech companies and society taking an opinion on AI? "Oh my God, it's definitely bad, our company's doing it. We should not service those contracts. Or maybe I shouldn't buy that product because it's listening to me." So, there's a general fear. Does this impact the ethical conversation? How do you guys see this? Because this is an interplay that we see that's a personal, it's a human reaction.
>> Yeah, so if I may start, I suppose, absolutely, you know, the ethics debate is a critical one, and people are certainly fearful. There is this polarization in the debate about good AI and bad AI, but you know, AI is good technology. It's one of these dual-use technologies. It can be applied to bad situations in ways that we would prefer it wasn't. And it can also be a force for tremendous good. So, we need to think about the regulation of AI, what we want it to do from a legal point of view, who is responsible, where does liability lie? We also need to think about what our ethical framework is, and of course, there is no international agreement on what that is, there is no universal code of ethics, you know? So this is something that's very, very heavily contextualized. But I think we generally agree that we want to promote human well-being. We want to have a prosperous society. We want to protect the well-being of society. We don't want technology to impact society in any negative way. It's actually very funny. If you look back about 25 to 30 years ago, there was a technology where people were concerned that privacy was going to be a thing of the past. That computer systems were going to be tremendously biased because data was going to be incomplete and not representative. And there was a huge concern that good old-fashioned databases were going to be the technology that would destroy the fabric of society. That didn't happen. And I don't think we're going to have AI do that either. >> Kate? >> Yeah, we've seen a lot of technology panic, that may or may not be warranted, in the past. I think that AI and robotics suffers from a specific problem in that people are influenced by science fiction and pop culture when they're thinking about the technology. And I feel like that can cause people to be worried about some things that perhaps aren't the things we should be worrying about currently.
Like robots and jobs, or artificial super-intelligence taking over and killing us all, aren't maybe the main concerns we should have right now. But, algorithmic bias, for example, is a real thing, right? We see systems using data sets that disadvantage women, or people of color, and yet the use of AI is seen as neutral even though it's entrenching existing biases. Or privacy and data security, right? You have technologies that are collecting massive amounts of data, because the way learning works is you use lots of data. And so there's a lot of incentive to collect data. As a consumer, there's not a lot of incentive for me to want to curb that, because I want the device to listen to me and to be able to perform better. And so the question is, who is thinking about consumer protection in this space if all the incentives are toward collecting and using as much data as possible? And so I do think there is a certain amount of concern that is warranted, and where there are problems, I endorse people revolting, right? But I do think that we are sometimes a little bit skewed in our, you know, understanding of where we currently are at with the technology, and what the actual problems are right now. >> Rajen, I want your thoughts on this. Education is key. As you guys were talking about, there are some things to pay attention to. How do you educate people about how to shape AI for good, and at the same time calm people's fears, so they're not revolting around misinformation or bad data about what could be? >> Well I think that the key thing here is to organize how you evaluate this. And back to that one thing I was saying a little bit earlier, it's very tough to judge what is good and what is bad. It's really up to personal perception. But the more that you organize how to evaluate this, and then figure out ways to govern this, the easier it gets to evaluate what's in or out.
So one thing that we did was create a set of AI principles, and we codified what we think AI should do, and then we codified areas that we would not go into as a company. The thing is, it's very high level. It's kind of like the constitution, and when you have something like the constitution, you have to get down to actual laws of what you would and wouldn't do. It's very hard to bucket and say, these are good use cases, these are bad use cases. But what we now have is a process around how do we actually take things that are coming in and figure out how do we evaluate them? A last thing that I'll mention is that I think it's very important to have many, many different viewpoints on it. Have viewpoints of people that are taking it from a business perspective, have people that are taking it from a research and an ethics perspective, and all evaluate that together. And that's really what we've tried to create to be able to evaluate things as they come up. >> Well, I love that constitution angle. We'll have that as our final question in a minute, whether we do a reset or not, but I want to get to that point that Kate mentioned. Kate, you're doing research around robotics. And I think robotics is, you've seen robotics surge in popularity; high schools have varsity teams now. You're seeing robotics with software advances and technology advances really become kind of a playful illustration of computer technology and software where AI is playing a role, and you're doing a lot of work there. But as intelligence comes into, say robotics, or software, or AI, there's a human reaction to all of this. So there's a psychological interaction with both AI and robotics. Can you guys share your thoughts on the humanizing interaction with technology? As people stare at their phones today, those could be relationships in the future. And I think robotics might be a signal.
You mentioned domesticating animals as an example back in the early days of when we were (laughing) as a society, that happened. Now we all have pets. Are we going to have robots as pets? Are we going to have AI pets? >> Yes, we are. (laughing) >> Is this kind of the human relationship? Okay, go ahead, share your thoughts. >> So, okay, the thing that I love about robots, and you know, in some applications to AI as well, is that people will treat these technologies like they're alive. Even though they know that they're just machine. And part of that is, again, the influence of science fiction and pop culture, that kind of primes us to do this. Part of it is the novelty of the technology moving into shared spaces, but then there's this actual biological element to it, where we have this inherent tendency to anthropomorphize, project human-like traits, behaviors, qualities, onto non-humans. And robots lend themselves really well to that because our brains are constantly scanning our environments and trying to separate things into objects and agents. And robots move like agents. We are evolutionarily hardwired to project intent onto the autonomous movement in our physical space. And this is why I love robots in particular as an AI use case, because people end up treating robots totally differently. Like people will name their Roomba vacuum cleaner and feel bad for it when it gets stuck, which they would never do with their normal vacuum cleaner, right? So, this anthropomorphization, I think, makes a huge difference when you're trying to integrate the technology, because it can have negative effects. It can lead to inefficiencies or even dangerous situations. For example, if you're using robots in the military as tools, and they're treating them like pets instead of devices. But then there are also some really fantastic use cases in health and education that rely specifically on this socialization of the robot. 
You can use a robot as a replacement for animal therapy where you can't use real animals. We're seeing great results in therapy with autistic children, engaging them in ways that we haven't seen previously. So there are a lot of really cool ways that we can make this work for us as well. >> Barry, your thoughts, have you ever thought that we'd be adopting AI as pets some day? >> Oh yeah, absolutely. Like Kate, I'm very excited about all of this too, and I agree with everything Kate has said. Of course, you know, coming back to the remark you made at the beginning about people putting their faces in their smartphones all the time, you know, we can't crowdsource our sense of dignity, or have social media as the currency for how we value our lives or how we compare ourselves with others. So, you know, we do have to be careful here. Like, one of the really nice examples of an AI system that was given some significant personality was, quite recently, Tuomas Sandholm and others at Carnegie Mellon produced the Libratus poker-playing bot, and this AI system was playing against these top-class Texas hold 'em players. And all of these Texas hold 'em players were attributing a level of cunning and sophistication and mischief to this AI system that it clearly didn't have, because it was essentially just trying to behave rationally. But we do like to project human characteristics onto AI systems. And I think what would be very nice, and something we need to be very, very careful of, is that when AI systems are around us, and particularly robots, you know, we do need to treat them with respect. And what I mean is, we should make sure that we treat those things that are serving society in as positive and nice a way as possible. You know, I do judge people on how they interact with, you know, sort of the least advantaged people in society.
And you know, by golly, I will judge you on how you interact with a robot. >> Rajen-- >> We've actually done some research on that, where-- >> Oh, really-- >> We've shown that if you're low in empathy, you're more willing to hit a robot, especially if it has a name. (panel laughing) >> I love all my equipment here, >> Oh, yeah? >> I got to tell ya, it's all beautiful. Rajen, computer science, and now AI is having this kind of humanization impact, this is an interesting shift. I mean, this is not what we studied in computer science. We were writin' code. We were going to automate things. Now there are notions of not just math, but cognition, human relations. Your thoughts on this? >> Yeah, you know what's interesting is that I think ultimately it boils down to the user experience. And I think there is this part of it which is around humanization, but then ultimately it boils down to what are you trying to do? And how well are you doing it with this technology? And I think that example around the Roomba is very interesting, where I think people kind of feel like this is almost like a person. But I also think we should focus as well on what the technology is doing, and what impact it's having. My best example of this is Google Photos. And so, my whole family uses Google Photos, and they don't know that underlying it is some of the most powerful AI in the world. All they know is that they can find pictures of our kids and their grandkids on the beach anytime that they want. And so ultimately, I think it boils down to what is the AI doing for the people? And how is it?
So with this idea of how do you figure out ethics in today's modern society with it being a mirror? Do we throw it all away and do a do-over, can we recast this? Can we start over? Do we augment? What's the approach that you guys see that we might need to go through right now to really, not hold back AI, but let it continue to grow and accelerate, educate people, bring value and user experience to the table? What is the path? We'll start with Barry, and then Kate, and then Rajen. >> Yeah, I can kick off. I think ethics gets us some of the way there, right? So, obviously we need to have a set of principles that we sign up to and agree upon. And there are literally hundreds of documents on AI ethics. I think in Europe, for example, there are 128 different documents around AI ethics, I mean policy documents. But, you know, we have to think about how are we actually going to make this happen in the real world? And I think, you know, if you take the aviation industry: we trust in airplanes because we understand that they're built to the highest standards, that they're tested rigorously, and that the organizations that are building these things are held to account when things go wrong. And I think we need to do something similar in AI. We need good strong engineering, and you know, ethics is fantastic, and I'm a strong believer in ethical codes, but we do need to make it practical. And we do need to figure out a way of having the public trust in AI systems, and that comes back to education. So, I think we need the general public, and indeed ourselves, to be a little more cynical and questioning when we hear stories in the media about AI, because a lot of it is hyped. You know, and that's because researchers want to describe their research in an exciting way, but also, newspaper people and media people want to have a sticky subject. But I think we do need to have a society that can look at these technologies and really critique them and understand what's been said.
And I think a healthy dose of cynicism is not going to do us any harm. >> So, modernization, do you change the ethical definition? Kate, what's your thoughts on all this? >> Well, I love that Barry brought up the aviation industry because I think that right now we're kind of an industry in its infancy, but if we look at how other industries have evolved to deal with some thorny ethical issues, like for example, medicine. You know, medicine had to develop a whole code of ethics, and develop a bunch of standards. If you look at aviation or other transportation industries, they've had to deal with a lot of things like public perception of what the technology can and can't do, and so you look at the growing pains that those industries have gone through, and then you add in some modern insight about interdisciplinary, about diversity, and tech development generally. Getting people together who have different experiences, different life experiences, when you're developing the technology, and I think we don't have to rebuild the wheel here. >> Yep. >> Rajen, your thoughts on the path forward, throw it all away, rebuild, what do we do? >> Yeah, so I think this is a really interesting one because of all the technologies I've worked in within my career, everything from the internet, to mobile, to virtualization, this is probably the most powerful potential for human good out there. And AI, the potential of what it can do is greater than almost anything else that's out there. However, I do think that people's perception of what it's going to do is a little bit skewed. So when people think of AI, they think of self-driving cars and robots and things like that. And that's not the reality of what AI is today. And so I think two things are important. One is to actually look at the reality of what AI is doing today and where it impacts people lives. Like, how does it personalize customer interactions? How does it make things more efficient? 
How do we spot things that we never were able to spot before? And start there, and then apply the ethics that we've already known for years and years and years, but adapt it in a way that actually makes sense for this. >> Okay, that's it for Around theCUBE. Looks like we've tallied up. Looks like Professor Barry with 11, in third place; Kate in second place with 13. Rajen with 16 points, you're the winner, so you get the last word on the segment here. Share your final thoughts on this panel. >> Well, I think it's really important that we're having this conversation right now. I think back to 1994 when the internet first started. People did not have that conversation nearly as much at that point, and the internet has done some amazing things, and there have been some bad side effects. I think with this, if we have this conversation now, we have this opportunity to shape this technology in a very, very positive way as we go forward. >> Thank you so much, and thanks everyone for taking the time to come in. All the way from Cork, Ireland, Professor Barry O'Sullivan. Dr. Kate Darling, doing some amazing research at MIT on robotics and human psychology, with a new book coming out. Kate, thanks for coming out. And Rajen, thanks for winning and sharing your thoughts. Thanks everyone for coming. This is Around theCUBE here, the Unpacking AI segment around ethics and human interaction and societal impact. I'm John Furrier with theCUBE. Thanks for watching. (upbeat music)