The Truth About AI and RPA | UiPath
>> From the SiliconANGLE Media Office in Boston, Massachusetts, it's theCUBE! (techno music) Now, here's your host, Stu Miniman.

>> Hi, I'm Stu Miniman and this is a Cube Conversation from our Boston area studio. Welcome back to the program, Bobby Patrick, who is the Chief Marketing Officer of UiPath. Bobby, good to see you.

>> Great to be here, Stu.

>> Alright. Bobby, we're going to tackle head-on an interesting discussion that's been going on in the industry. Of course, Artificial Intelligence is this wave that is impacting a lot; when you look at earnings reports, everyone's talking about it. Most companies are understanding how they're doing it. It is not a new term. I go back, reading my history of technology, to Ada Lovelace, 150 years ago, when she was helping to define what a computer was. She made the Lovelace objection, I believe they said -

>> Right.

>> Which was later quoted by Turing and the like: that if we can describe it in code, it's probably not Artificial Intelligence, because they're not building new things -

>> Right.

>> And being able to change on their own. So there's hype around AI itself, but UiPath is one of the leaders in Robotic Process Automation, and how that fits in with AI and Machine Learning and all of these other terms can get to be a bit of an acronym soup, and we all can't agree on what the terms are. So, let's start with some of the basics, Bobby. Please give us RPA and AI and we'll get into it from there.

>> Well, Robotic Process Automation, according to the analysts like Forrester, is part of the overall, broader AI market - a massive, massive market. AI itself has many different routes: deep learning, right, and machine learning, natural language processing, and so on. I think AI is a term that covers many different grounds. And in RPA, AI applies two ways. It applies within RPA in that we have a technology called Computer Vision. It's how a robot looks at a screen the way a human does, which is very, very difficult, actually. You look at a Citrix terminal session, or a VDI session - different than an Excel sheet, different than a SaaS app - and most processes cross all of those, so a robot has to be able to look at all of those screen elements and understand them, right. AI within Computer Vision around understanding documents, looking at unstructured data, looking at handwriting. Conversational understanding. Looking at text in an email, determining context, helping with chatbots. There are a number of those components, but that doesn't mean we have to build all of that ourselves. What RPA does is bring it all together. We make it easy to automate and build and create the data flow of a process. Then you can apply AI to that, right. So, I think two years ago, when I first joined UiPath, putting RPA and AI in the same sentence, people laughed. A year ago we said, you know what, RPA is really the path to AI in business operations. Now, you know, we say that we're the most highly valued AI company in the world and no one has ever disagreed.
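To make that screen-reading point a little more concrete, here is a minimal, hypothetical sketch of what reading pixels instead of calling an API looks like. It assumes the open-source Pillow and pytesseract packages (with a local Tesseract install) and made-up screen coordinates; it is only an illustration of the general idea, not UiPath's Computer Vision technology.

```python
# Minimal sketch of pixel-level screen reading, assuming Pillow and
# pytesseract are installed (plus a local Tesseract binary). This is an
# illustration only -- a Citrix/VDI window exposes no DOM, no API, and no
# field names, so all a robot gets is whatever it can recognize in pixels.
from PIL import ImageGrab          # captures the screen as a bitmap
import pytesseract                 # wrapper around the open-source OCR engine

def read_screen_region(left, top, right, bottom):
    """OCR a rectangular region of the screen and return its text."""
    screenshot = ImageGrab.grab(bbox=(left, top, right, bottom))
    return pytesseract.image_to_string(screenshot)

if __name__ == "__main__":
    # Hypothetical coordinates of an "Invoice Total" field in a remote session.
    text = read_screen_region(400, 300, 700, 340)
    print("Recognized field text:", text.strip())
```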
>> Yeah, so it's good to lay that out, because one of the things to look at and say is, if I looked at this product two or three years ago, it's not the product that it is today. We know how fast software -

>> Right.

>> - is making changes along the line. Second thing, automation itself is something we've been talking about my entire career.

>> Right.

>> When I look at things we were doing 5, 10, 15 years ago and calling automation, we kind of laugh at it, because today automation absolutely is making a lot of changes. RPA is taking that automation in a very strategic direction for many companies. The conversation we had last year at your conference was that RPA is the gateway drug, if you will -

>> Right.

>> - of that environment, because automation has scared a lot of people. Am I just doing scripts, what do I control, what do I set? Maybe just give us that first grounding of where that automation path has come from and is going.

>> So, there are different kinds of automation, right, as you said. We've had automation for decades, primarily in IT. Automation was primarily around API-to-API integration. And that's really hard, right. It requires developers, engineers; it requires them to keep it current. It's expensive and takes a longer time. Along comes the technology, RPA and UiPath, right, where you can automate fairly quickly. There are built-in recorders and you can do it with drag and drop, like a flow chart. You can automate a process, and that automation is immediately beneficial - meaning the outcome is immediate. And the cost of doing that is small in comparison. And I think maybe it's the long tail of automation in some ways. It's all of these things that we do around an SAP process. The reality is, if you have SAP, or you have Oracle, or you have Workday, the human processes around that still involve a spreadsheet. It involves PDF documents. One of my favorite examples right now, on YouTube with Microsoft, is Chevron. Chevron has hundreds of thousands of PDFs that are generated from every oil rig every day. It has all kinds of data in different formats - tables, different structured and semi-structured data. They would actually extract that data manually to be able to process and analyze it, right. Working with Microsoft AI and UiPath RPA, they're able to automate that entire massive process. And now they're on stage talking about it at Microsoft and UiPath events, right. And they call that AI. That's applying AI to a massive problem for them. They need the robot to be completely accurate, though. You don't want to worry that the data being extracted from the PDFs is inaccurate, right. So Machine Learning goes into that. There's exception management that's a part of that process as well. They call it AI.

>> Yeah, some of this is just, people in the industry, the industry watchers, we get very particular on different terminology. Let's not conflate Artificial Intelligence, or Augmented Intelligence, with Machine Learning, because they're different environments. I've heard Forrester talk about, right, it's a spectrum though, there's an umbrella for some of these. So we try not to get too pedantic on individual terms.

>> Right.

>> Um -

>> Let me give you more examples. I think the term robotic and RPA - yes, it's true that the vast majority of the last couple of years with RPA have been very rules-based, right. Because most processes today, like in a call center, there's a rule: do this and this, then this and this. And so you're automating that same rules-based structure. But once that data's flowing through, you can actually then look at the history of that data and turn a rules-based automation into an experience-based automation. And how do you do that? You apply Machine Learning algorithms. You apply DataRobot, LMAI, IBM Watson to it, right. But it's still the RPA platform that is driving that automation; it's just no longer rules-based, it's experience-based.
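As a rough, hedged illustration of that shift from rules to experience, the sketch below trains a simple model on the history of decisions a rules-based process has already produced and then uses it to score new cases. It assumes scikit-learn and a hypothetical `case_history.csv`; it is one way to express the idea, not a UiPath artifact.

```python
# Hedged sketch: turn a rules-based decision into an experience-based one by
# learning from the history of past decisions. Assumes scikit-learn and a
# hypothetical file `case_history.csv` with numeric columns `amount` and
# `days_open`, plus a 0/1 outcome column `approved` (what humans decided).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

history = pd.read_csv("case_history.csv")
X = history[["amount", "days_open"]]   # the features the old rule looked at
y = history["approved"]                # the decisions humans actually made

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("Hold-out accuracy:", model.score(X_test, y_test))

def decide(amount, days_open):
    """What the automation calls in place of the old hard-coded rule."""
    case = pd.DataFrame([[amount, days_open]], columns=["amount", "days_open"])
    return "approve" if model.predict(case)[0] else "route to a human"
```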
A great example at UiPath Together Dubai recently was Dubai Customs. They had a process where, when you declared something - let's say your box of chocolate - they had to open up a binder and find a classification code for that box of chocolate. Well, they use our RPA product and they make a call out to IBM Watson as part of the automation, and they just write in "pink box of candy-filled chocolate." It takes its deep learning, it comes back with a classification code, all part of an automated process. What happens? Dubai Customs lines go from being two hours to a few minutes, right. It's a combination of our RPA capability and our automation board capability and the ability to bring in IBM Watson. Dubai Customs says they applied AI and solved a big problem.
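A hedged, generic stand-in for that kind of call-out is sketched below: a free-text item description goes in, a classification code comes back. This is not the IBM Watson API or a UiPath activity; it assumes scikit-learn and a hypothetical labeled file `declarations.csv` with `description` and `hs_code` columns.

```python
# Generic stand-in for the "call out to a classifier" step described above.
# NOT the IBM Watson API -- just a simple text classifier that shows the
# shape of the exchange: free-text description in, classification code out.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled = pd.read_csv("declarations.csv")   # hypothetical historical declarations
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(labeled["description"], labeled["hs_code"])

# Inside the automation, the robot passes along whatever the officer typed:
code = classifier.predict(["pink box of candy filled chocolate"])[0]
print("Suggested classification code:", code)
```

In production, the classifier would sit behind a service (Watson, in the story above) and the RPA workflow would simply make the call and consume the returned code.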
>> One of the things I was reading through in the recent Gartner Magic Quadrant on RPA: they had two classifications. One was, kind of, the automation does it all, and the other was people and machines. Things like chatbots, some of the examples you've been giving, seem to be that combination. Where do those two fit together, or are those distinctions that you make?

>> Yeah, I mean, Gartner's interesting. Gartner's a very IT-centric analyst firm, right, and IT, in my view, are often very conventional thinkers and not the fastest to adopt breakthrough technologies. They weren't the fastest to adopt cloud, they weren't the fastest to adopt on-demand CRM, and they weren't the fastest to jump onto RPA, because they believe, why can't we use an API for everything. And the Gartner analysts, in the beginning of the Magic Quadrant process, spent a lot of time with us, and they were trying hard to say that you should solve everything with an API. That's just not reality, right? It's not feasible, and it's not affordable, right? But RPA is not just the automation of a task or process; it's then applying a whole set of other technologies. We have 700 partners today in our ecosystem. Natural language processing partners, right. Machine learning partners. Chatbot partners, you mentioned. So we want to make it very easy, in a drag-and-drop way, to apply these great technologies to an automation to solve some big problem. What's fun to me right now is there are a lot of great startups. They come out of, say, insurance, or they come out of financial services, and they've got a great algorithm and they know the business really well. And they probably have one or two amazing customers, and they're stuck. For them - this came from a partner of ours - you, UiPath, are becoming our best route to market, because you have the data. You have the workflow. Our job, I think, in some ways, is to make it easy to bring these technologies together and apply them to an automation in a democratized way, where a non-engineer can do this. And I think that's what's happening.

>> Yeah, those integrations between environments can be very powerful, something we see. Every shop has lots of applications, has lots of technical data, and they're not just sweeping the floor of everything they have. What are some of the limits of AI and RPA today? Where do you see things going?

>> I think Deep Learning - we see very little of that. It's probably applied to some kind of science project and things within companies. I think for the vast majority of our customers, they use machine learning within RPA for Computer Vision by default. But, you know, they're still not really at a stage of mass adoption of deciding what algorithms they want to apply to a process. I think we're trying to make it easier for you to be able to drag and drop AI, we call it, to make it easier to apply. But I think we're in very early days. And as you mentioned, there's market confusion on it. I know one thing from our 90-plus customers that are on our advisory boards. I know from them that they say their companies struggle with finding an ROI in AI, and, you know, I think we're helping there because we're applying it to real operations. They say the same thing about Blockchain. I don't know, Stu. Do you know of a single example of a Blockchain ROI, a great example?

>> Yeah, it reminds me, Big Data was one of those - over half of the people failed to get the ROI they wanted. It's one of those promises of certain technology -

>> Right.

>> That high-level, you know, let's poo-poo, Bobby, things that actually have tangible results -

>> Yeah.

>> And get things done. But you weren't following the strict guidelines of the API economy.

>> Right, well, true, exactly right. What I find amazing is, I mentioned in another one of our conversations that 23,000 people have come to UiPath events this year. To our own events, not trade events and other shows - that's different. They want to get on stage and talk. They're delighted about this. And they're talking about, generally speaking, RPA helping them go digital. But they're all saying their ambition is to apply AI to make those processes smarter. To learn from - to go from rules-based to experience-based. I think what's beautiful about UiPath is that we're a platform that you can get there with over time. You can't necessarily predict the algorithms you're going to want to use in two or three years. We're not going to force you; you can apply any algorithm you want to the automation work going through. I think that flexibility, for customers, they find it very comforting.

>> It's one of those things I say: most companies have a cloud strategy. That needs to be written in pencil, not etched in stone. You need to revisit it every quarter. Same thing with what's happening in AI, and in your space things are changing so fast that they need to be agile.

>> That's right.

>> They need to be able to make changes. In October, you're going to have a lot of those customers up on stage talking. Where will this AI discussion fit into UiPath Forward in Las Vegas?

>> We talk a lot about our AI Fabric framework. It's around document understanding - robots getting smarter and smarter about what they see on the screen, what they see on a document, what they see with handwriting - and improving the accuracy of visual understanding. Looking at face recognition and other types of images and being able to understand the images. Conversational understanding. The tone of an email - is this person really upset? How upset? Or a conversational chatbot. Really evolving from mimicking humans with RPA to augmenting humans. And I think with that story, both in the innovations and the customer examples on stage, you're going to see the sophistication of the automations that are being used through UiPath grow exponentially.

>> Okay, so I want to give you the final word on this. And I don't want to talk about the people that might poo-poo or argue RPA and AI and ML and all these things. Bring us inside your customers. Where, how does that conversation start? Are they coming at it from AI, ML, RPA, or is there, you know, a business discussion that usually catalyzes this engagement?

>> Our customers are starting with digital. They're trying to go digital.
They know they need digital transformation; it's been very, very hard. There's a real outcome that comes quickly from taking a mundane task that is expensive and automating it. The outcomes are quick - often projects that involve our partners like Accenture and others. The payback period on the entire project with RPA can be six months; it's self-funding. What other technology in B2B is self-funding in one year? That's part of the incredible adoption growth. But every single customer doesn't stop there. They say, okay, I also want to know that I can go apply AI to this automation. It's in every conversation. So there are two big booms with UiPath and our RPA. The first is when you go digital, there's some great outcome. There's productivity gain, it's immediate, right - I guess I said the payback period is quick. The second big one is when you go and turn it from a rules-based to an experience-based process, or you apply AI to it; there's another set of business benefits down the road. As more algorithms come out, you keep applying them to it. This is sort of the gift that keeps on giving. I think if we didn't have that connection to Machine Learning or AI, the enthusiasm level of the majority of our customers would not be anywhere near what it is today.

>> Alright, well, Bobby, really appreciate digging into the customer reality - RPA, AI, all the acronym soup that was going on - and we look forward to UiPath Forward at the Bellagio in Las Vegas this October.

>> It'll be fun.

>> Alright, I'm Stu Miniman. As always, thank you so much for watching theCUBE.
Cortnie Abercrombie & Carl Gerber | MIT CDOIQ 2018
>> Live from the MIT campus in Cambridge, Massachusetts, it's theCUBE, covering the 12th Annual MIT Chief Data Officer and Information Quality Symposium. Brought to you by SiliconANGLE Media.

>> Welcome back to theCUBE's coverage of MIT CDOIQ here in Cambridge, Massachusetts. I'm your host, Rebecca Knight, along with my cohost, Peter Burris. We have two guests on this segment. We have Cortnie Abercrombie, she is the founder of the nonprofit AI Truth, and Carl Gerber, who is the managing partner at Global Data Analytics Leaders. Thanks so much for coming on theCUBE, Cortnie and Carl.

>> Thank you.

>> Thank you.

>> So I want to start by just having you introduce yourselves to our viewers, what you do. So tell us a little bit about AI Truth, Cortnie.

>> So this was born out of a passion. The last gig I had at IBM - everybody knows me for chief data officer work and what I did with that, but the more recent role I had was developing custom offerings for the Fortune 500 in the AI solutions area. So as I would go meet and see different clients, and talk with them, and start to look at different processes for how you implement AI solutions, it became very clear that not everybody is attuned - just because they're the ones funding the project, or even initiating the purpose of the project, the business leaders don't necessarily know how these things work or run, or what can go wrong with them. And on the flip side of that, we have very ambitious, up-and-comer-type data scientists who are just trying to fulfill the mission, you know, the talent at hand, and they get really swept up in it. To the point where you can even see that data's getting bartered back and forth without any real governance over it, or policies in place to say, "Hey, is that right? Should we have gotten that kind of information?" Which leads us into things like the creepy factor. Like, you know, Target (laughs) and some of these cases that are well-known. And so, as I saw some of these mistakes happening that were costing brand reputation, or return on investment, or possibly even creating opportunities for risk for the companies and for the business leaders, I felt like someone's got to take one for the team here and go out and start educating people on how this stuff actually works, what the issues can be and how to prevent those issues, and then also what do you do when things do go wrong, how do you fix it? So that's the mission of AI Truth, and I have a book. Yes, power to the people, but you know, really my main concern was concerned individuals, because I think we've all been affected when we've sent an email and all of a sudden we get a weird ad, and we're like, "Hey, what, they should not - is somebody reading my email?" You know, and we feel this, just, offense -

>> And the answer is yes.

>> Yes, and they are, they are. So we need to know, because the only way we can empower ourselves to do something is to actually know how it works. So that's what my mission is trying to do. So, for the concerned individuals out there, I am writing a book to kind of encapsulate all the experiences that I had, so people know where to look and what they can actually do, because you'll be less fearful if you know, "Hey, I can download DuckDuckGo for my search engine, and Epic for my browser," and some private offerings instead of the typical free offerings. There's not an answer for Facebook yet, though.

>> So, (laughs) we'll get there.
Carl, tell us a little bit about Global Data Analytics Leaders.

>> So, I launched Analytics Leaders and CDO Coach after a long career in corporate America. I started building an executive information system when I was in the military, for a four-star commander, and I've really done a lot in data analytics throughout my career. Most recently, starting a CDO function at two large multinational companies and leading global transformation programs. And what I've experienced is, even though the industries may vary a little bit, the challenges are the same and the patterns of behavior are the same - both the good and bad behavior, the bad habits around the data. And through the course of my career, I've developed these frameworks and playbooks and just ways to get a repeatable outcome, and bring these new technologies like machine learning to bear, to really overcome the challenges that I've seen. And what I've seen is a lot of the current thinking is that we're solving these data management problems manually. You know, we all hear the complaints about the people who are analysts and data scientists spending 70, 80% of their time being data gatherers and not really generating insight from the data itself and making it actionable. Well, that's why we have computer systems, right? But that large-scale technology and automation hasn't really served us well, because we think in silos, right? We fund these projects based on departments and divisions. We acquire companies through mergers and acquisitions. And the CDO role has emerged because we need to think about all the data that an enterprise uses horizontally. And with that, I bring a high degree of automation, things like machine learning, to solve those problems. So I'm now bottling that and advising my clients. And at the same time, the CDO role is where the CIO role was 20 years ago. We're really in its infancy, and so you see companies define it differently, have different expectations. People are filling the roles that may have not done this before, and so I provide the coaching services there. It's like a professional golfer who has a swing coach. So I come in and I help the data executives with upping their game.

>> Well, it's interesting, I actually said the CIO role of 40 years ago. But here's why. If we look back in the 1970s, hardcore financial systems were made possible by the technology, which allowed us to run businesses like a portfolio: Jack Welch, the GE model. That was not possible if you didn't have a common asset management system, if you didn't have a common cash management system, etc. And so, when we started creating those common systems, we needed someone that could describe how that shared asset was going to be used within the organization. And we went from the DP manager in HR, the DP manager within finance, to the CIO. And in many respects, we're doing the same thing, right? We're talking about data in a lot of different places, and now the business is saying, "We can bring this data together in new and interesting ways into more of a shared asset, and we need someone that can help administer that process and, you know, navigate between different groups and different needs and whatnot." Is that kind of what you guys are seeing?

>> Oh yeah.

>> Yeah.

>> Well, you know, once I get to talking (laughs). For me, I can go right back to the newer technologies like AI and IoT that are coming from externally into your organization, and then also the fact that we're seeing the bartering of data
at an unprecedented level. And yet, what the chief data officer role originally did was look at data internally, and structured data mostly. But now we're asking them to step out of their comfort zone and start looking at all these unknown, niche data broker firms that may or may not be ethical in how they're... I mean, look, I tell people, "If you hear the word scrape, you run." No scraping, we don't want scraped data, no, no, no (laughs). But I mean, that's what we're talking about -

>> Well, what do you mean by scraped data, 'cause that's important?

>> Well, this is a well-known data science practice. And it's not that... nobody's being malicious here, nobody has mal-intent, but I think it's just that data scientists are scruffy - they roll up their sleeves and they get data however they can. And so the practice emerged. Look, they're built off of open-source software and everything's free, right, for them, for the most part? So they just start reading in screens and things that are available that you can see; they can optical-character-read it in, or they can do it however, without having to have a subscription to any of that data, without having to have permission to any of that data. It's, "I can see it, so it's mine." But you know, that doesn't work in candy stores - or jewelry stores, in my case. I mean, you can't just say, "I like that diamond earring, I'm just going to take it because I can see it." (laughs) So, I mean, yeah... that's scraping, though.

>> And the implications of that are, suddenly now you've got a great new business initiative and somebody finds out that you used their private data in that initiative, and now they've got a claim on that asset.

>> Right. And this is where things start to get super hairy, and you just want to make sure that you're on the up-and-up with your data practices and your data ethics, because, in my opinion, 90% of what's gone wrong in AI, or the fear factor of AI, is that your privacy's getting violated and then you're labeled with data that you may or may not even know exists half the time. I mean -

>> So, what's the answer? I mean, as you were talking about, these data scientists are scrappy, scruffy, roll-up-your-sleeves kind of people, and they are coming up with new ideas, new innovations that sometimes are good -

>> Oh yes, they are.

>> So what is the answer? Is there a code of ethics? Is it something similar to a Hippocratic Oath? I mean, what do you think?

>> So, it's a multidimensional problem. Cortnie and I were talking earlier that you have to have more transparency into the models you're creating, and that means a significant validation process. And that's where the chief data officer partners with folks in risk and other areas and the data science team, around getting more transparency and visibility into: what's the data that's feeding into it? Is it really the authoritative data of the company? And as Cortnie points out, do we even have the rights to that data that's feeding our models? And so, by bringing that transparency and a little more validation before you actually start making key, bet-the-business decisions on the outcomes of these models, you need to look at how you're vetting them.

>> And the vetting process is part technology, part culture, part process; it goes back to that people, process, technology thing.

>> Yeah, absolutely. Know where your data came from. Why are you doing this model?
What are you going to do with the outcomes? Are you actually going to do something with it, or are you going to ignore it? Under what conditions will you empower a decision-maker to use the information that is the output of the model? A lot of these things you have to think through when you want to operationalize it. It's not just, "I'm going to go get a bunch of data wherever I can, I put a model together. Here, don't you like the results?"

>> But this is the Silicon Valley way, right? An MVP for everything and you just let it run until... you can't.

>> That's a great point, Cortnie (laughs). I've always believed, and I want to test this with you: we talk about people, process, and technology about information; we never talk about people, process, and technology for information about information. In a manner of respects, what we're talking about is making explicit the information about information - the metadata - and how we manage that, and how we treat that, and how we diffuse that, and how we turn that, the metadata itself, into models to try to govern and guide utilization of this. That's especially important in the AI world, isn't it?

>> I start with this. For me, it's simple - I mean, everything he said was true, but I try to keep it to this: it's about free will. If I said you can do that with my data, to me it's always my data. I don't care if it's on Facebook, I don't care where it is, and I don't care if it's free or not, it's still my data. Even if it's 23andMe and they've taken the swab, or whether it's Facebook, or I did a Google search, I don't care, it's still my data. So if you ask me if it's okay to do a certain type of thing, then maybe I will consent to that. But I should at least be given an option and, you know, be given the transparency. So it's all about free will. So in my mind, as long as you're always providing some sort of free will (laughs) - the ability for me to have a decision to say, "Yes, I want to participate in that," or, "Yes, you can label me as whatever label I'm getting - Trump or pro-Hillary or Obama, whatever, name whatever the issue of the day is" - then I'm okay with that, as long as I get a choice.

>> Let's go back to it - I want to build on that if I can, and then I want to ask you a question about it, Carl. The issue of free will presupposes that both sides know exactly what's going into the data. So for example, if I have a medical procedure, I can sign that form and I can say, "Whatever happens is my responsibility." But if bad things happen because of malfeasance, guess what? That piece of paper's worthless and I can sue. Because the doctor and the medical provider are supposed to know more about what's going on than I do.

>> Right.

>> Does the same thing exist? You talked earlier about governance and some of the culture imperatives and transparency - doesn't that same thing exist? And I'm going to ask you a question: isn't that part of what your nonprofit is trying to do, raise the bar for everybody? But doesn't that same notion exist, that at the end of the day, you do have information asymmetries - both sides don't know how the data's being used, because of the nature of data?

>> Right. That's why you're seeing the emergence of all these data privacy laws. And so what I'm advising executives and the board and my clients is, we need to step back and think bigger about this. We need to think about it as not just GDPR, the European scope - it's global data privacy. And if we look at the motivation, why are we doing this?
Are we doing it just because we have to be regulatory-compliant, 'cause there's a law on the books, or should we reframe it and say, "This is really about the user experience, the customer experience"? This is a touchpoint that my customers have with my company. How transparent should I be with what data I have about you, how I'm using it, how I'm sharing it? And is there a way that I can turn this into a positive, instead of just, "I'm doing this because I have to for regulatory compliance"? And so, I believe if you really examine the motivation and look at it from more of the carrot and less of the stick, you're going to find that you're more motivated to do it, you're going to be more transparent with your customers, and you're going to share, and you're ultimately going to protect that data more closely because you want to build that trust with your customers. And then lastly, let's face it, this is the data we want to analyze, right? This is the authenticated data we want to give to the data scientists. So I just flip that whole thing on its head: we do it for these reasons, and we increase the transparency and trust.

>> So Cortnie, let me bring it back to you.

>> Okay.

>> That presupposes, again, an up-leveling of knowledge about data privacy, not just for the executive but also for the consumer. How are you going to do that?

>> Personally, I'm going to come back to free will again, and I'm also going to add: harm impacts. We need to start thinking impact assessments instead of governance, quite frankly. We need to start looking at: if I, you know, start using a FICO score as a proxy for another piece of information - like a crime record in a certain district or whatever - as a way to understand how responsible you are and whether or not your car is going to get broken into, now you have to pay more. Well, if you always use a FICO score, for example, as a proxy for responsibility - which, let's face it, once a data scientist latches onto something, they share it with everybody, 'cause that's how they are, right? They love that, and I love that about them, quite frankly. But what I don't like is that it propagates, and then before you know it, for the people who are of lesser financial means, it's getting propagated, because now every AI pricing model is going to use FICO score as a -

>> And they're priced out of the market.

>> And they're priced out of the market, and how is that fair? And there's a whole group - I think you know about the Fairness, Accountability, and Transparency group - that, you know, kind of watchdogs this stuff. But I think business leaders as a whole don't really think through to that level, like, "If I do this, then this, this, and this could occur -"

>> So what would be the one thing you could say, if corporate America's listening?

>> Let's do impact. Let's do impact assessments. If you're going to cost someone their livelihood, or you're going to cost them thousands of dollars, then let's put more scrutiny, let's put more governance and validation on it. To your point, let's put some... 'cause not everything needs the nth level. Like, if I present you with a blue sweater instead of a red sweater on Google or whatever (laughs), you know, that's not going to harm you. But it will harm you if I give you a teacher assessment that's based on something you have no control over, and now you're fired because you've been laid off 'cause your rating was bad.

>> This is a great conversation. Let me... Let me add something different, 'cause...
Or say it a different way, and tell me if you agree. In many respects, it's: does this practice increase inclusion, or does this practice decrease inclusion? This is not some goofy social thing; this is: are you making your market bigger or are you making your market smaller? Because the last thing you want is that participation by people ends with: you can't play because of some algorithmic response we had. So maybe the question of inclusion becomes a key issue. Would you agree with that?

>> I do agree with it, and I still think there are levels even to inclusion.

>> Of course.

>> Like, you know, being a part of the blue sweater club versus (laughs) versus "I don't want to be a convict," you know, suddenly, because of some record you found or an association with someone else. And let's just face it, a lot of these algorithmic models do do these kinds of things, where they use n+1, you know - you know what I'm saying. And so you're associated naturally with the next person closest to you, and that's not always the right thing to do, right? So, in some ways - and I'm positing just a little bit of a new idea here - you're creating some policies, whether you're being implicit about them or explicit. More likely you're being implicit, because you're just summarily deciding. Well, okay, I have just decided, in the credit score example, that if you don't have a good credit threshold... But where in your policies and your corporate policy did it ever say that people of lesser financial means should be excluded from being able to have good car insurance? 'Cause now - the same goes with, like, Facebook. Some people feel like they're going to have to opt out of life, I mean, if they don't -

>> (laughs) Opt out of life.

>> I mean, like, seriously, when you think about grandparents who are excluded, you know, out in whatever Timbuktu place they live, and all their families are somewhere else, and the only way that they get to see them is, you know, on Facebook.

>> Go back to the issue you raised earlier about "somebody read my email." I can tell you, as a person with a couple of more elderly grandparents, they inadvertently shared some information with me on Facebook about a health condition that they had. You know how grotesque the response of Facebook was to that? And it affected me too, because they had my name in it. They didn't know any better.

>> Sometimes there's a stigma. Sometimes things become a stigma as well. There's an emotional response. When I put the article out about why I left IBM to start this new AI Truth nonprofit, the responses I got back that were so immediate were emotional responses about how this stuff affects people. That they're scared of what this means. Can people come after my kids or my grandkids? And if you think about how genetic information can get used, you're not just hosing yourself. I mean, breast cancer genes, I believe, aren't they, like... they run through families, so I -

>> And they're pretty well understood.

>> If someone swabs me, and uses it and swaps it with other data, you know, all of a sudden not just me is affected, but my whole entire lineage. I mean... it's hard to think of that, but... it's true (laughs).

>> These are real life and death... these are -

>> Not just today, but for the future. And in many respects, it's that notion of inclusion...
Going back to it - now I'm making something up, but not entirely - but going back to some of the stuff that you were talking about, Carl: the decisions we make about data today, we want to ensure that we know there's value in the options for how we use that data in the future. So the issue of inclusion is not just about people, but it's also about other activities, or other things that we might be able to do with data, because of the nature of data. I think we always have to have an options approach to thinking about... as we make data decisions. Would you agree with that?

>> Yes, because, you know, data's not absolute. So you can measure something and you can look at the data quality, you can look at the inputs to a model, whatever, but you still have to have that human element of, "Are we doing the right thing?" You know, the data should guide us in our decisions, but I don't think it's ever an absolute. It's a range of options, and we chose this option for this reason.

>> Right, so are we doing the right thing, and do no harm too? Carl, Cortnie, we could talk all day; this has been a really fun conversation.

>> Oh yeah, and we have. (laughter)

>> But we're out of time. I'm Rebecca Knight, for Peter Burris. We will have more from MIT CDOIQ in just a little bit. (upbeat music)