AI for Good Panel - Precision Medicine - SXSW 2017 - #IntelAI - #theCUBE
>> Welcome to the Intel AI Lounge. Today, we're very excited to share with you the Precision Medicine panel discussion. I'll be moderating the session. My name is Kay Erin. I'm the general manager of Health and Life Sciences at Intel. And I'm excited to share with you these three panelists that we have here. First is John Madison. He is a chief information medical officer and he is part of Kaiser Permanente. We're very excited to have you here. Thank you, John. >> Thank you. >> We also have Naveen Rao. He is the VP and general manager for the Artificial Intelligence Solutions at Intel. He's also the former CEO of Nervana, which was acquired by Intel. And we also have Bob Rogers, who's the chief data scientist at our AI solutions group. So, why don't we get started with our questions. I'm going to ask each of the panelists to talk, introduce themselves, as well as talk about how they got started with AI. So why don't we start with John? >> Sure, so can you hear me okay in the back? Can you hear? Okay, cool. So, I am a recovering evolutionary biologist and a recovering physician and a recovering geek. And I implemented the health record system for the first and largest region of Kaiser Permanente. And it's pretty obvious that most of the useful data in a health record, in lies in free text. So I started up a natural language processing team to be able to mine free text about a dozen years ago. So we can do things with that that you can't otherwise get out of health information. I'll give you an example. I read an article online from the New England Journal of Medicine about four years ago that said over half of all people who have had their spleen taken out were not properly vaccinated for a common form of pneumonia, and when your spleen's missing, you must have that vaccine or you die a very sudden death with sepsis. In fact, our medical director in Northern California's father died of that exact same scenario. So, when I read the article, I went to my structured data analytics team and to my natural language processing team and said please show me everybody who has had their spleen taken out and hasn't been appropriately vaccinated and we ran through about 20 million records in about three hours with the NLP team, and it took about three weeks with a structured data analytics team. That sounds counterintuitive but it actually happened that way. And it's not a competition for time only. It's a competition for quality and sensitivity and specificity. So we were able to indentify all of our members who had their spleen taken out, who should've had a pneumococcal vaccine. We vaccinated them and there are a number of people alive today who otherwise would've died absent that capability. So people don't really commonly associate natural language processing with machine learning, but in fact, natural language processing relies heavily and is the first really, highly successful example of machine learning. So we've done dozens of similar projects, mining free text data in millions of records very efficiently, very effectively. But it really helped advance the quality of care and reduce the cost of care. It's a natural step forward to go into the world of personalized medicine with the arrival of a 100-dollar genome, which is actually what it costs today to do a full genome sequence. Microbiomics, that is the ecosystem of bacteria that are in every organ of the body actually. And we know now that there is a profound influence of what's in our gut and how we metabolize drugs, what diseases we get. 
You can tell in a five year old, whether or not they were born by a vaginal delivery or a C-section delivery by virtue of the bacteria in the gut five years later. So if you look at the complexity of the data that exists in the genome, in the microbiome, in the health record with free text and you look at all the other sources of data like this streaming data from my wearable monitor that I'm part of a research study on Precision Medicine out of Stanford, there is a vast amount of disparate data, not to mention all the imaging, that really can collectively produce much more useful information to advance our understanding of science, and to advance our understanding of every individual. And then we can do the mash up of a much broader range of science in health care with a much deeper sense of data from an individual and to do that with structured questions and structured data is very yesterday. The only way we're going to be able to disambiguate those data and be able to operate on those data in concert and generate real useful answers from the broad array of data types and the massive quantity of data, is to let loose machine learning on all of those data substrates. So my team is moving down that pathway and we're very excited about the future prospects for doing that. >> Yeah, great. I think that's actually some of the things I'm very excited about in the future with some of the technologies we're developing. My background, I started actually being fascinated with computation in biological forms when I was nine. Reading and watching sci-fi, I was kind of a big dork which I pretty much still am. I haven't really changed a whole lot. Just basically seeing that machines really aren't all that different from biological entities, right? We are biological machines and kind of understanding how a computer works and how we engineer those things and trying to pull together concepts that learn from biology into that has always been a fascination of mine. As an undergrad, I was in the EE, CS world. Even then, I did some research projects around that. I worked in the industry for about 10 years designing chips, microprocessors, various kinds of ASICs, and then actually went back to school, quit my job, got a Ph.D. in neuroscience, computational neuroscience, to specifically understand what's the state of the art. What do we really understand about the brain? And are there concepts that we can take and bring back? Inspiration's always been we want to... We watch birds fly around. We want to figure out how to make something that flies. We extract those principles, and then build a plane. Don't necessarily want to build a bird. And so Nervana's really was the combination of all those experiences, bringing it together. Trying to push computation in a new a direction. Now, as part of Intel, we can really add a lot of fuel to that fire. I'm super excited to be part of Intel in that the technologies that we were developing can really proliferate and be applied to health care, can be applied to Internet, can be applied to every facet of our lives. And some of the examples that John mentioned are extremely exciting right now and these are things we can do today. And the generality of these solutions are just really going to hit every part of health care. I mean from a personal viewpoint, my whole family are MDs. I'm sort of the black sheep of the family. I don't have an MD. And it's always been kind of funny to me that knowledge is concentrated in a few individuals. 
Like you have a rare tumor or something like that, you need the guy who knows how to read this MRI. Why? Why is it like that? Can't we encapsulate that knowledge into a computer or into an algorithm, and democratize it. And the reason we couldn't do it is we just didn't know how. And now we're really getting to a point where we know how to do that. And so I want that capability to go to everybody. It'll bring the cost of healthcare down. It'll make all of us healthier. That affects everything about our society. So that's really what's exciting about it to me. >> That's great. So, as you heard, I'm Bob Rogers. I'm chief data scientist for analytics and artificial intelligence solutions at Intel. My mission is to put powerful analytics in the hands of every decision maker and when I think about Precision Medicine, decision makers are not just doctors and surgeons and nurses, but they're also case managers and care coordinators and probably most of all, patients. So the mission is really to put powerful analytics and AI capabilities in the hands of everyone in health care. It's a very complex world and we need tools to help us navigate it. So my background, I started with a Ph.D. in physics and I was computer modeling stuff, falling into super massive black holes. And there's a lot of applications for that in the real world. No, I'm kidding. (laughter) >> John: There will be, I'm sure. Yeah, one of these days. Soon as we have time travel. Okay so, I actually, about 1991, I was working on my post doctoral research, and I heard about neural networks, these things that could compute the way the brain computes. And so, I started doing some research on that. I wrote some papers and actually, it was an interesting story. The problem that we solved that got me really excited about neural networks, which have become deep learning, my office mate would come in. He was this young guy who was about to go off to grad school. He'd come in every morning. "I hate my project." Finally, after two weeks, what's your project? What's the problem? It turns out he had to circle these little fuzzy spots on these images from a telescope. So they were looking for the interesting things in a sky survey, and he had to circle them and write down their coordinates all summer. Anyone want to volunteer to do that? No? Yeah, he was very unhappy. So we took the first two weeks of data that he created doing his work by hand, and we trained an artificial neural network to do his summer project and finished it in about eight hours of computing. (crowd laughs) And so he was like yeah, this is amazing. I'm so happy. And we wrote a paper. I was the first author of course, because I was the senior guy at age 24. And he was second author. His first paper ever. He was very, very excited. So we have to fast forward about 20 years. His name popped up on the Internet. And so it caught my attention. He had just won the Nobel Prize in physics. (laughter) So that's where artificial intelligence will get you. (laughter) So thanks Naveen. Fast forwarding, I also developed some time series forecasting capabilities that allowed me to create a hedge fund that I ran for 12 years. After that, I got into health care, which really is the center of my passion. Applying health care to figuring out how to get all the data from all those siloed sources, put it into the cloud in a secure way, and analyze it so you can actually understand those cases that John was just talking about. 
How do you know that that person had had a splenectomy and that they needed to get that pneumovax? You need to be able to search all the data, so we used AI, natural language processing, machine learning, to do that and then two years ago, I was lucky enough to join Intel and, in the intervening time, people like Naveen actually thawed the AI winter and we're really in a spring of amazing opportunities with AI, not just in health care but everywhere, but of course, the health care applications are incredibly life saving and empowering so, excited to be here on this stage with you guys. >> I just want to cue off of your comment about the role of physics in AI and health care. So the field of microbiomics that I referred to earlier, bacteria in our gut. There's more bacteria in our gut than there are cells in our body. There's 100 times more DNA in that bacteria than there is in the human genome. And we're now discovering a couple hundred species of bacteria a year that have never been identified under a microscope just by their DNA. So it turns out the person who really catapulted the study and the science of microbiomics forward was an astrophysicist who did his Ph.D. in Steven Hawking's lab on the collision of black holes and then subsequently, put the other team in a virtual reality, and he developed the first super computing center and so how did he get an interest in microbiomics? He has the capacity to do high performance computing and the kind of advanced analytics that are required to look at a 100 times the volume of 3.2 billion base pairs of the human genome that are represented in the bacteria in our gut, and that has unleashed the whole science of microbiomics, which is going to really turn a lot of our assumptions of health and health care upside down. >> That's great, I mean, that's really transformational. So a lot of data. So I just wanted to let the audience know that we want to make this an interactive session, so I'll be asking for questions in a little bit, but I will start off with one question so that you can think about it. So I wanted to ask you, it looks like you've been thinking a lot about AI over the years. And I wanted to understand, even though AI's just really starting in health care, what are some of the new trends or the changes that you've seen in the last few years that'll impact how AI's being used going forward? >> So I'll start off. There was a paper published by a guy by the name of Tegmark at Harvard last summer that, for the first time, explained why neural networks are efficient beyond any mathematical model we predict. And the title of the paper's fun. It's called Deep Learning Versus Cheap Learning. So there were two sort of punchlines of the paper. One is is that the reason that mathematics doesn't explain the efficiency of neural networks is because there's a higher order of mathematics called physics. And the physics of the underlying data structures determined how efficient you could mine those data using machine learning tools. Much more so than any mathematical modeling. And so the second thing that was a reel from that paper is that the substrate of the data that you're operating on and the natural physics of those data have inherent levels of complexity that determine whether or not a 12th layer of neural net will get you where you want to go really fast, because when you do the modeling, for those math geeks in the audience, a factorial. 
So if there's 12 layers, there's 12 factorial permutations of different ways you could sequence the learning through those data. When you have 140 layers of a neural net, it's a much, much, much bigger number of permutations and so you end up being hardware-bound. And so, what Max Tegmark basically said is you can determine whether to do deep learning or cheap learning based upon the underlying physics of the data substrates you're operating on and have a good insight into how to optimize your hardware and software approach to that problem. >> So another way to put that is that neural networks represent the world in the way the world is sort of built. >> Exactly. >> It's kind of hierarchical. It's funny because, sort of in retrospect, like oh yeah, that kind of makes sense. But when you're thinking about it mathematically, we're like well, anything... The way a neural can represent any mathematical function, therfore, it's fully general. And that's the way we used to look at it, right? So now we're saying, well actually decomposing the world into different types of features that are layered upon each other is actually a much more efficient, compact representation of the world, right? I think this is actually, precisely the point of kind of what you're getting at. What's really exciting now is that what we were doing before was sort of building these bespoke solutions for different kinds of data. NLP, natural language processing. There's a whole field, 25 plus years of people devoted to figuring out features, figuring out what structures make sense in this particular context. Those didn't carry over at all to computer vision. Didn't carry over at all to time series analysis. Now, with neural networks, we've seen it at Nervana, and now part of Intel, solving customers' problems. We apply a very similar set of techniques across all these different types of data domains and solve them. All data in the real world seems to be hierarchical. You can decompose it into this hierarchy. And it works really well. Our brains are actually general structures. As a neuroscientist, you can look at different parts of your brain and there are differences. Something that takes in visual information, versus auditory information is slightly different but they're much more similar than they are different. So there is something invariant, something very common between all of these different modalities and we're starting to learn that. And this is extremely exciting to me trying to understand the biological machine that is a computer, right? We're figurig it out, right? >> One of the really fun things that Ray Chrisfall likes to talk about is, and it falls in the genre of biomimmicry, and how we actually replicate biologic evolution in our technical solutions so if you look at, and we're beginning to understand more and more how real neural nets work in our cerebral cortex. And it's sort of a pyramid structure so that the first pass of a broad base of analytics, it gets constrained to the next pass, gets constrained to the next pass, which is how information is processed in the brain. 
So we're discovering increasingly that what we've been evolving towards, in term of architectures of neural nets, is approximating the architecture of the human cortex and the more we understand the human cortex, the more insight we get to how to optimize neural nets, so when you think about it, with millions of years of evolution of how the cortex is structured, it shouldn't be a surprise that the optimization protocols, if you will, in our genetic code are profoundly efficient in how they operate. So there's a real role for looking at biologic evolutionary solutions, vis a vis technical solutions, and there's a friend of mine who worked with who worked with George Church at Harvard and actually published a book on biomimmicry and they wrote the book completely in DNA so if all of you have your home DNA decoder, you can actually read the book on your DNA reader, just kidding. >> There's actually a start up I just saw in the-- >> Read-Write DNA, yeah. >> Actually it's a... He writes something. What was it? (response from crowd member) Yeah, they're basically encoding information in DNA as a storage medium. (laughter) The company, right? >> Yeah, that same friend of mine who coauthored that biomimmicry book in DNA also did the estimate of the density of information storage. So a cubic centimeter of DNA can store an hexabyte of data. I mean that's mind blowing. >> Naveen: Highly done soon. >> Yeah that's amazing. Also you hit upon a really important point there, that one of the things that's changed is... Well, there are two major things that have changed in my perception from let's say five to 10 years ago, when we were using machine learning. You could use data to train models and make predictions to understand complex phenomena. But they had limited utility and the challenge was that if I'm trying to build on these things, I had to do a lot of work up front. It was called feature engineering. I had to do a lot of work to figure out what are the key attributes of that data? What are the 10 or 20 or 100 pieces of information that I should pull out of the data to feed to the model, and then the model can turn it into a predictive machine. And so, what's really exciting about the new generation of machine learning technology, and particularly deep learning, is that it can actually learn from example data those features without you having to do any preprogramming. That's why Naveen is saying you can take the same sort of overall approach and apply it to a bunch of different problems. Because you're not having to fine tune those features. So at the end of the day, the two things that have changed to really enable this evolution is access to more data, and I'd be curious to hear from you where you're seeing data come from, what are the strategies around that. So access to data, and I'm talking millions of examples. So 10,000 examples most times isn't going to cut it. But millions of examples will do it. And then, the other piece is the computing capability to actually take millions of examples and optimize this algorithm in a single lifetime. I mean, back in '91, when I started, we literally would have thousands of examples and it would take overnight to run the thing. So now in the world of millions, and you're putting together all of these combinations, the computing has changed a lot. I know you've made some revolutionary advances in that. But I'm curious about the data. Where are you seeing interesting sources of data for analytics? 
>> So I do some work in the genomics space and there are more viable permutations of the human genome than there are people who have ever walked the face of the earth. And the polygenic determination of a phenotypic expression translation, what are genome does to us in our physical experience in health and disease is determined by many, many genes and the interaction of many, many genes and how they are up and down regulated. And the complexity of disambiguating which 27 genes are affecting your diabetes and how are they up and down regulated by different interventions is going to be different than his. It's going to be different than his. And we already know that there's four or five distinct genetic subtypes of type II diabetes. So physicians still think there's one disease called type II diabetes. There's actually at least four or five genetic variants that have been identified. And so, when you start thinking about disambiguating, particularly when we don't know what 95 percent of DNA does still, what actually is the underlining cause, it will require this massive capability of developing these feature vectors, sometimes intuiting it, if you will, from the data itself. And other times, taking what's known knowledge to develop some of those feature vectors, and be able to really understand the interaction of the genome and the microbiome and the phenotypic data. So the complexity is high and because the variation complexity is high, you do need these massive members. Now I'm going to make a very personal pitch here. So forgive me, but if any of you have any role in policy at all, let me tell you what's happening right now. The Genomic Information Nondiscrimination Act, so called GINA, written by a friend of mine, passed a number of years ago, says that no one can be discriminated against for health insurance based upon their genomic information. That's cool. That should allow all of you to feel comfortable donating your DNA to science right? Wrong. You are 100% unprotected from discrimination for life insurance, long term care and disability. And it's being practiced legally today and there's legislation in the House, in mark up right now to completely undermine the existing GINA legislation and say that whenever there's another applicable statute like HIPAA, that the GINA is irrelevant, that none of the fines and penalties are applicable at all. So we need a ton of data to be able to operate on. We will not be getting a ton of data to operate on until we have the kind of protection we need to tell people, you can trust us. You can give us your data, you will not be subject to discrimination. And that is not the case today. And it's being further undermined. So I want to make a plea to any of you that have any policy influence to go after that because we need this data to help the understanding of human health and disease and we're not going to get it when people look behind the curtain and see that discrimination is occurring today based upon genetic information. >> Well, I don't like the idea of being discriminated against based on my DNA. Especially given how little we actually know. There's so much complexity in how these things unfold in our own bodies, that I think anything that's being done is probably childishly immature and oversimplifying. So it's pretty rough. >> I guess the translation here is that we're all unique. It's not just a Disney movie. (laughter) We really are. 
And I think one of the strengths that I'm seeing, kind of going back to the original point, of these new techniques is it's going across different data types. It will actually allow us to learn more about the uniqueness of the individual. It's not going to be just from one data source. They were collecting data from many different modalities. We're collecting behavioral data from wearables. We're collecting things from scans, from blood tests, from genome, from many different sources. The ability to integrate those into a unified picture, that's the important thing that we're getting toward now. That's what I think is going to be super exciting here. Think about it, right. I can tell you to visual a coin, right? You can visualize a coin. Not only do you visualize it. You also know what it feels like. You know how heavy it is. You have a mental model of that from many different perspectives. And if I take away one of those senses, you can still identify the coin, right? If I tell you to put your hand in your pocket, and pick out a coin, you probably can do that with 100% reliability. And that's because we have this generalized capability to build a model of something in the world. And that's what we need to do for individuals is actually take all these different data sources and come up with a model for an individual and you can actually then say what drug works best on this. What treatment works best on this? It's going to get better with time. It's not going to be perfect, because this is what a doctor does, right? A doctor who's very experienced, you're a practicing physician right? Back me up here. That's what you're doing. You basically have some categories. You're taking information from the patient when you talk with them, and you're building a mental model. And you apply what you know can work on that patient, right? >> I don't have clinic hours anymore, but I do take care of many friends and family. (laughter) >> You used to, you used to. >> I practiced for many years before I became a full-time geek. >> I thought you were a recovering geek. >> I am. (laughter) I do more policy now. >> He's off the wagon. >> I just want to take a moment and see if there's anyone from the audience who would like to ask, oh. Go ahead. >> We've got a mic here, hang on one second. >> I have tons and tons of questions. (crosstalk) Yes, so first of all, the microbiome and the genome are really complex. You already hit about that. Yet most of the studies we do are small scale and we have difficulty repeating them from study to study. How are we going to reconcile all that and what are some of the technical hurdles to get to the vision that you want? >> So primarily, it's been the cost of sequencing. Up until a year ago, it's $1000, true cost. Now it's $100, true cost. And so that barrier is going to enable fairly pervasive testing. It's not a real competitive market becaue there's one sequencer that is way ahead of everybody else. So the price is not $100 yet. The cost is below $100. So as soon as there's competition to drive the cost down, and hopefully, as soon as we all have the protection we need against discrimination, as I mentioned earlier, then we will have large enough sample sizes. And so, it is our expectation that we will be able to pool data from local sources. I chair the e-health work group at the Global Alliance for Genomics and Health which is working on this very issue. 
And rather than pooling all the data into a single, common repository, the strategy, and we're developing our five-year plan in a month in London, but the goal is to have a federation of essentially credentialed data enclaves. That's a formal method. HHS already does that so you can get credentialed to search all the data that Medicare has on people that's been deidentified according to HIPPA. So we want to provide the same kind of service with appropriate consent, at an international scale. And there's a lot of nations that are talking very much about data nationality so that you can't export data. So this approach of a federated model to get at data from all the countries is important. The other thing is a block-chain technology is going to be very profoundly useful in this context. So David Haussler of UC Santa Cruz is right now working on a protocol using an open block-chain, public ledger, where you can put out. So for any typical cancer, you may have a half dozen, what are called sematic variance. Cancer is a genetic disease so what has mutated to cause it to behave like a cancer? And if we look at those biologically active sematic variants, publish them on a block chain that's public, so there's not enough data there to reidentify the patient. But if I'm a physician treating a woman with breast cancer, rather than say what's the protocol for treating a 50-year-old woman with this cell type of cancer, I can say show me all the people in the world who have had this cancer at the age of 50, wit these exact six sematic variants. Find the 200 people worldwide with that. Ask them for consent through a secondary mechanism to donate everything about their medical record, pool that information of the core of 200 that exactly resembles the one sitting in front of me, and find out, of the 200 ways they were treated, what got the best results. And so, that's the kind of future where a distributed, federated architecture will allow us to query and obtain a very, very relevant cohort, so we can basically be treating patients like mine, sitting right in front of me. Same thing applies for establishing research cohorts. There's some very exciting stuff at the convergence of big data analytics, machine learning, and block chaining. >> And this is an area that I'm really excited about and I think we're excited about generally at Intel. They actually have something called the Collaborative Cancer Cloud, which is this kind of federated model. We have three different academic research centers. Each of them has a very sizable and valuable collection of genomic data with phenotypic annotations. So you know, pancreatic cancer, colon cancer, et cetera, and we've actually built a secure computing architecture that can allow a person who's given the right permissions by those organizations to ask a specific question of specific data without ever sharing the data. So the idea is my data's really important to me. It's valuable. I want us to be able to do a study that gets the number from the 20 pancreatic cancer patients in my cohort, up to the 80 that we have in the whole group. But I can't do that if I'm going to just spill my data all over the world. And there are HIPAA and compliance reasons for that. There are business reasons for that. So what we've built at Intel is this platform that allows you to do different kinds of queries on this genetic data. And reach out to these different sources without sharing it. And then, the work that I'm really involved in right now and that I'm extremely excited about... 
This also touches on something that both of you said is it's not sufficient to just get the genome sequences. You also have to have the phenotypic data. You have to know what cancer they've had. You have to know that they've been treated with this drug and they've survived for three months or that they had this side effect. That clinical data also needs to be put together. It's owned by other organizations, right? Other hospitals. So the broader generalization of the Collaborative Cancer Cloud is something we call the data exchange. And it's a misnomer in a sense that we're not actually exchanging data. We're doing analytics on aggregated data sets without sharing it. But it really opens up a world where we can have huge populations and big enough amounts of data to actually train these models and draw the thread in. Of course, that really then hits home for the techniques that Nervana is bringing to the table, and of course-- >> Stanford's one of your academic medical centers? >> Not for that Collaborative Cancer Cloud. >> The reason I mentioned Standford is because the reason I'm wearing this FitBit is because I'm a research subject at Mike Snyder's, the chair of genetics at Stanford, IPOP, intrapersonal omics profile. So I was fully sequenced five years ago and I get four full microbiomes. My gut, my mouth, my nose, my ears. Every three months and I've done that for four years now. And about a pint of blood. And so, to your question of the density of data, so a lot of the problem with applying these techniques to health care data is that it's basically a sparse matrix and there's a lot of discontinuities in what you can find and operate on. So what Mike is doing with the IPOP study is much the same as you described. Creating a highly dense longitudinal set of data that will help us mitigate the sparse matrix problem. (low volume response from audience member) Pardon me. >> What's that? (low volume response) (laughter) >> Right, okay. >> John: Lost the school sample. That's got to be a new one I've heard now. >> Okay, well, thank you so much. That was a great question. So I'm going to repeat this and ask if there's another question. You want to go ahead? >> Hi, thanks. So I'm a journalist and I report a lot on these neural networks, a system that's beter at reading mammograms than your human radiologists. Or a system that's better at predicting which patients in the ICU will get sepsis. These sort of fascinating academic studies that I don't really see being translated very quickly into actual hospitals or clinical practice. Seems like a lot of the problems are regulatory, or liability, or human factors, but how do you get past that and really make this stuff practical? >> I think there's a few things that we can do there and I think the proof points of the technology are really important to start with in this specific space. In other places, sometimes, you can start with other things. But here, there's a real confidence problem when it comes to health care, and for good reason. We have doctors trained for many, many years. School and then residencies and other kinds of training. Because we are really, really conservative with health care. So we need to make sure that technology's well beyond just the paper, right? These papers are proof points. They get people interested. They even fuel entire grant cycles sometimes. And that's what we need to happen. It's just an inherent problem, its' going to take a while. 
To get those things to a point where it's like well, I really do trust what this is saying. And I really think it's okay to now start integrating that into our standard of care. I think that's where you're seeing it. It's frustrating for all of us, believe me. I mean, like I said, I think personally one of the biggest things, I want to have an impact. Like when I go to my grave, is that we used machine learning to improve health care. We really do feel that way. But it's just not something we can do very quickly and as a business person, I don't actually look at those use cases right away because I know the cycle is just going to be longer. >> So to your point, the FDA, for about four years now, has understood that the process that has been given to them by their board of directors, otherwise known as Congress, is broken. And so they've been very actively seeking new models of regulation and what's really forcing their hand is regulation of devices and software because, in many cases, there are black box aspects of that and there's a black box aspect to machine learning. Historically, Intel and others are making inroads into providing some sort of traceability and transparency into what happens in that black box rather than say, overall we get better results but once in a while we kill somebody. Right? So there is progress being made on that front. And there's a concept that I like to use. Everyone knows Ray Kurzweil's book The Singularity Is Near? Well, I like to think that diadarity is near. And the diadarity is where you have human transparency into what goes on in the black box and so maybe Bob, you want to speak a little bit about... You mentioned that, in a prior discussion, that there's some work going on at Intel there. >> Yeah, absolutely. So we're working with a number of groups to really build tools that allow us... In fact Naveen probably can talk in even more detail than I can, but there are tools that allow us to actually interrogate machine learning and deep learning systems to understand, not only how they respond to a wide variety of situations but also where are there biases? I mean, one of the things that's shocking is that if you look at the clinical studies that our drug safety rules are based on, 50 year old white guys are the peak of that distribution, which I don't see any problem with that, but some of you out there might not like that if you're taking a drug. So yeah, we want to understand what are the biases in the data, right? And so, there's some new technologies. There's actually some very interesting data-generative technologies. And this is something I'm also curious what Naveen has to say about, that you can generate from small sets of observed data, much broader sets of varied data that help probe and fill in your training for some of these systems that are very data dependent. So that takes us to a place where we're going to start to see deep learning systems generating data to train other deep learning systems. And they start to sort of go back and forth and you start to have some very nice ways to, at least, expose the weakness of these underlying technologies. >> And that feeds back to your question about regulatory oversight of this. And there's the fascinating, but little known origin of why very few women are in clinical studies. Thalidomide causes birth defects. So rather than say pregnant women can't be enrolled in drug trials, they said any woman who is at risk of getting pregnant cannot be enrolled. 
So there was actually a scientific meritorious argument back in the day when they really didn't know what was going to happen post-thalidomide. So it turns out that the adverse, unintended consequence of that decision was we don't have data on women and we know in certain drugs, like Xanax, that the metabolism is so much slower, that the typical dosing of Xanax is women should be less than half of that for men. And a lot of women have had very serious adverse effects by virtue of the fact that they weren't studied. So the point I want to illustrate with that is that regulatory cycles... So people have known for a long time that was like a bad way of doing regulations. It should be changed. It's only recently getting changed in any meaningful way. So regulatory cycles and legislative cycles are incredibly slow. The rate of exponential growth in technology is exponential. And so there's impedance mismatch between the cycle time for regulation cycle time for innovation. And what we need to do... I'm working with the FDA. I've done four workshops with them on this very issue. Is that they recognize that they need to completely revitalize their process. They're very interested in doing it. They're not resisting it. People think, oh, they're bad, the FDA, they're resisting. Trust me, there's nobody on the planet who wants to revise these review processes more than the FDA itself. And so they're looking at models and what I recommended is global cloud sourcing and the FDA could shift from a regulatory role to one of doing two things, assuring the people who do their reviews are competent, and assuring that their conflicts of interest are managed, because if you don't have a conflict of interest in this very interconnected space, you probably don't know enough to be a reviewer. So there has to be a way to manage the conflict of interest and I think those are some of the keypoints that the FDA is wrestling with because there's type one and type two errors. If you underregulate, you end up with another thalidomide and people born without fingers. If you overregulate, you prevent life saving drugs from coming to market. So striking that balance across all these different technologies is extraordinarily difficult. If it were easy, the FDA would've done it four years ago. It's very complicated. >> Jumping on that question, so all three of you are in some ways entrepreneurs, right? Within your organization or started companies. And I think it would be good to talk a little bit about the business opportunity here, where there's a huge ecosystem in health care, different segments, biotech, pharma, insurance payers, etc. Where do you see is the ripe opportunity or industry, ready to really take this on and to make AI the competitive advantage. >> Well, the last question also included why aren't you using the result of the sepsis detection? We do. There were six or seven published ways of doing it. We did our own data, looked at it, we found a way that was superior to all the published methods and we apply that today, so we are actually using that technology to change clinical outcomes. As far as where the opportunities are... So it's interesting. 
Because if you look at what's going to be here in three years, we're not going to be using those big data analytics models for sepsis that we are deploying today, because we're just going to be getting a tiny aliquot of blood, looking for the DNA or RNA of any potential infection and we won't have to infer that there's a bacterial infection from all these other ancillary, secondary phenomenon. We'll see if the DNA's in the blood. So things are changing so fast that the opportunities that people need to look for are what are generalizable and sustainable kind of wins that are going to lead to a revenue cycle that are justified, a venture capital world investing. So there's a lot of interesting opportunities in the space. But I think some of the biggest opportunities relate to what Bob has talked about in bringing many different disparate data sources together and really looking for things that are not comprehensible in the human brain or in traditional analytic models. >> I think we also got to look a little bit beyond direct care. We're talking about policy and how we set up standards, these kinds of things. That's one area. That's going to drive innovation forward. I completely agree with that. Direct care is one piece. How do we scale out many of the knowledge kinds of things that are embedded into one person's head and get them out to the world, democratize that. Then there's also development. The underlying technology's of medicine, right? Pharmaceuticals. The traditional way that pharmaceuticals is developed is actually kind of funny, right? A lot of it was started just by chance. Penicillin, a very famous story right? It's not that different today unfortunately, right? It's conceptually very similar. Now we've got more science behind it. We talk about domains and interactions, these kinds of things but fundamentally, the problem is what we in computer science called NP hard, it's too difficult to model. You can't solve it analytically. And this is true for all these kinds of natural sorts of problems by the way. And so there's a whole field around this, molecular dynamics and modeling these sorts of things, that are actually being driven forward by these AI techniques. Because it turns out, our brain doesn't do magic. It actually doesn't solve these problems. It approximates them very well. And experience allows you to approximate them better and better. Actually, it goes a little bit to what you were saying before. It's like simulations and forming your own networks and training off each other. There are these emerging dynamics. You can simulate steps of physics. And you come up with a system that's much too complicated to ever solve. Three pool balls on a table is one such system. It seems pretty simple. You know how to model that, but it actual turns out you can't predict where a balls going to be once you inject some energy into that table. So something that simple is already too complex. So neural network techniques actually allow us to start making those tractable. These NP hard problems. And things like molecular dynamics and actually understanding how different medications and genetics will interact with each other is something we're seeing today. And so I think there's a huge opportunity there. We've actually worked with customers in this space. And I'm seeing it. Like Rosch is acquiring a few different companies in space. They really want to drive it forward, using big data to drive drug development. It's kind of counterintuitive. 
I never would've thought it had I not seen it myself. >> And there's a big related challenge. Because in personalized medicine, there's smaller and smaller cohorts of people who will benefit from a drug that still takes two billion dollars on average to develop. That is unsustainable. So there's an economic imperative of overcoming the cost and the cycle time for drug development. >> I want to take a go at this question a little bit differently, thinking about not so much where are the industry segments that can benefit from AI, but what are the kinds of applications that I think are most impactful. So if this is what a skilled surgeon needs to know at a particular time to care properly for a patient, this is where most, this area here, is where most surgeons are. They are close to the maximum knowledge and ability to assimilate as they can be. So it's possible to build complex AI that can pick up on that one little thing and move them up to here. But it's not a gigantic accelerator, amplifier of their capability. But think about other actors in health care. I mentioned a couple of them earlier. Who do you think the least trained actor in health care is? >> John: Patients. >> Yes, the patients. The patients are really very poorly trained, including me. I'm abysmal at figuring out who to call and where to go. >> Naveen: You know as much the doctor right? (laughing) >> Yeah, that's right. >> My doctor friends always hate that. Know your diagnosis, right? >> Yeah, Dr. Google knows. So the opportunities that I see that are really, really exciting are when you take an AI agent, like sometimes I like to call it contextually intelligent agent, or a CIA, and apply it to a problem where a patient has a complex future ahead of them that they need help navigating. And you use the AI to help them work through. Post operative. You've got PT. You've got drugs. You've got to be looking for side effects. An agent can actually help you navigate. It's like your own personal GPS for health care. So it's giving you the inforamation that you need about you for your care. That's my definition of Precision Medicine. And it can include genomics, of course. But it's much bigger. It's that broader picture and I think that a sort of agent way of thinking about things and filling in the gaps where there's less training and more opportunity, is very exciting. >> Great start up idea right there by the way. >> Oh yes, right. We'll meet you all out back for the next start up. >> I had a conversation with the head of the American Association of Medical Specialties just a couple of days ago. And what she was saying, and I'm aware of this phenomenon, but all of the medical specialists are saying, you're killing us with these stupid board recertification trivia tests that you're giving us. So if you're a cardiologist, you have to remember something that happens in one in 10 million people, right? And they're saying that irrelevant anymore, because we've got advanced decision support coming. We have these kinds of analytics coming. Precisely what you're saying. So it's human augmentation of decision support that is coming at blazing speed towards health care. So in that context, it's much more important that you have a basic foundation, you know how to think, you know how to learn, and you know where to look. So we're going to be human-augmented learning systems much more so than in the past. And so the whole recertification process is being revised right now. (inaudible audience member speaking) Speak up, yeah. 
(person speaking) >> What makes it fathomable is that you can-- (audience member interjects inaudibly) >> Sure. She was saying that our brain is really complex and large and even our brains don't know how our brains work, so... are there ways to-- >> What hope do we have kind of thing? (laughter) >> It's a metaphysical question. >> It circles all the way down, exactly. It's a great quote. I mean basically, you can decompose every system. Every complicated system can be decomposed into simpler, emergent properties. You lose something perhaps with each of those, but you get enough to actually understand most of the behavior. And that's really how we understand the world. And that's what we've learned in the last few years what neural network techniques can allow us to do. And that's why our brain can understand our brain. (laughing) >> Yeah, I'd recommend reading Chris Farley's last book because he addresses that issue in there very elegantly. >> Yeah we're seeing some really interesting technologies emerging right now where neural network systems are actually connecting other neural network systems in networks. You can see some very compelling behavior because one of the things I like to distinguish AI versus traditional analytics is we used to have question-answering systems. I used to query a database and create a report to find out how many widgets I sold. Then I started using regression or machine learning to classify complex situations from this is one of these and that's one of those. And then as we've moved more recently, we've got these AI-like capabilities like being able to recognize that there's a kitty in the photograph. But if you think about it, if I were to show you a photograph that happened to have a cat in it, and I said, what's the answer, you'd look at me like, what are you talking about? I have to know the question. So where we're cresting with these connected sets of neural systems, and with AI in general, is that the systems are starting to be able to, from the context, understand what the question is. Why would I be asking about this picture? I'm a marketing guy, and I'm curious about what Legos are in the thing or what kind of cat it is. So it's being able to ask a question, and then take these question-answering systems, and actually apply them so that's this ability to understand context and ask questions that we're starting to see emerge from these more complex hierarchical neural systems. >> There's a person dying to ask a question. >> Sorry. You have hit on several different topics that all coalesce together. You mentioned personalized models. You mentioned AI agents that could help you as you're going through a transitionary period. You mentioned data sources, especially across long time periods. Who today has access to enough data to make meaningful progress on that, not just when you're dealing with an issue, but day-to-day improvement of your life and your health? >> Go ahead, great question. >> That was a great question. And I don't think we have a good answer to it. (laughter) I'm sure John does. Well, I think every large healthcare organization and various healthcare consortiums are working very hard to achieve that goal. The problem remains in creating semantic interoperatability. So I spent a lot of my career working on semantic interoperatability. 
And the problem is that if you don't have well-defined, or self-defined data, and if you don't have well-defined and documented metadata, and you start operating on it, it's real easy to reach false conclusions and I can give you a classic example. It's well known, with hundreds of studies looking at when you give an antibiotic before surgery and how effective it is in preventing a post-op infection. Simple question, right? So most of the literature done prosectively was done in institutions where they had small sample sizes. So if you pool that, you get a little bit more noise, but you get a more confirming answer. What was done at a very large, not my own, but a very large institution... I won't name them for obvious reasons, but they pooled lots of data from lots of different hospitals, where the data definitions and the metadata were different. Two examples. When did they indicate the antibiotic was given? Was it when it was ordered, dispensed from the pharmacy, delivered to the floor, brought to the bedside, put in the IV, or the IV starts flowing? Different hospitals used a different metric of when it started. When did surgery occur? When they were wheeled into the OR, when they were prepped and drapped, when the first incision occurred? All different. And they concluded quite dramatically that it didn't matter when you gave the pre-op antibiotic and whether or not you get a post-op infection. And everybody who was intimate with the prior studies just completely ignored and discounted that study. It was wrong. And it was wrong because of the lack of commonality and the normalization of data definitions and metadata definitions. So because of that, this problem is much more challenging than you would think. If it were so easy as to put all these data together and operate on it, normalize and operate on it, we would've done that a long time ago. It's... Semantic interoperatability remains a big problem and we have a lot of heavy lifting ahead of us. I'm working with the Global Alliance, for example, of Genomics and Health. There's like 30 different major ontologies for how you represent genetic information. And different institutions are using different ones in different ways in different versions over different periods of time. That's a mess. >> Our all those issues applicable when you're talking about a personalized data set versus a population? >> Well, so N of 1 studies and single-subject research is an emerging field of statistics. So there's some really interesting new models like step wedge analytics for doing that on small sample sizes, recruiting people asynchronously. There's single-subject research statistics. You compare yourself with yourself at a different point in time, in a different context. So there are emerging statistics to do that and as long as you use the same sensor, you won't have a problem. But people are changing their remote sensors and you're getting different data. It's measured in different ways with different sensors at different normalization and different calibration. So yes. It even persists in the N of 1 environment. >> Yeah, you have to get started with a large N that you can apply to the N of 1. I'm actually going to attack your question from a different perspective. So who has the data? The millions of examples to train a deep learning system from scratch. It's a very limited set right now. Technology such as the Collaborative Cancer Cloud and The Data Exchange are definitely impacting that and creating larger and larger sets of critical mass. 
And again, not withstanding the very challenging semantic interoperability questions. But there's another opportunity Kay asked about what's changed recently. One of the things that's changed in deep learning is that we now have modules that have been trained on massive data sets that are actually very smart as certain kinds of problems. So, for instance, you can go online and find deep learning systems that actually can recognize, better than humans, whether there's a cat, dog, motorcycle, house, in a photograph. >> From Intel, open source. >> Yes, from Intel, open source. So here's what happens next. Because most of that deep learning system is very expressive. That combinatorial mixture of features that Naveen was talking about, when you have all these layers, there's a lot of features there. They're actually very general to images, not just finding cats, dogs, trees. So what happens is you can do something called transfer learning, where you take a small or modest data set and actually reoptimize it for your specific problem very, very quickly. And so we're starting to see a place where you can... On one end of the spectrum, we're getting access to the computing capabilities and the data to build these incredibly expressive deep learning systems. And over here on the right, we're able to start using those deep learning systems to solve custom versions of problems. Just last weekend or two weekends ago, in 20 minutes, I was able to take one of those general systems and create one that could recognize all different kinds of flowers. Very subtle distinctions, that I would never be able to know on my own. But I happen to be able to get the data set and literally, it took 20 minutes and I have this vision system that I could now use for a specific problem. I think that's incredibly profound and I think we're going to see this spectrum of wherever you are in your ability to get data and to define problems and to put hardware in place to see really neat customizations and a proliferation of applications of this kind of technology. >> So one other trend I think, I'm very hopeful about it... So this is a hard problem clearly, right? I mean, getting data together, formatting it from many different sources, it's one of these things that's probably never going to happen perfectly. But one trend I think that is extremely hopeful to me is the fact that the cost of gathering data has precipitously dropped. Building that thing is almost free these days. I can write software and put it on 100 million cell phones in an instance. You couldn't do that five years ago even right? And so, the amount of information we can gain from a cell phone today has gone up. We have more sensors. We're bringing online more sensors. People have Apple Watches and they're sending blood data back to the phone, so once we can actually start gathering more data and do it cheaper and cheaper, it actually doesn't matter where the data is. I can write my own app. I can gather that data and I can start driving the correct inferences or useful inferences back to you. So that is a positive trend I think here and personally, I think that's how we're going to solve it, is by gathering from that many different sources cheaply. >> Hi, my name is Pete. I've very much enjoyed the conversation so far but I was hoping perhaps to bring a little bit more focus into Precision Medicine and ask two questions. Number one, how have you applied the AI technologies as you're emerging so rapidly to your natural language processing? 
I'm particularly interested in, if you look at things like Amazon Echo or Siri, or the other voice recognition systems that are based on AI, they've just become incredibly accurate and I'm interested in specifics about how I might use technology like that in medicine. So where would I find a medical nomenclature and perhaps some reference to a back end that works that way? And the second thing is, what specifically is Intel doing, or making available? You mentioned some open source stuff on cats and dogs and stuff but I'm the doc, so I'm looking at the medical side of that. What are you guys providing that would allow us who are kind of geeks on the software side, as well as being docs, to experiment a little bit more thoroughly with AI technology? Google has a free AI toolkit. Several other people have come out with free AI toolkits in order to accelerate that. There's special hardware now, with graphics and other processors hitting amazing speeds. And so I was wondering, where do I go in Intel to find some of those tools and perhaps learn a bit about the fantastic work that you guys are already doing at Kaiser? >> Let me take that first part and then we'll be able to talk about the MD part. So in terms of technology, this is what's extremely exciting now about what Intel is focusing on. We're providing those pieces. So you can actually assemble and build the application. How you build that application specific for MDs and the use cases is up to you or the one who's building out the application. But we're going to power that technology from multiple perspectives. So Intel is already the main force behind the data center, right? Cloud computing, all this is already Intel. We're making that extremely amenable to AI and setting the standard for AI in the future, so we can do that through a number of different mechanisms. For somebody who wants to develop an application quickly, we have hosted solutions. Intel Nervana is kind of the brand for these kinds of things. Hosted solutions will get you going very quickly. Once you get to a certain level of scale, where costs start making more sense, things can be bought on premise. We're supplying that. We're also supplying software that makes that transition essentially free. Then taking those solutions that you develop in the cloud, or develop in the data center, and actually deploying them on device. You want to write something on your smartphone or PC or whatever. We're actually providing those hooks as well, so we want to make it very easy for developers to take these pieces and actually build solutions out of them quickly so you probably don't even care what hardware it's running on. You're like, here's my data set, this is what I want to do. Train it, make it work. Go fast. Make my developers efficient. That's all you care about, right? And that's what we're doing. We're taking it from that point: how do we best do that? We're going to provide those technologies. In the next couple of years, there's going to be a lot of new stuff coming from Intel. >> Do you want to talk about AI Academy as well? >> Yeah, that's a great segue there. In addition to this, we have an entire set of tutorials and other online resources and things we're going to be bringing into the academic world for people to get going quickly. So that's not just enabling them on our tools, but also just general concepts. What is a neural network? How does it work? How does it train?
All of these things are available now and we've made a nice, digestible class format that you can actually go and play with. >> Let me give a couple of quick answers in addition to the great answers already. So you're asking why can't we use medical terminology and do what Alexa does? Well, you may not be aware of this, but Andrew Ng, who was the AI guy at Google, who was recruited by Google; they have a medical chatbot in China today. I don't speak Chinese. I haven't been able to use it yet. There are two similar initiatives in this country that I know of. There's probably a dozen more in stealth mode. But Lumiata and Health Cap are doing chatbots for health care today, using medical terminology. You have the compound problem of semantic normalization within a language, compounded across languages. I've done a lot of work with an international organization called SNOMED, which translates medical terminology. So you're aware of that. We can talk offline if you want, because I'm pretty deep into the semantic space. >> Go google Intel Nervana and you'll see all the websites there. It's intel.com/ai or nervanasys.com. >> Okay, great. Well this has been fantastic. I want to, first of all, thank all the people here for coming and asking great questions. I also want to thank our fantastic panelists today. (applause) >> Thanks, everyone. >> Thank you. >> And lastly, I just want to share one bit of information. We will have more discussions on AI next Tuesday at 9:30 AM. Diane Bryant, who is our general manager of the Data Center Group, will be here to do a keynote. So I hope you all get to join that. Thanks for coming. (applause) (light electronic music)
AI for Good Panel - Autonomous World | SXSW 2017
>> Welcome everyone. Thank you for coming to the Intel AI lounge and joining us here for this Autonomous World event. My name is Jack. I'm the chief architect of our autonomous driving solutions at Intel and I'm very happy to be here and to be joined by an esteemed panel of colleagues who are joining to, I hope, engage you all in dialogue and discussion. There will be time for questions as well, so keep your questions in mind. Jot them down so you can ask them later. So first, let me introduce the panel. Next to me we have Michelle, who's the co-founder and CEO of Fine Mind. She just did an interview here a little while ago. Fine Mind is a company that provides a technology platform for retailers and brands that uses artificial intelligence as the heart of the experiences that her company's technology provides. Joe from Intel is the head of partnerships and acquisitions for artificial intelligence and software technologies. He participated in the recent acquisition of Movidius, a computer vision company that Intel recently acquired, and is involved in a lot of smart city activities as well. And then finally, Sarush, who is a data scientist by training but now runs JDA Labs, which is researching emerging technologies and their application in the supply chain worldwide. So at the end of the day, the Internet of Things and artificial intelligence really promise to improve our lives in quite incredible ways and change the way that we live and work. Oftentimes the first thing that we think about when we think about AI is Skynet, but we at Intel believe in AI for good and that there's a lot of things that can happen to improve the way people live, work, and enjoy life. So as things join the Internet, as things become connected, smart, and automated, artificial intelligence is really going to be at the heart of those new experiences. So as I said, my role is the architect for autonomous driving. It's a common example when people think about artificial intelligence, because what we're trying to do is replace a human brain with a machine brain, which means we need to endow that machine with intelligent thoughts, contexts, experiences. All of these things that sort of make us human. So computer vision is the space, obviously, with cameras in your car that people often think about, but it's actually more complicated than that. How many of us have been in a situation on a two-lane road, maybe there's a car coming towards us, there's a road off to the right, and you sort of sense, "You know what? That car might turn in front of me." There's no signal. There's no real physical cue, but just something about what that driver's doing, where they're looking, tells us. So what do we do? We take our foot off the accelerator. We maybe hover it over the brake, just in case, right? But that's intelligence that we take for granted through years and years and years of driving experience that tells us something interesting is happening there. And so that's the challenge that we face in terms of how to bring that level of human intelligence into machines to make our lives better and richer. So enough about automated vehicles though, let's talk to our panelists about some of the areas in which they have expertise. So first for Michelle, I'll ask... Many of us probably buy stuff online every day, every week, every hour, hourly delivery now. So a lot has been written about the death of traditional retail experiences.
How will artificial intelligence and the technology that your company has rejuvenate that retail experience, whether it be online or in the traditional brick and mortar store? >> Yeah, excuse me. So one of the things that I think is a common misconception: you hear about the death of the brick and mortar store, the growth of e-commerce. It's really that e-commerce is beating brick and mortar in growth only, and still over 90% of the world's commerce is done in physical brick and mortar stores. So e-commerce, while it has the growth, has a really long way to go and I think one of the things that's going to be really hard to replace is the very human element of interaction and connection that you get by going to a store. So just because a robot named Pepper comes up to you and asks you some questions, they might get you the answer you need faster and maybe more efficiently, but I think as humans we crave interaction, and shopping for certain products especially is an experience better enjoyed in person with other people, whether that's an associate in the store or people you come with to the store to enjoy that experience with you. So I think artificial intelligence can help it be a more frictionless experience, whether you're in store or online, to get you from point A to buying the thing you need faster, but I don't think that it's going to ever completely replace the joy that we get by physically going out into the world and interacting with other people to buy products. >> You said something really profound. You said that the real revolution for artificial intelligence in retail will be invisible. What did you mean by that? >> Yeah, so right now I think that most of the artificial intelligence that's being applied in the retail space is actually not something that shoppers like you and I see when we're on a website or when we're in the store. It's actually happening behind the scenes. It's happening to dynamically change the webpage to show you different stuff. It's happening further up the supply chain, right? With how the products are getting manufactured, put together, packaged, shipped, delivered to you, and that efficiency is just helping retailers be smarter and more effective with their budgets. And so, as they can save money in the supply chain, as they can sell more product with less work, they can reinvest in experience, they can reinvest in the brand, they can reinvest in the quality of the products, so we might start noticing those things change, but you won't actually know that that has anything to do with artificial intelligence, because it's not always a robot that's rolling up to you in an aisle. >> So you mentioned the supply chain. That's something that we hear about a lot, but frankly for most of us, I think it's very hard to understand what exactly that means, so could you educate us a bit on what exactly is the supply chain and how is artificial intelligence being applied to improve it? >> Sure, sure. So for a lot of us, supply chain is the term that we picked up when we went to school or we read about it every so often, but we're not that far away from it. It is in fact a key part of what Michelle calls the invisible part of one's experience. So when you go to a store and you're buying a pair of shoes or you're picking up a box of cereal, how often do we think about, "How did it ever make its way here?" Or its constituent components? They probably came from multiple countries and so they had to be manufactured. They had to be assembled in these plants.
They had to then be moved, either through an ocean vessel or through trucks. They probably have gone through multiple warehouses and distribution centers and then finally into the store. And what do we see? We want to make sure that when I go to pick up my favorite brand of cereal, it better be there. And so, one of the things where AI is going to help, and we're doing a lot of active work in this, is in the notion of the self-learning supply chain. And what that means is really bringing in these various assets and actors of the supply chain. First of all, through IoT and others, generating the data, obviously connecting them, and through AI driving the intelligence, so that I can dynamically figure out the fact that the ocean vessel that left China on its way to Long Beach has been delayed by 24 hours. What does that mean when you go to a Foot Locker to buy your new pair of shoes? Can I come up with alternate sourcing decisions? So it's not just predicting. It's prescribing and recommending as well. So behind the scenes, it's generating a lot of the data, connecting a lot of these actors, and then really deriving the smarts. That's what the self-learning supply chain is all about. >> Are supply chains always international or can they be local as well? >> Definitely local as well. I think what we've seen over the last decades, it's kind of gotten more and more global, but a lot of the supply chain can really just be within the store as well. You'd be surprised at how often retailers do not know where their product is. Even, is it in the front of the store? Is it in the back of the store? Is it in the fitting room? Even that local information is not really available. So to have sensors to discover where things are and to really provide that efficiency, which right now doesn't exist, is a key part of what we're doing. >> So Joe, as you look at companies out there to partner or potentially acquire, do you tend to see technologies that are very domain specific for retail or supply chain or do you see technologies that could bridge multiple different domains in terms of the experiences we could enjoy? >> Yeah, definitely. So both. A lot of infant technologies start out in very niche use cases, but then there are technologies that are pervasive across multiple geographies and multiple markets. So, smart cities is a good way to look at that. So let's level set really quick on smart cities and how we think about that. I have a little sheet here to help me. Alright, so, if anybody here has played Sim City before, you have your little city, that's the real world that sits here, okay? So this is reality and you have little buildings and cars and they all travel around and you have people walking around with cell phones. And what's happening is as we develop smart cities, we're putting sensors everywhere. We're putting them around utilities, energy, water. They're in our phones. We have cameras and we have audio sensors in our phones. We're placing these on light poles, which are existing, sustained power points around the city. So we have all these different sensors and they're not just cameras and microphones, but they're particulate sensors. They're able to do environmental monitoring and things like that. And so, what we have is we have this physical world with all these sensors here.
And then what we have is we've created basically this virtual world that has a great memory because it has all the data from all the sensors, and those sensors really act as ties, if you think of it like a quilt, tying a quilt together. You bring it all together and everywhere you have a stitch, you're stitching that virtual world on top of the physical world and that just enables incredible amounts of innovation and creation for developers, for entrepreneurs, to do whatever they want to do to create and solve specific problems. So what really makes that possible is communications, connectivity. So that's where 5G comes in. So with 5G it's not just a faster form of connectivity. It's new infrastructure. It's new communication. It includes multiple types of communication and connectivity. And what it allows is for all those little sensors to talk to each other. So the camera on the light pole can talk to the vehicle driving by or the sensor on the light pole. And so you start to connect everything and that's really where artificial intelligence can now come in and sense what's going on. It can then reason, which is neat, to have a computer or some sort of algorithm that actually reasons based on a situation that's happening in real time. And it acts on that, but then you can iterate on that or you can adapt that in the future. So if we think of an actual use case, we'll think of a camera on a light post that observes an accident. Well it's programmed to automatically notify emergency services that there's been an accident. But it knows the difference between a fender bender and an actual major crash where we need to send an ambulance or maybe multiple firetrucks. And then you can create iterations and it learns to become smarter. Let's say there was a vehicle that was in the accident that had a little yellow placard on it that said hazard. You're going to want to send different types of emergency services out there. So you can iterate on what it actually does and that's a fantastic world to be in and that's where I see AI really playing. >> That's a great example of what it's all about in terms of making things smart, connected, and autonomous. So Michelle, as somebody who has founded a company in this space with technology that's trying to bring some of these experiences to market, there may be folks in the audience who have aspirations to do the same. So what have you learned over the course of starting your company and developing the technology that you're now deploying to market? >> Yeah, I think because AI is such a buzzword... You can get a dot AI domain now, but that doesn't mean you should use it for everything. Maybe 7, 10, 15 years ago... These trends have happened before. In the late 90s, it was technology, and there were technology companies and they sat over here and there was everybody else. Well that's not true anymore. Every company uses technology. Then fast forward a little bit, and social media was the thing. Social media was these companies over here and then there was everybody else, and now every company needs to use social media, or actually maybe not. Maybe it's a really bad idea for you to spend a ton of money on social media and you have to make that choice for yourself. So the same thing is true with artificial intelligence and what I tell... I did a panel on AI for venture capitalists last week, trying to help them figure out when to invest and how to evaluate and all that kind of stuff.
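Joe's light-pole camera scenario above, where a detected collision is classified and matched to a response, can be sketched as a small event handler. This is purely illustrative; the event fields, thresholds, and service names are invented for the example and are not from any Intel or GE system.

```python
# Illustrative sketch of a smart-city event handler: classify a detected collision
# and choose which services to dispatch. All fields and thresholds are invented.
from dataclasses import dataclass

@dataclass
class CollisionEvent:
    impact_speed_kph: float      # estimated from successive camera frames
    vehicles_involved: int
    hazard_placard_seen: bool    # e.g. a yellow hazmat placard on a truck
    airbags_detected: bool

def choose_response(event: CollisionEvent) -> list:
    """Return the services to notify for this event."""
    if event.hazard_placard_seen:
        return ["hazmat team", "fire department", "ambulance", "police"]
    if event.impact_speed_kph > 40 or event.airbags_detected:
        return ["ambulance", "fire department", "police"]
    # A low-speed fender bender: no ambulance, just traffic management.
    return ["police", "tow truck"]

print(choose_response(CollisionEvent(15.0, 2, False, False)))   # minor collision
print(choose_response(CollisionEvent(70.0, 3, True, True)))     # hazmat plus major crash
```

The "iterate and adapt" part Joe mentions would replace these hand-written thresholds with rules or models learned from the outcomes of past dispatch decisions.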
And what I would tell other aspiring entrepreneurs is "AI is a means to an end. It's not an end in itself." So unless you're a Ph.D. in machine learning and you want to start an AI-as-a-service business, you're probably not going to start an AI-only company. You're going to start a company for a specific purpose, to solve a problem, and you're going to use AI as a means to an end, maybe, if it makes sense to get there, to make it more efficient and all that stuff. But if you wouldn't get up every day for ten years to do this business that's going to solve whatever problem you're solving, or if you wouldn't invest in it if AI didn't exist, then adding dot AI at the end of a domain is not going to work. So don't think that that will help you make a better business. >> That's great advice. Thank you. Sarush, as you talked about the automation then of the supply chain, what about people? What about the workers whose jobs may be lost or displaced because of the introduction of this automation? What's your perspective on that? >> Well, that's a great question. It's one that I'm asked quite a bit. So if you think about the supply chain with a lot of the manufacturing plants, with a lot of the distribution centers, a lot of the transportation, not only are we talking about driverless cars as in cars that you and I own, but we're talking about driverless delivery vehicles. We're talking about drones, and all of these, on the surface, appear like they're going to displace human beings. What humans used to do, now machines will do and potentially do better. So what are the implications for human beings? So I'm asked that question quite a bit, especially from our customers, and my general perception on this is that I'm actually cautiously optimistic that human beings will continue to do things that are strategic. Human beings will continue to do things that are creative and human beings will probably continue to handle things that are truly catastrophic, that machines simply have not been able to learn because they don't happen very often. One thing that comes to mind is when ATM machines came about several years ago, before my time, that displaced a lot of teller jobs in the banking industry, but the banking industry did not go belly up. They found other things to do. If anything, they offered more services. There were more branches that were closed, and if I were to ask any of you now if you would go back and not have 24/7 access to cash, you would probably laugh at me. So the thing is, this is AI for good. I think these things might have temporary impact in terms of what it will do to labor and to human beings, but I think we as human beings will find bigger, better, different things to do and that's just in the nature of the human journey. >> Yeah, there's definitely a social acceptance angle to this technology, right? Many of us technologists in the room, it's easier for us to understand what the technology is, how it works, how it was created, but for many of our friends and family, they don't. So there's a social acceptance angle to this. So Michelle, as you see this technology deployed in retail environments, which is a space where almost every person in every country goes, how do you think about making it feel comfortable for people to interact with this kind of technology and not be afraid of the robots or the machines behind the curtain?
I think that user experience always has to come first, so if you're using AI for AI's sake or for the cool factor, the wow factor, you're already doing it wrong. Again, it needs to solve a problem, and what I tend to tell people who are like, "Oh my God. AI sounds so scary. We can't let this happen." I'm like, "It's already happening and you're already liking it. You just don't know because it's invisible in a lot of ways." So if you can point to those scenarios where AI has already benefited you and it wasn't scary because it was a friendly kind of interaction, you might not even have realized it was there, versus something that looks so different and... like, panic-inducing. I think that's why the driverless car thing is a big deal, because you're so used to seeing, in America at least, someone on the left side of the car in the front seat. And not seeing that is like, woah, crazy. So I think that it starts with the experience and making it an acceptable kind of interface or format that doesn't give you that, "Oh my God. Something is wrong here," kind of feeling. >> Yeah, that's a great answer. In fact, it reminds me there was this really amazing study by a Professor Nicholas Epley that was published in the journal of social psychology and the name of this study was called A Mind In A Machine. And what he did was he took subjects and had a fully functional automated vehicle and then a second identical fully functional automated vehicle, but this one had a name and it had a voice and it had sort of a personality. So it had human anthropomorphic characteristics. And he took people through these two different scenarios, and in both scenarios he, being evil, introduced a crash into the scenario, where it was unavoidable. There was nothing you could do. You were going to get into an accident in these cars. And then afterwards, he polled the subjects and said, "Well, what did you feel about that accident? First, what did you feel about the car?" They were more comfortable in the one that had anthropomorphic features. They felt it was safer and they'd be more willing to get into it, which is not terribly surprising, but the kicker was the accident. In the vehicle that had a voice and a name, they actually didn't blame the self-driving car they were in. They blamed the other car. But in the car that didn't have anthropomorphic features, they blamed the machine. They said there's something wrong with that car. So it's one of my favorite studies because I think it does illustrate that we have to remember the human element to these experiences, and as artificial intelligence begins to replace humans, or some of us even, we need to remember that we are still social beings and how we interact with other things, whether they be human or non-human, is important. So, Joe, you talk about evaluating companies. Michelle started a company. She's gotten funding. As you go out and look at new companies that are starting up, there's just so much activity, companies that just add dot AI to the name as Michelle said, how do you cut through the noise and try to get to the heart of whether there's any value in the technology that a company's bringing or not? >> Definitely. Well, each company has its unique special sauce, right? And so, just to reiterate what Michelle was talking about, we look for companies that are really good at doing what they do best, whatever that may be, whatever that problem is that they're solving that a customer's willing to pay for, we want to make sure that that company's doing that.
No one wants a company that just has AI in the name. So we look for that, number one, and the other thing we do is once we establish that we have a need or we're looking at a company based on either talent or intellectual property, we'll go in and we'll have to do a vetting process and it takes a while. It's a very long process and there's legal involved, but at the end of the day, the most important thing for the startup to remember is to continue doing what they do best and continue to build upon their special sauce and make sure that it's very valuable to their customer. And if someone else wants to look at them for acquisition, so be it, but you need to be maniacally focused on your own customer. That's my two cents. >> I'm thinking again about this concept of embedding human intelligence, but humans have biases, right? And sometimes those biases aren't always good. So how do we as technologists in this industry try to create AI for good and not unintentionally put some of our own human biases into models that we train about what's socially acceptable or not? Anyone have any thoughts on that? >> I actually think that the hype about AI taking over and destroying humanity, it's possible, and I don't want to disagree with Stephen Hawking as he's way smarter than I am. But he kind of recognizes it could go both ways, and so right now, we're in a world where we're still feeding the machine. And so, there's a bunch of different issues that came up with humans feeding the machine with their foibles of racism and hatred and bias, and humans experience shame which causes them to lash out and want to put somebody else down. And so we saw that with Tay, the Microsoft chatbot. We saw that even with Google's fake news. They're like picking sources now to answer the question in the top box that might be the wrong source. Ads that Google serves often show men high-paying jobs, $200,000 a year jobs, and women don't get those same ones. So if you trace that back, it's always coming back to the inputs and the lens that humans are coming at it from. So I actually think that we could be in a way better place after this singularity happens and the machines are smarter than us and they take over and they become our overlords. Because when we think about the future, it's a very common tendency for humans to fill in the blanks of what you don't know in the future with what's true today. And I was talking to you guys at lunch. We were talking about this Harvard psychology professor who wrote a book, and in the book he was talking about how in the 1950s, they were imagining the future and all these sci-fi stories and they have flying cars and hovercrafts and they're living in space, but the woman still stays at home and everyone's white. So they forgot to extrapolate the social things to paint the picture in, but I think when we're extrapolating into the future where the computers are our overlords, we're painting them with our current reality, which is where humans are kind of terrible (laughs). And maybe computers won't be and they'll actually create this utopia for us. So it could be positive. >> That's a very positive view. >> Thanks. >> That's great. So do we have this all figured out? Are there any big challenges that remain in our industries? >> I want to add a little bit more to the learning because I'm a data scientist by training and a lot of times, I run into folks who think that everything's been figured out. Everything is done. This is so cool.
We're good to go, and one of the things that I share with them is something that I'm sure everyone here can relate to. So if a kindergartner goes to school and starts to spew profanity, that's not because the kid knows anything good or bad. That is what the kid has learned at home. Likewise, if we don't train machines well, their training will in fact be biased, to your point. So one of the things that we have to keep in mind when we talk about this is we have to be careful as well, because we're the ones doing the training. It doesn't automatically know what is good or bad unless that set of data is also fed to it. So I just wanted to kind of add to your... >> Good. Thank you. So why don't we open it up a little bit for questions. Any questions in the audience for our panelists? There's one there, it looks like (laughs). Emily, we'll get to you soon. >> I had a question for Sarush based on what you just said about us training, or you all training, these models and teaching them things. So when you deploy these models to the public, with them being machine learning and AI based, is it possible for us to retrain them, and how do you build in redundancies against the public, like, throwing off your model and things like that? What are some of the considerations that go into that? >> Well, one thing for sure is training is continuous. So no system should be trained once, deployed, and then forgotten. So that is something that we as AI professionals absolutely need to keep in mind, because... Trends change as well. What was optimal two years ago is no longer optimal. So that part needs to continue to happen, and that's where the whole IoT space is so important: it will continue to generate relevant, consumable data that these machines can continuously learn from. >> So how do you decide what data, though, is good or bad, as you retrain and evolve that data over time? As a data scientist, how do you do selection on data? >> So, and I want to piggyback on what Michelle said because she's spot on. What is the problem that you're trying to solve? It always starts from there, because we have folks who come to us, CIOs, saying, "Oh look. When big data was hot, we started to collect a lot of the data, but nothing has happened." But data by itself doesn't automatically do magic for you, so we ask, "What kind of problem are you trying to solve? Are you trying to figure out what kinds of products to sell? Are you trying to figure out the optimal assortment mix for you? Are you trying to find the shortest path in order to get to your stores?" And then the question is, "Do you now have the right data to solve that problem?" A lot of times we put the science first, and I'm a data scientist by training, I would love to talk about the science, but really, it's the problem first. The data and the science, they come after. >> Thanks, good advice. Any other questions in the audience? Yes, one right up here. (laughing)
You talk about data determining if AI will become good or bad, but humans being the ones responsible for the training in the first place, obviously, they can use that data to influence as they, just the governance and the influence. >> Jack: Who wants to take that one? >> I'll take a quick stab at it. So, yes, it's going to be an open discussion. It's going to have to take place, because really, they're just machines. It's machine learning. We teach it. We teach it what to do, how to act. It's just an extension of us and in fact, I think you had a really great conversation or a statement at lunch where you talked about your product being an extension of a designer because, and we can get into that a little bit, but really, it's just going to do what we tell it to do. So there's definitely going to have to be discussions about what type of data we feed. It's all going to be centered around the use case and what that solves the use case. But I imagine that that will be a topic of discussion for a long time about what we're going to decide to do. >> Jack: Michelle do you want to comment on this thought of taking a designer's brain and putting it into a model somehow? >> Well, actually, what I wanted to say was that I think that the regulation and the governance around it is going to be self imposed by the the developer and data science community first, because I feel like even experts who have been doing this for a long time don't rally have their arms fully around what we're dealing with here. And so to expect our senators, our congressmen, women, to actually make regulation around it is a lot, because they're not technologists by training. They have a lot of other stuff going on. If the community that's already doing the work doesn't quite know what we're dealing with, then how can we expect them to get there? So I feel like that's going to be a long way off, but I think that the people who touch and feel and deal with models and with data sets and stuff everyday are the kind of people who are going to get together and self-regulate for a while, if they're good hearted people. And we talk about AI for good. Some people are bad. Those people won't respect those convenance that we come up with, but I think that's the place we have to start. >> So really you're saying, I think, for data scientists and those of us working in this space, we have a social, ethical, or moral obligation to humanity to ensure that our work is used for good. >> Michelle: No pressure. (laughing) >> None taken. Any other questions? Anything else? >> I just wanted to talk about the second part of what she said. We've been working with a company that builds robots for the store, a store associate if you will. And one of their very interesting findings was that the greatest acceptance of it right now has been at car dealerships because when someone goes to the car dealer and we all have had terrible experiences doing that. That's why we try to buy it online, but just this perception that a robot would be unbiased, that it will give you the information without trying to push me one way or the other. >> The hard sell. >> So there's that perception side of it too that, it isn't that the governance part of your question, but more the biased perception side of what you said. I think it's fascinating how we're already trained to think that this is going to have an unbiased opinion, whether or not that true. >> That's fascinating. Very cool. Thank you Sarush. Any other questions in the audience? No, okay. 
Michelle, could I ask, you've got a station over there that talks a little bit more about your company, but for those that haven't seen it yet, could you tell us a little bit about what the experience is like, or how the shopping experience is different for someone that's using your company's technology than what it was before? >> Oh, free advertising. I would love to. No, but actually, I started this company because as a consumer I found myself, going back to the user experience piece, just constantly frustrated with the user experience of buying products one at a time and then getting zero help. And then here I am having to google how to wear a white blazer to not look like an idiot in the morning when I get dressed with my white blazer that I just bought and I was excited about. And it's a really simple thing, which is how do I use the product that I'm buying, and that really simple thing has been just abysmally handled in the retail industry, because the only tools that retailers have right now are manual. So in fashion, one of our fashion customers, John Varvatos, is an example we have over there. It's a designer brand for high-end men's clothing, and John Varvatos is a person, it's not just the name of the company. He's an actual person and he has a vision for what he wants his products to look like, and the aesthetic and the style, and there's a rockstar vibe, and to get that information into the organization, he would share it verbally, with PDFs, things like that. And then his team of merchandisers would literally go manually and make outfits on one page and then go make an outfit on another page with the same exact items, and then products would go out of stock and they'd go around in circles, and that's a terrible, terrible job. So to the conversation earlier about people losing jobs because of artificial intelligence: I hope people do lose jobs and I hope they're the terrible jobs that no one wanted to do in the first place, because the merchandisers that we help, like the one from John Varvatos, literally said she was weeks away from quitting, and she got a new boss and said, "If you don't fix this part of my job, I'm out of here." And he had heard about us. He knew about us and so he brought us in to solve that problem. So I don't think it's always a bad thing, because if we can take that rote, boring, repetitive task off of humans' plates, what more amazing things can we do with our brain that is only human and very unique to us, and how much more can we advance ourselves and our society by giving the boring work to a robot or a machine? >> Well, that's fantastic. So Joe, when you talk about smart cities, it seems like people have been talking about smart cities for decades, and often people cite funding issues, the regulatory environment, or a host of other reasons why these things haven't happened. Do you think we're on the cusp of breaking through there, or what challenges still remain for fulfilling that vision of a smart city? >> I do, I do think we're on the cusp. I think a lot of it has to do, largely actually, with 5G and connectivity, the ability to process and send all this data that needs to be shared across the system. I also think that we're getting closer and more conscientious about security, which is a major issue with IoT, making sure that our end devices or our edge devices, those things out there sensing, are secure. And I think interoperability is something that we need to champion as well and make sure that we basically work together to enable these systems.
So it's very, very difficult to create little, tiny walled gardens of solutions in a smart city. You may corner a certain part of the market, but you're definitely not going to have that ubiquitous benefit to society if you establish those little walled gardens, so those are the areas I think we need to focus on, and I think we are making serious progress in all of them. >> Very good. Michelle, you mentioned earlier that artificial intelligence was all around us in lots of places and things that we do on a daily basis, but we probably don't realize it. Could you share a couple examples? >> Yeah, so I think everything you do online for the most part, literally anything you might do, whether that's googling something or you go to some article, the ads might be dynamically picked for you using machine learning models that have decided what is appropriate based on you and your treasure trove of data that you have out there that you're giving up all the time and not really understanding that you're giving it up. >> The shoes that follow you around the internet, right? >> Yeah, exactly. So that's basically anything online. I'm trying to give an example in the real world. I think that, to your point earlier about the supply chain, just picking a box of cereal off the shelf and taking it home, there's no artificial intelligence in that act at all, but there is in the supply chain behind it. So the supply chain behind pretty much everything we do, even in television, like how media gets to us and gets consumed. At some point in the supply chain, there's artificial intelligence playing in there as well. >> So staying with the supply chain, where we can get same-day, even within-the-hour delivery, how do you get better than that? What's coming that's innovative in the supply chain that will be new in the future? >> Well, so that is one example of it, but you'd be surprised at how inefficient the supply chain is, even with all the advances that have already gone in, whether it's physical advances around building modern warehouses and modern manufacturing plants, whether it's through software and others that really help schedule things and optimize things. What has happened in the supply chain, just given how they've evolved, is they're very siloed, so a lot of times the manufacturing plant does things that the distribution folks do not know. The distribution folks do things that the transportation folks don't know, and then the store folks know nothing other than when the truck pulls up, that's the first time they find out about things. So the great opportunity in my mind, in the space that I'm in, is really the generation of data, the connection of data, and finally, deriving the smarts that really help us improve efficiency. There's huge opportunity there. And again, we don't know it because it's all invisible to us. >> Good. Let me pause and see if there are any questions in the audience. There, we got one there. >> Thank you. Hi guys, you alright? I just had a question about ethics and the teaching of ethics. As you were saying, we feed the artificial intelligence, whereas in a scenario which is probably a little bit more attuned to automated driving, in a car crash scenario where the choice is between crashing into these two people or those three people, I would be choosing two, whereas the scenario may be that it's actually better to just crash the car and kill myself. That thought would never go through my mind, because I'm human. My rule number one is self-preservation. So how do we teach the computer this sort of side of it?
Is the AI ethic actually going to be better than our own ethics? How do we start? >> Yeah, that's a great question. I think the opportunity is there, as Michelle was saying earlier: maybe when you cross that chasm and you get this new singularity, maybe the AI ethics will be better than human ethics, because the machine will be able to think about greater concerns, perhaps, other than ourselves. But I think just from my point of view, working in the space of automated vehicles, I think it is going to have to be something that the industry works out, and societies are different, different geographies, and different countries. We have different ways of looking at the world. Cultures value different things, and so I think technologists in those spaces are going to have to get together and agree amongst the community, from a social contract theory standpoint perhaps, in a way that's going to be acceptable to everyone who lives in that environment. I don't think we can come up with a uniform model that would apply to all spaces, but it's got to be something though that we all, as members of a community, can accept. And so yeah, that would be the right thing to do in that situation, and that's not going to be an easy task by any means, which is, I think, one of the reasons why you'll continue to see humans have an important role to play in automated vehicles, so that the human could take over in exactly that kind of scenario, because the machines perhaps aren't quite smart enough to do it, or maybe it's not the smarts or the processing capability. It's maybe that we haven't as technologists and ethicists gotten together long enough to figure out what are those moral and ethical frameworks that we could use to apply to those situations. Any other thoughts? >> Yeah, I wanted to jump in there real quick. Those are absolutely questions that need to be answered, but let's come together and build the solution that needs to have those questions answered. So let's come together first and fix the problems that need to be fixed now so that we can build out those types of scenarios. We can then put our brainpower to work to decide what to do next. There was a quote, I believe by Andrew Ng, and he was addressing deep questions about what's going to happen in the future with AI. Are we going to have AI overlords or anything like that? And it's kind of like worrying about overpopulation on Mars. Because maybe we're going to get there someday and maybe we're going to send people there and maybe we're going to establish a human population on Mars, and then maybe it will get too big and then maybe we'll have problems on Mars, but right now we haven't landed on the planet, and I thought that really does a good job of putting in perspective that overall concern about AI taking over. >> So when you think about AI being applied for good, and Michelle, you talked about don't do AI just for AI's sake, have a problem to solve, I'll open it up to any of the three of you, what's a problem in your life or in your work experience that you'd love somebody out here to go solve with AI? >> I have one. Sorry, I wanted to do this real quick. There are roads blocked off and it's raining and I have to walk a mile to find a taxi in the rain right now after this to go home.
I would love for us to have some sort of ability to manage parking spaces and determine when and who can come into which parts of the city, and when there's a spot downtown, I want my autonomous vehicle to know which one's available and go directly to that spot, and I want it to be queued in a certain manner so that I'm next in line and I know it. And so I would love for someone to go solve that problem. There's been some development on the infrastructure side for that kind of solution. We have a partnership that Intel does with GE where we're putting up sensors; it's an IoT sensor basically, called City IQ. It has environmental monitoring, audio, visual sensors, and it allows this type of use case to take place. So I would love to see iterations on that. I would love to see, sorry, there's another one that I care about in particular. Growing up, I lived in Southern California right up against the hills, in a housing development, and behind the hills there was not a factory, but a bunch of oil derricks back there. I would love to have had a sensor that senses the particulates in the air to see if there were too many fumes coming from that oil field into my yard when I was growing up as a little kid. I would love for us to solve problems like that, so that's the type of thing that we'll be able to solve. Those are the types of innovations that will be able to take place once we have these sensors in place, so I'm going to sit down on that one and let someone else take over. >> I'm really glad you said the second one because I was thinking, "What I'm about to say is totally going to trivialize Joe's pain and I don't want to do that." But cancer is my answer, because there's so much data in health and all these patterns are there waiting to be recognized. There are so many things we don't know about cancer and so many indicators that we could capture if we just were able to unmask the data and take a look, but I knew a brilliant company that was using artificial intelligence, specifically around image processing, to look at CAT scans and figure out what the leading indicators might be in a cancerous scenario. And they pivoted to some way more trivial problem, which is still a problem, and not to trivialize parking and whatnot, but it's not cancer. And they pivoted away from this amazing opportunity because of the privacy and the issues with HIPAA around health data. And I understand there's a ton of concern with it getting into the wrong hands and hacking and all of this stuff. I get that, but the opportunity in my mind far outweighs the risk, and the fact that they had to change their business model and change their company essentially broke my heart because they were really onto something. >> Yeah, that's a shame and it's funny you mention that. Intel has an effort that we're calling the Cancer Cloud, and what we're trying to do is provide some infrastructure to help with that problem, and the way cancer treatments work today is if you go to a university hospital, let's say here in Texas, how you interpret that scan and how you respond and apply treatment, that knowledge is basically just kept within that hospital and within that staff.
And so on the other side of the country, somebody could go in and get a scan, and maybe that scan is brand new to that facility and so they don't know how to treat it, but if you had an opportunity with machine learning to be able to compare scans from people, not only just in this country, but around the world, and understand globally all of the hundreds of different treatment paths that were applied to that particular kind of cancer, think how many lives could be saved, because then you're sharing knowledge about what courses of treatment worked. But it's one of those things, like you say, sometimes it's the regulatory environment or it's other factors that hold us back from applying this technology to do some really good things, so it's a great example. Okay, any other questions in the audience? >> I have one. >> Good, Emily. >> So this goes off of the HIPAA question, which is, and you were talking about just dynamically displaying ads earlier. What does privacy look like in a fully autonomous world? Anybody can answer that one. Are we still private citizens? What does it look like? >> How about from a supply chain standpoint? You can learn a lot about somebody in terms of the products that they buy, and I think to all of us, we sort of know maybe somebody's tracking what we're buying, but it's still creepy when we think about how people could potentially use that against us. So, how do you, from a supply chain standpoint, approach that problem? >> Yeah, and it's something that comes up in my life almost every day, because one of the things we'd like to do is to understand consumer behavior. How often am I buying? What kinds of products am I buying? What am I returning? And so for that you need transactional data. You really get to understand the individual. That then starts to get into this area of privacy. Do you know too much about me? And so a lot of times what we do is the data is clearly anonymized, so all we know is that customer A has this tendency, customer B has this tendency. And that then helps the retailers offer the right products to these customers, but to your point, there are those privacy concerns, and I think issues around governance, issues around ethics, issues around privacy, these will continue to be ironed out. I don't think there's a solid answer for any of these just yet. >> And it's largely a reflection of society. How comfortable are we with how much privacy? Right now I believe we put the individual in control of as much information as possible that they are able to release or not. And so a lot of what you said, everyone's anonymizing everything at the moment, but that may change as society's values change slightly, and we'll be able to adapt to what's necessary. >> Why don't we try to stump the panel? Anyone have any ideas on things in your life you'd like to be solved with AI for good? Any suggestions out there that we could then hear about from our data scientists and technologists here? Any ideas? No? Alright, good. Alright, well, thank you everyone. Really appreciate your time. Thank you for joining Intel here at the AI lounge at Autonomous World. We hope you've enjoyed the panel and we wish you a great rest of your event here at South by Southwest. (audience clapping) (bright music)