AI for Good Panel - Precision Medicine - SXSW 2017 - #IntelAI - #theCUBE


 

>> Welcome to the Intel AI Lounge. Today, we're very excited to share with you the Precision Medicine panel discussion. I'll be moderating the session. My name is Kay Erin. I'm the general manager of Health and Life Sciences at Intel. And I'm excited to share with you these three panelists that we have here. First is John Mattison. He is the chief medical information officer and he is part of Kaiser Permanente. We're very excited to have you here. Thank you, John. >> Thank you. >> We also have Naveen Rao. He is the VP and general manager for Artificial Intelligence Solutions at Intel. He's also the former CEO of Nervana, which was acquired by Intel. And we also have Bob Rogers, who's the chief data scientist at our AI solutions group. So, why don't we get started with our questions. I'm going to ask each of the panelists to talk, introduce themselves, as well as talk about how they got started with AI. So why don't we start with John? >> Sure, so can you hear me okay in the back? Can you hear? Okay, cool. So, I am a recovering evolutionary biologist and a recovering physician and a recovering geek. And I implemented the health record system for the first and largest region of Kaiser Permanente. And it's pretty obvious that most of the useful data in a health record lies in free text. So I started up a natural language processing team to be able to mine free text about a dozen years ago. So we can do things with that that you can't otherwise get out of health information. I'll give you an example. I read an article online from the New England Journal of Medicine about four years ago that said over half of all people who have had their spleen taken out were not properly vaccinated for a common form of pneumonia, and when your spleen's missing, you must have that vaccine or you die a very sudden death from sepsis. In fact, our medical director in Northern California's father died of that exact same scenario. So, when I read the article, I went to my structured data analytics team and to my natural language processing team and said please show me everybody who has had their spleen taken out and hasn't been appropriately vaccinated, and we ran through about 20 million records in about three hours with the NLP team, and it took about three weeks with the structured data analytics team. That sounds counterintuitive but it actually happened that way. And it's not a competition for time only. It's a competition for quality and sensitivity and specificity. So we were able to identify all of our members who had their spleen taken out, who should've had a pneumococcal vaccine. We vaccinated them and there are a number of people alive today who otherwise would've died absent that capability. So people don't really commonly associate natural language processing with machine learning, but in fact, natural language processing relies heavily on machine learning and is really the first highly successful example of it. So we've done dozens of similar projects, mining free text data in millions of records very efficiently, very effectively. And it really helped advance the quality of care and reduce the cost of care. It's a natural step forward to go into the world of personalized medicine with the arrival of a 100-dollar genome, which is actually what it costs today to do a full genome sequence. Then there's microbiomics, that is, the ecosystem of bacteria that live in every organ of the body. And we know now that what's in our gut has a profound influence on how we metabolize drugs and what diseases we get.
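
As a rough illustration of the splenectomy query John describes, here is a minimal sketch, assuming toy records and simple keyword matching rather than Kaiser's actual NLP pipeline; the field names and terms are hypothetical.

```python
# Toy sketch: flag members whose notes mention a splenectomy but whose
# structured history shows no pneumococcal vaccination. Real clinical NLP
# also has to handle negation, synonyms, and misspellings; this only
# keyword-matches over made-up records.
records = [
    {"id": 1, "note": "s/p splenectomy 2009 after trauma", "vaccines": ["influenza"]},
    {"id": 2, "note": "spleen removed, pneumococcal vaccine given", "vaccines": ["pneumococcal"]},
    {"id": 3, "note": "routine visit, no surgical history", "vaccines": []},
]

SPLEEN_TERMS = ("splenectomy", "spleen removed", "asplenic")

def needs_pneumococcal_vaccine(record):
    note = record["note"].lower()
    had_splenectomy = any(term in note for term in SPLEEN_TERMS)
    vaccinated = "pneumococcal" in record["vaccines"]
    return had_splenectomy and not vaccinated

print([r["id"] for r in records if needs_pneumococcal_vaccine(r)])  # -> [1]
```
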
You can tell in a five year old, whether or not they were born by a vaginal delivery or a C-section delivery by virtue of the bacteria in the gut five years later. So if you look at the complexity of the data that exists in the genome, in the microbiome, in the health record with free text and you look at all the other sources of data like this streaming data from my wearable monitor that I'm part of a research study on Precision Medicine out of Stanford, there is a vast amount of disparate data, not to mention all the imaging, that really can collectively produce much more useful information to advance our understanding of science, and to advance our understanding of every individual. And then we can do the mash up of a much broader range of science in health care with a much deeper sense of data from an individual and to do that with structured questions and structured data is very yesterday. The only way we're going to be able to disambiguate those data and be able to operate on those data in concert and generate real useful answers from the broad array of data types and the massive quantity of data, is to let loose machine learning on all of those data substrates. So my team is moving down that pathway and we're very excited about the future prospects for doing that. >> Yeah, great. I think that's actually some of the things I'm very excited about in the future with some of the technologies we're developing. My background, I started actually being fascinated with computation in biological forms when I was nine. Reading and watching sci-fi, I was kind of a big dork which I pretty much still am. I haven't really changed a whole lot. Just basically seeing that machines really aren't all that different from biological entities, right? We are biological machines and kind of understanding how a computer works and how we engineer those things and trying to pull together concepts that learn from biology into that has always been a fascination of mine. As an undergrad, I was in the EE, CS world. Even then, I did some research projects around that. I worked in the industry for about 10 years designing chips, microprocessors, various kinds of ASICs, and then actually went back to school, quit my job, got a Ph.D. in neuroscience, computational neuroscience, to specifically understand what's the state of the art. What do we really understand about the brain? And are there concepts that we can take and bring back? Inspiration's always been we want to... We watch birds fly around. We want to figure out how to make something that flies. We extract those principles, and then build a plane. Don't necessarily want to build a bird. And so Nervana's really was the combination of all those experiences, bringing it together. Trying to push computation in a new a direction. Now, as part of Intel, we can really add a lot of fuel to that fire. I'm super excited to be part of Intel in that the technologies that we were developing can really proliferate and be applied to health care, can be applied to Internet, can be applied to every facet of our lives. And some of the examples that John mentioned are extremely exciting right now and these are things we can do today. And the generality of these solutions are just really going to hit every part of health care. I mean from a personal viewpoint, my whole family are MDs. I'm sort of the black sheep of the family. I don't have an MD. And it's always been kind of funny to me that knowledge is concentrated in a few individuals. 
Like you have a rare tumor or something like that, you need the guy who knows how to read this MRI. Why? Why is it like that? Can't we encapsulate that knowledge into a computer or into an algorithm, and democratize it? And the reason we couldn't do it is we just didn't know how. And now we're really getting to a point where we know how to do that. And so I want that capability to go to everybody. It'll bring the cost of healthcare down. It'll make all of us healthier. That affects everything about our society. So that's really what's exciting about it to me. >> That's great. So, as you heard, I'm Bob Rogers. I'm chief data scientist for analytics and artificial intelligence solutions at Intel. My mission is to put powerful analytics in the hands of every decision maker and when I think about Precision Medicine, decision makers are not just doctors and surgeons and nurses, but they're also case managers and care coordinators and probably most of all, patients. So the mission is really to put powerful analytics and AI capabilities in the hands of everyone in health care. It's a very complex world and we need tools to help us navigate it. So my background, I started with a Ph.D. in physics and I was computer modeling stuff falling into supermassive black holes. And there's a lot of applications for that in the real world. No, I'm kidding. (laughter) >> John: There will be, I'm sure. Yeah, one of these days. Soon as we have time travel. Okay so, actually, about 1991, I was working on my postdoctoral research, and I heard about neural networks, these things that could compute the way the brain computes. And so, I started doing some research on that. I wrote some papers and actually, it was an interesting story. The problem that we solved that got me really excited about neural networks, which have become deep learning, my office mate would come in. He was this young guy who was about to go off to grad school. He'd come in every morning. "I hate my project." Finally, after two weeks, I asked, what's your project? What's the problem? It turns out he had to circle these little fuzzy spots on these images from a telescope. So they were looking for the interesting things in a sky survey, and he had to circle them and write down their coordinates all summer. Anyone want to volunteer to do that? No? Yeah, he was very unhappy. So we took the first two weeks of data that he created doing his work by hand, and we trained an artificial neural network to do his summer project and finished it in about eight hours of computing. (crowd laughs) And so he was like yeah, this is amazing. I'm so happy. And we wrote a paper. I was the first author of course, because I was the senior guy at age 24. And he was second author. His first paper ever. He was very, very excited. So we have to fast forward about 20 years. His name popped up on the Internet. And so it caught my attention. He had just won the Nobel Prize in physics. (laughter) So that's where artificial intelligence will get you. (laughter) So thanks Naveen.
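
A minimal sketch of the workflow Bob describes: fit a small neural network to the examples a person labeled by hand, then let it label the rest. It uses scikit-learn on synthetic features, not the original sky-survey images.

```python
# Sketch: learn the hand labels ("interesting spot" vs. background) from two
# weeks of labeled examples, then label the remaining, unlabeled data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Pretend each image patch is summarized by two features, e.g. brightness and size.
labeled = rng.normal(size=(200, 2))
labels = (labeled[:, 0] + labeled[:, 1] > 0).astype(int)  # the hand-drawn circles

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(labeled, labels)

unlabeled = rng.normal(size=(5, 2))
print(model.predict(unlabeled))  # the network finishes the "summer project"
```
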
Fast forwarding, I also developed some time series forecasting capabilities that allowed me to create a hedge fund that I ran for 12 years. After that, I got into health care, which really is the center of my passion. Applying AI to health care means figuring out how to get all the data from all those siloed sources, put it into the cloud in a secure way, and analyze it so you can actually understand those cases that John was just talking about. How do you know that that person had had a splenectomy and that they needed to get that Pneumovax? You need to be able to search all the data, so we used AI, natural language processing, machine learning, to do that and then two years ago, I was lucky enough to join Intel and, in the intervening time, people like Naveen actually thawed the AI winter and we're really in a spring of amazing opportunities with AI, not just in health care but everywhere, but of course, the health care applications are incredibly life saving and empowering so, excited to be here on this stage with you guys. >> I just want to cue off of your comment about the role of physics in AI and health care. So the field of microbiomics that I referred to earlier, bacteria in our gut. There's more bacteria in our gut than there are cells in our body. There's 100 times more DNA in that bacteria than there is in the human genome. And we're now discovering a couple hundred species of bacteria a year that have never been identified under a microscope just by their DNA. So it turns out the person who really catapulted the study and the science of microbiomics forward was an astrophysicist who did his Ph.D. in Stephen Hawking's lab on the collision of black holes and then subsequently put together a team in virtual reality, and he developed the first supercomputing center, and so how did he get an interest in microbiomics? He has the capacity to do high performance computing and the kind of advanced analytics that are required to look at 100 times the volume of the 3.2 billion base pairs of the human genome that are represented in the bacteria in our gut, and that has unleashed the whole science of microbiomics, which is going to really turn a lot of our assumptions of health and health care upside down. >> That's great, I mean, that's really transformational. So a lot of data. So I just wanted to let the audience know that we want to make this an interactive session, so I'll be asking for questions in a little bit, but I will start off with one question so that you can think about it. So I wanted to ask you, it looks like you've been thinking a lot about AI over the years. And I wanted to understand, even though AI's just really starting in health care, what are some of the new trends or the changes that you've seen in the last few years that'll impact how AI's being used going forward? >> So I'll start off. There was a paper published by a guy by the name of Tegmark at Harvard last summer that, for the first time, explained why neural networks are efficient beyond what any mathematical model would predict. And the title of the paper's fun. It's called Deep Learning Versus Cheap Learning. So there were two sort of punchlines of the paper. One is that the reason that mathematics doesn't explain the efficiency of neural networks is because there's a higher order of mathematics called physics. And the physics of the underlying data structures determines how efficiently you can mine those data using machine learning tools. Much more so than any mathematical modeling. And so the second thing that was a revelation from that paper is that the substrate of the data that you're operating on and the natural physics of those data have inherent levels of complexity that determine whether or not a 12-layer neural net will get you where you want to go really fast, because when you do the modeling, for those math geeks in the audience, it's a factorial.
So if there's 12 layers, there's 12 factorial permutations of different ways you could sequence the learning through those data. When you have 140 layers of a neural net, it's a much, much, much bigger number of permutations and so you end up being hardware-bound. And so, what Max Tegmark basically said is you can determine whether to do deep learning or cheap learning based upon the underlying physics of the data substrates you're operating on and have a good insight into how to optimize your hardware and software approach to that problem. >> So another way to put that is that neural networks represent the world in the way the world is sort of built. >> Exactly. >> It's kind of hierarchical. It's funny because, sort of in retrospect, like oh yeah, that kind of makes sense. But when you're thinking about it mathematically, we're like well, anything... A neural net can represent any mathematical function, therefore it's fully general. And that's the way we used to look at it, right? So now we're saying, well actually decomposing the world into different types of features that are layered upon each other is actually a much more efficient, compact representation of the world, right? I think this is actually precisely the point of kind of what you're getting at. What's really exciting now is that what we were doing before was sort of building these bespoke solutions for different kinds of data. NLP, natural language processing. There's a whole field, 25 plus years of people devoted to figuring out features, figuring out what structures make sense in this particular context. Those didn't carry over at all to computer vision. Didn't carry over at all to time series analysis. Now, with neural networks, we've seen it at Nervana, and now part of Intel, solving customers' problems. We apply a very similar set of techniques across all these different types of data domains and solve them. All data in the real world seems to be hierarchical. You can decompose it into this hierarchy. And it works really well. Our brains are actually general structures. As a neuroscientist, you can look at different parts of your brain and there are differences. Something that takes in visual information, versus auditory information is slightly different but they're much more similar than they are different. So there is something invariant, something very common between all of these different modalities and we're starting to learn that. And this is extremely exciting to me trying to understand the biological machine that is a computer, right? We're figuring it out, right?
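
To make John's 12-versus-140-layer comparison concrete, a quick arithmetic check; the factorial framing is his, the numbers below just show the scale of the gap.

```python
import math

print(math.factorial(12))   # 479001600, roughly 4.8e8 orderings
print(math.factorial(140))  # roughly 1.3e241, astronomically larger
```
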
>> One of the really fun things that Ray Kurzweil likes to talk about, and it falls in the genre of biomimicry, is how we actually replicate biologic evolution in our technical solutions. So if you look at, and we're beginning to understand more and more, how real neural nets work in our cerebral cortex, it's sort of a pyramid structure so that the first pass of a broad base of analytics gets constrained to the next pass, gets constrained to the next pass, which is how information is processed in the brain. So we're discovering increasingly that what we've been evolving towards, in terms of architectures of neural nets, is approximating the architecture of the human cortex, and the more we understand the human cortex, the more insight we get into how to optimize neural nets, so when you think about it, with millions of years of evolution of how the cortex is structured, it shouldn't be a surprise that the optimization protocols, if you will, in our genetic code are profoundly efficient in how they operate. So there's a real role for looking at biologic evolutionary solutions, vis a vis technical solutions, and there's a friend of mine who worked with George Church at Harvard and actually published a book on biomimicry and they wrote the book completely in DNA so if all of you have your home DNA decoder, you can actually read the book on your DNA reader, just kidding. >> There's actually a start up I just saw in the-- >> Read-Write DNA, yeah. >> Actually it's a... He writes something. What was it? (response from crowd member) Yeah, they're basically encoding information in DNA as a storage medium. (laughter) The company, right? >> Yeah, that same friend of mine who coauthored that biomimicry book in DNA also did the estimate of the density of information storage. So a cubic centimeter of DNA can store an exabyte of data. I mean that's mind blowing. >> Naveen: Highly done soon. >> Yeah that's amazing. Also you hit upon a really important point there, that one of the things that's changed is... Well, there are two major things that have changed in my perception from let's say five to 10 years ago, when we were using machine learning. You could use data to train models and make predictions to understand complex phenomena. But they had limited utility and the challenge was that if I'm trying to build one of these things, I had to do a lot of work up front. It was called feature engineering. I had to do a lot of work to figure out what are the key attributes of that data? What are the 10 or 20 or 100 pieces of information that I should pull out of the data to feed to the model, and then the model can turn it into a predictive machine. And so, what's really exciting about the new generation of machine learning technology, and particularly deep learning, is that it can actually learn from example data those features without you having to do any preprogramming. That's why Naveen is saying you can take the same sort of overall approach and apply it to a bunch of different problems. Because you're not having to fine tune those features. So at the end of the day, the two things that have changed to really enable this evolution is access to more data, and I'd be curious to hear from you where you're seeing data come from, what are the strategies around that. So access to data, and I'm talking millions of examples. So 10,000 examples most times isn't going to cut it. But millions of examples will do it. And then, the other piece is the computing capability to actually take millions of examples and optimize this algorithm in a single lifetime. I mean, back in '91, when I started, we literally would have thousands of examples and it would take overnight to run the thing.
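
A compact sketch of the contrast Bob draws, on synthetic data: one model is fed a hand-engineered feature an expert chose, the other is a multi-layer network handed the raw inputs and left to find its own representation. Training-set accuracy only, purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
raw = rng.normal(size=(1000, 20))                    # "raw" measurements
y = (np.sin(raw[:, 0]) * raw[:, 1] > 0).astype(int)  # outcome to predict

# Old way: an expert hand-crafts the derived attribute that matters.
hand = (np.sin(raw[:, 0]) * raw[:, 1]).reshape(-1, 1)
print(LogisticRegression().fit(hand, y).score(hand, y))  # easy with the right feature

# Newer way: give a deeper model the raw columns and let it learn its own features.
deep = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=1)
print(deep.fit(raw, y).score(raw, y))
```
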
So now in the world of millions, and you're putting together all of these combinations, the computing has changed a lot. I know you've made some revolutionary advances in that. But I'm curious about the data. Where are you seeing interesting sources of data for analytics? >> So I do some work in the genomics space and there are more viable permutations of the human genome than there are people who have ever walked the face of the earth. And the polygenic determination of phenotypic expression, the translation of what our genome does to us in our physical experience of health and disease, is determined by many, many genes and the interaction of many, many genes and how they are up and down regulated. And the complexity of disambiguating which 27 genes are affecting your diabetes and how they are up and down regulated by different interventions is going to be different than his. It's going to be different than his. And we already know that there's four or five distinct genetic subtypes of type II diabetes. So physicians still think there's one disease called type II diabetes. There's actually at least four or five genetic variants that have been identified. And so, when you start thinking about disambiguating, particularly when we still don't know what 95 percent of DNA does, what actually is the underlying cause, it will require this massive capability of developing these feature vectors, sometimes intuiting it, if you will, from the data itself. And other times, taking what's known knowledge to develop some of those feature vectors, and be able to really understand the interaction of the genome and the microbiome and the phenotypic data. So the complexity is high and because the variation complexity is high, you do need these massive numbers. Now I'm going to make a very personal pitch here. So forgive me, but if any of you have any role in policy at all, let me tell you what's happening right now. The Genetic Information Nondiscrimination Act, so-called GINA, written by a friend of mine, passed a number of years ago, says that no one can be discriminated against for health insurance based upon their genomic information. That's cool. That should allow all of you to feel comfortable donating your DNA to science right? Wrong. You are 100% unprotected from discrimination for life insurance, long term care and disability. And it's being practiced legally today and there's legislation in the House, in markup right now, to completely undermine the existing GINA legislation and say that whenever there's another applicable statute like HIPAA, GINA is irrelevant, that none of the fines and penalties are applicable at all. So we need a ton of data to be able to operate on. We will not be getting a ton of data to operate on until we have the kind of protection we need to tell people, you can trust us. You can give us your data, you will not be subject to discrimination. And that is not the case today. And it's being further undermined. So I want to make a plea to any of you that have any policy influence to go after that because we need this data to help the understanding of human health and disease and we're not going to get it when people look behind the curtain and see that discrimination is occurring today based upon genetic information. >> Well, I don't like the idea of being discriminated against based on my DNA. Especially given how little we actually know. There's so much complexity in how these things unfold in our own bodies, that I think anything that's being done is probably childishly immature and oversimplifying. So it's pretty rough. >> I guess the translation here is that we're all unique. It's not just a Disney movie. (laughter) We really are.
And I think one of the strengths that I'm seeing, kind of going back to the original point, of these new techniques is it's going across different data types. It will actually allow us to learn more about the uniqueness of the individual. It's not going to be just from one data source. We're collecting data from many different modalities. We're collecting behavioral data from wearables. We're collecting things from scans, from blood tests, from genome, from many different sources. The ability to integrate those into a unified picture, that's the important thing that we're getting toward now. That's what I think is going to be super exciting here. Think about it, right. I can tell you to visualize a coin, right? You can visualize a coin. Not only do you visualize it. You also know what it feels like. You know how heavy it is. You have a mental model of that from many different perspectives. And if I take away one of those senses, you can still identify the coin, right? If I tell you to put your hand in your pocket, and pick out a coin, you probably can do that with 100% reliability. And that's because we have this generalized capability to build a model of something in the world. And that's what we need to do for individuals is actually take all these different data sources and come up with a model for an individual and you can actually then say what drug works best on this. What treatment works best on this? It's going to get better with time. It's not going to be perfect, because this is what a doctor does, right? A doctor who's very experienced, you're a practicing physician right? Back me up here. That's what you're doing. You basically have some categories. You're taking information from the patient when you talk with them, and you're building a mental model. And you apply what you know can work on that patient, right? >> I don't have clinic hours anymore, but I do take care of many friends and family. (laughter) >> You used to, you used to. >> I practiced for many years before I became a full-time geek. >> I thought you were a recovering geek. >> I am. (laughter) I do more policy now. >> He's off the wagon. >> I just want to take a moment and see if there's anyone from the audience who would like to ask, oh. Go ahead. >> We've got a mic here, hang on one second. >> I have tons and tons of questions. (crosstalk) Yes, so first of all, the microbiome and the genome are really complex. You already hit on that. Yet most of the studies we do are small scale and we have difficulty repeating them from study to study. How are we going to reconcile all that and what are some of the technical hurdles to get to the vision that you want? >> So primarily, it's been the cost of sequencing. Up until a year ago, it was $1000, true cost. Now it's $100, true cost. And so that barrier is going to enable fairly pervasive testing. It's not a real competitive market because there's one sequencer that is way ahead of everybody else. So the price is not $100 yet. The cost is below $100. So as soon as there's competition to drive the cost down, and hopefully, as soon as we all have the protection we need against discrimination, as I mentioned earlier, then we will have large enough sample sizes. And so, it is our expectation that we will be able to pool data from local sources. I chair the e-health work group at the Global Alliance for Genomics and Health which is working on this very issue.
And rather than pooling all the data into a single, common repository, the strategy, and we're developing our five-year plan in a month in London, is to have a federation of essentially credentialed data enclaves. That's a formal method. HHS already does that so you can get credentialed to search all the data that Medicare has on people that's been deidentified according to HIPAA. So we want to provide the same kind of service with appropriate consent, at an international scale. And there's a lot of nations that are talking very much about data nationality so that you can't export data. So this approach of a federated model to get at data from all the countries is important. The other thing is blockchain technology is going to be very profoundly useful in this context. So David Haussler of UC Santa Cruz is right now working on a protocol using an open blockchain, public ledger, where you can put these out. So for any typical cancer, you may have a half dozen of what are called somatic variants. Cancer is a genetic disease so what has mutated to cause it to behave like a cancer? And if we look at those biologically active somatic variants, publish them on a blockchain that's public, there's not enough data there to reidentify the patient. But if I'm a physician treating a woman with breast cancer, rather than say what's the protocol for treating a 50-year-old woman with this cell type of cancer, I can say show me all the people in the world who have had this cancer at the age of 50, with these exact six somatic variants. Find the 200 people worldwide with that. Ask them for consent through a secondary mechanism to donate everything about their medical record, pool that information from the cohort of 200 that exactly resembles the one sitting in front of me, and find out, of the 200 ways they were treated, what got the best results. And so, that's the kind of future where a distributed, federated architecture will allow us to query and obtain a very, very relevant cohort, so we can basically be treating patients like mine, sitting right in front of me. Same thing applies for establishing research cohorts. There's some very exciting stuff at the convergence of big data analytics, machine learning, and blockchain. >> And this is an area that I'm really excited about and I think we're excited about generally at Intel. We actually have something called the Collaborative Cancer Cloud, which is this kind of federated model. We have three different academic research centers. Each of them has a very sizable and valuable collection of genomic data with phenotypic annotations. So you know, pancreatic cancer, colon cancer, et cetera, and we've actually built a secure computing architecture that can allow a person who's given the right permissions by those organizations to ask a specific question of specific data without ever sharing the data. So the idea is my data's really important to me. It's valuable. I want us to be able to do a study that gets the number from the 20 pancreatic cancer patients in my cohort, up to the 80 that we have in the whole group. But I can't do that if I'm going to just spill my data all over the world. And there are HIPAA and compliance reasons for that. There are business reasons for that. So what we've built at Intel is this platform that allows you to do different kinds of queries on this genetic data. And reach out to these different sources without sharing it.
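
A toy sketch of the cohort lookup John outlines: each site publishes only a small somatic-variant fingerprint per case, and a treating physician searches for exact matches before requesting consent through a separate channel. The ledger here is just a Python list, and the case IDs and variant labels are illustrative, not from any real system.

```python
# Toy stand-in for a public ledger of somatic-variant fingerprints. A real
# system would publish these on a shared ledger and route any request for the
# underlying records through a consent mechanism.
ledger = [
    {"case": "site-A-017", "variants": frozenset({"TP53:R175H", "PIK3CA:H1047R"})},
    {"case": "site-B-442", "variants": frozenset({"TP53:R175H", "PIK3CA:H1047R"})},
    {"case": "site-C-903", "variants": frozenset({"KRAS:G12D"})},
]

def matching_cohort(query_variants):
    """Return case IDs whose published fingerprint exactly matches the query."""
    query = frozenset(query_variants)
    return [entry["case"] for entry in ledger if entry["variants"] == query]

print(matching_cohort({"TP53:R175H", "PIK3CA:H1047R"}))  # ['site-A-017', 'site-B-442']
```
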
And then, the work that I'm really involved in right now and that I'm extremely excited about also touches on something that both of you said: it's not sufficient to just get the genome sequences. You also have to have the phenotypic data. You have to know what cancer they've had. You have to know that they've been treated with this drug and they've survived for three months or that they had this side effect. That clinical data also needs to be put together. It's owned by other organizations, right? Other hospitals. So the broader generalization of the Collaborative Cancer Cloud is something we call the data exchange. And it's a misnomer in a sense that we're not actually exchanging data. We're doing analytics on aggregated data sets without sharing it. But it really opens up a world where we can have huge populations and big enough amounts of data to actually train these models and draw the thread in. Of course, that really then hits home for the techniques that Nervana is bringing to the table, and of course-- >> Stanford's one of your academic medical centers? >> Not for that Collaborative Cancer Cloud. >> The reason I mentioned Stanford is because the reason I'm wearing this Fitbit is because I'm a research subject in Mike Snyder's, the chair of genetics at Stanford, iPOP study, integrative personal omics profiling. So I was fully sequenced five years ago and I get four full microbiomes. My gut, my mouth, my nose, my ears. Every three months and I've done that for four years now. And about a pint of blood. And so, to your question of the density of data, so a lot of the problem with applying these techniques to health care data is that it's basically a sparse matrix and there's a lot of discontinuities in what you can find and operate on. So what Mike is doing with the iPOP study is much the same as you described. Creating a highly dense longitudinal set of data that will help us mitigate the sparse matrix problem. (low volume response from audience member) Pardon me. >> What's that? (low volume response) (laughter) >> Right, okay. >> John: Lost the stool sample. That's got to be a new one I've heard now. >> Okay, well, thank you so much. That was a great question. So I'm going to repeat this and ask if there's another question. You want to go ahead? >> Hi, thanks. So I'm a journalist and I report a lot on these neural networks, a system that's better at reading mammograms than your human radiologists. Or a system that's better at predicting which patients in the ICU will get sepsis. These sorts of fascinating academic studies that I don't really see being translated very quickly into actual hospitals or clinical practice. Seems like a lot of the problems are regulatory, or liability, or human factors, but how do you get past that and really make this stuff practical? >> I think there's a few things that we can do there and I think the proof points of the technology are really important to start with in this specific space. In other places, sometimes, you can start with other things. But here, there's a real confidence problem when it comes to health care, and for good reason. We have doctors trained for many, many years. School and then residencies and other kinds of training. Because we are really, really conservative with health care. So we need to make sure that technology's well beyond just the paper, right? These papers are proof points. They get people interested. They even fuel entire grant cycles sometimes. And that's what we need to happen. It's just an inherent problem, it's going to take a while.
To get those things to a point where it's like well, I really do trust what this is saying. And I really think it's okay to now start integrating that into our standard of care. I think that's where you're seeing it. It's frustrating for all of us, believe me. I mean, like I said, I think personally one of the biggest things, I want to have an impact. Like when I go to my grave, is that we used machine learning to improve health care. We really do feel that way. But it's just not something we can do very quickly and as a business person, I don't actually look at those use cases right away because I know the cycle is just going to be longer. >> So to your point, the FDA, for about four years now, has understood that the process that has been given to them by their board of directors, otherwise known as Congress, is broken. And so they've been very actively seeking new models of regulation and what's really forcing their hand is regulation of devices and software because, in many cases, there are black box aspects of that and there's a black box aspect to machine learning. Historically, Intel and others are making inroads into providing some sort of traceability and transparency into what happens in that black box rather than say, overall we get better results but once in a while we kill somebody. Right? So there is progress being made on that front. And there's a concept that I like to use. Everyone knows Ray Kurzweil's book The Singularity Is Near? Well, I like to think that diadarity is near. And the diadarity is where you have human transparency into what goes on in the black box and so maybe Bob, you want to speak a little bit about... You mentioned that, in a prior discussion, that there's some work going on at Intel there. >> Yeah, absolutely. So we're working with a number of groups to really build tools that allow us... In fact Naveen probably can talk in even more detail than I can, but there are tools that allow us to actually interrogate machine learning and deep learning systems to understand, not only how they respond to a wide variety of situations but also where are there biases? I mean, one of the things that's shocking is that if you look at the clinical studies that our drug safety rules are based on, 50 year old white guys are the peak of that distribution, which I don't see any problem with that, but some of you out there might not like that if you're taking a drug. So yeah, we want to understand what are the biases in the data, right? And so, there's some new technologies. There's actually some very interesting data-generative technologies. And this is something I'm also curious what Naveen has to say about, that you can generate from small sets of observed data, much broader sets of varied data that help probe and fill in your training for some of these systems that are very data dependent. So that takes us to a place where we're going to start to see deep learning systems generating data to train other deep learning systems. And they start to sort of go back and forth and you start to have some very nice ways to, at least, expose the weakness of these underlying technologies. >> And that feeds back to your question about regulatory oversight of this. And there's the fascinating, but little known origin of why very few women are in clinical studies. Thalidomide causes birth defects. So rather than say pregnant women can't be enrolled in drug trials, they said any woman who is at risk of getting pregnant cannot be enrolled. 
So there was actually a scientifically meritorious argument back in the day when they really didn't know what was going to happen post-thalidomide. So it turns out that the adverse, unintended consequence of that decision was we don't have data on women, and we know with certain drugs, like Xanax, that the metabolism is so much slower that the typical dosing of Xanax for women should be less than half of that for men. And a lot of women have had very serious adverse effects by virtue of the fact that they weren't studied. So the point I want to illustrate with that is that regulatory cycles... So people have known for a long time that that was a bad way of doing regulation. It should be changed. It's only recently getting changed in any meaningful way. So regulatory cycles and legislative cycles are incredibly slow. The rate of growth in technology is exponential. And so there's an impedance mismatch between the cycle time for regulation and the cycle time for innovation. And what we need to do... I'm working with the FDA. I've done four workshops with them on this very issue. Is that they recognize that they need to completely revitalize their process. They're very interested in doing it. They're not resisting it. People think, oh, they're bad, the FDA, they're resisting. Trust me, there's nobody on the planet who wants to revise these review processes more than the FDA itself. And so they're looking at models and what I recommended is global cloud sourcing, and the FDA could shift from a regulatory role to one of doing two things, assuring the people who do their reviews are competent, and assuring that their conflicts of interest are managed, because if you don't have a conflict of interest in this very interconnected space, you probably don't know enough to be a reviewer. So there has to be a way to manage the conflict of interest and I think those are some of the key points that the FDA is wrestling with because there's type one and type two errors. If you underregulate, you end up with another thalidomide and people born without fingers. If you overregulate, you prevent life saving drugs from coming to market. So striking that balance across all these different technologies is extraordinarily difficult. If it were easy, the FDA would've done it four years ago. It's very complicated. >> Jumping on that question, so all three of you are in some ways entrepreneurs, right? Within your organization or started companies. And I think it would be good to talk a little bit about the business opportunity here, where there's a huge ecosystem in health care, different segments, biotech, pharma, insurance payers, etc. Where do you see the ripe opportunity, or the industry ready to really take this on and to make AI the competitive advantage?
Because if you look at what's going to be here in three years, we're not going to be using those big data analytics models for sepsis that we are deploying today, because we're just going to be getting a tiny aliquot of blood, looking for the DNA or RNA of any potential infection, and we won't have to infer that there's a bacterial infection from all these other ancillary, secondary phenomena. We'll see if the DNA's in the blood. So things are changing so fast that the opportunities that people need to look for are the generalizable and sustainable kinds of wins that are going to lead to a revenue cycle that justifies a venture capital world investing. So there's a lot of interesting opportunities in the space. But I think some of the biggest opportunities relate to what Bob has talked about in bringing many different disparate data sources together and really looking for things that are not comprehensible in the human brain or in traditional analytic models. >> I think we also got to look a little bit beyond direct care. We're talking about policy and how we set up standards, these kinds of things. That's one area. That's going to drive innovation forward. I completely agree with that. Direct care is one piece. How do we scale out many of the knowledge kinds of things that are embedded into one person's head and get them out to the world, democratize that. Then there's also development. The underlying technologies of medicine, right? Pharmaceuticals. The traditional way that pharmaceuticals are developed is actually kind of funny, right? A lot of it was started just by chance. Penicillin, a very famous story right? It's not that different today unfortunately, right? It's conceptually very similar. Now we've got more science behind it. We talk about domains and interactions, these kinds of things but fundamentally, the problem is what we in computer science call NP-hard, it's too difficult to model. You can't solve it analytically. And this is true for all these kinds of natural sorts of problems by the way. And so there's a whole field around this, molecular dynamics and modeling these sorts of things, that are actually being driven forward by these AI techniques. Because it turns out, our brain doesn't do magic. It actually doesn't solve these problems. It approximates them very well. And experience allows you to approximate them better and better. Actually, it goes a little bit to what you were saying before. It's like simulations and forming your own networks and training off each other. There are these emerging dynamics. You can simulate steps of physics. And you come up with a system that's much too complicated to ever solve. Three pool balls on a table is one such system. It seems pretty simple. You know how to model that, but it actually turns out you can't predict where a ball's going to be once you inject some energy into that table. So something that simple is already too complex. So neural network techniques actually allow us to start making those tractable. These NP-hard problems. And things like molecular dynamics and actually understanding how different medications and genetics will interact with each other is something we're seeing today. And so I think there's a huge opportunity there.
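
A tiny numeric illustration of Naveen's pool-ball point, using the logistic map as a stand-in for the billiards dynamics: two runs that start almost identically drift apart within a few dozen steps, which is why such systems resist exact analytic prediction.

```python
# Sensitivity to initial conditions: two nearly identical starting states
# diverge quickly under a simple chaotic map (r = 3.9 logistic map).
def trajectory(x0, steps=40, r=3.9):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.500000)
b = trajectory(0.500001)  # differs only in the sixth decimal place
for step in (0, 10, 20, 30, 40):
    print(step, abs(a[step] - b[step]))
```
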
We've actually worked with customers in this space. And I'm seeing it. Like Roche is acquiring a few different companies in this space. They really want to drive it forward, using big data to drive drug development. It's kind of counterintuitive. I never would've thought it had I not seen it myself. >> And there's a big related challenge. Because in personalized medicine, there's smaller and smaller cohorts of people who will benefit from a drug that still takes two billion dollars on average to develop. That is unsustainable. So there's an economic imperative of overcoming the cost and the cycle time for drug development. >> I want to take a go at this question a little bit differently, thinking about not so much where are the industry segments that can benefit from AI, but what are the kinds of applications that I think are most impactful. So if this is what a skilled surgeon needs to know at a particular time to care properly for a patient, this is where most, this area here, is where most surgeons are. They are close to the maximum knowledge and ability to assimilate as they can be. So it's possible to build complex AI that can pick up on that one little thing and move them up to here. But it's not a gigantic accelerator, amplifier of their capability. But think about other actors in health care. I mentioned a couple of them earlier. Who do you think the least trained actor in health care is? >> John: Patients. >> Yes, the patients. The patients are really very poorly trained, including me. I'm abysmal at figuring out who to call and where to go. >> Naveen: You know as much as the doctor, right? (laughing) >> Yeah, that's right. >> My doctor friends always hate that. Know your diagnosis, right? >> Yeah, Dr. Google knows. So the opportunities that I see that are really, really exciting are when you take an AI agent, like sometimes I like to call it a contextually intelligent agent, or a CIA, and apply it to a problem where a patient has a complex future ahead of them that they need help navigating. And you use the AI to help them work through it. Post-operative: you've got PT. You've got drugs. You've got to be looking for side effects. An agent can actually help you navigate. It's like your own personal GPS for health care. So it's giving you the information that you need about you for your care. That's my definition of Precision Medicine. And it can include genomics, of course. But it's much bigger. It's that broader picture and I think that a sort of agent way of thinking about things and filling in the gaps where there's less training and more opportunity, is very exciting. >> Great start up idea right there by the way. >> Oh yes, right. We'll meet you all out back for the next start up. >> I had a conversation with the head of the American Association of Medical Specialties just a couple of days ago. And what she was saying, and I'm aware of this phenomenon, but all of the medical specialists are saying, you're killing us with these stupid board recertification trivia tests that you're giving us. So if you're a cardiologist, you have to remember something that happens in one in 10 million people, right? And they're saying that's irrelevant now, because we've got advanced decision support coming. We have these kinds of analytics coming. Precisely what you're saying. So it's human augmentation of decision support that is coming at blazing speed towards health care. So in that context, it's much more important that you have a basic foundation, you know how to think, you know how to learn, and you know where to look. So we're going to be human-augmented learning systems much more so than in the past. And so the whole recertification process is being revised right now. (inaudible audience member speaking) Speak up, yeah.
(person speaking) >> What makes it fathomable is that you can-- (audience member interjects inaudibly) >> Sure. She was saying that our brain is really complex and large and even our brains don't know how our brains work, so... are there ways to-- >> What hope do we have kind of thing? (laughter) >> It's a metaphysical question. >> It circles all the way down, exactly. It's a great quote. I mean basically, you can decompose every system. Every complicated system can be decomposed into simpler, emergent properties. You lose something perhaps with each of those, but you get enough to actually understand most of the behavior. And that's really how we understand the world. And that's what we've learned in the last few years that neural network techniques can allow us to do. And that's why our brain can understand our brain. (laughing) >> Yeah, I'd recommend reading Chris Farley's last book because he addresses that issue in there very elegantly. >> Yeah we're seeing some really interesting technologies emerging right now where neural network systems are actually connecting other neural network systems in networks. You can see some very compelling behavior because one of the ways I like to distinguish AI from traditional analytics is we used to have question-answering systems. I used to query a database and create a report to find out how many widgets I sold. Then I started using regression or machine learning to classify complex situations from this is one of these and that's one of those. And then as we've moved more recently, we've got these AI-like capabilities like being able to recognize that there's a kitty in the photograph. But if you think about it, if I were to show you a photograph that happened to have a cat in it, and I said, what's the answer, you'd look at me like, what are you talking about? I have to know the question. So where we're cresting with these connected sets of neural systems, and with AI in general, is that the systems are starting to be able to, from the context, understand what the question is. Why would I be asking about this picture? I'm a marketing guy, and I'm curious about what Legos are in the thing or what kind of cat it is. So it's being able to ask a question, and then take these question-answering systems, and actually apply them so that's this ability to understand context and ask questions that we're starting to see emerge from these more complex hierarchical neural systems. >> There's a person dying to ask a question. >> Sorry. You have hit on several different topics that all coalesce together. You mentioned personalized models. You mentioned AI agents that could help you as you're going through a transitionary period. You mentioned data sources, especially across long time periods. Who today has access to enough data to make meaningful progress on that, not just when you're dealing with an issue, but day-to-day improvement of your life and your health? >> Go ahead, great question. >> That was a great question. And I don't think we have a good answer to it. (laughter) I'm sure John does. Well, I think every large healthcare organization and various healthcare consortiums are working very hard to achieve that goal. The problem remains in creating semantic interoperability. So I spent a lot of my career working on semantic interoperability.
And the problem is that if you don't have well-defined, or self-defined data, and if you don't have well-defined and documented metadata, and you start operating on it, it's real easy to reach false conclusions and I can give you a classic example. It's well known, with hundreds of studies looking at when you give an antibiotic before surgery and how effective it is in preventing a post-op infection. Simple question, right? So most of the literature done prospectively was done in institutions where they had small sample sizes. So if you pool that, you get a little bit more noise, but you get a more confirming answer. What was done at a very large, not my own, but a very large institution... I won't name them for obvious reasons, but they pooled lots of data from lots of different hospitals, where the data definitions and the metadata were different. Two examples. When did they indicate the antibiotic was given? Was it when it was ordered, dispensed from the pharmacy, delivered to the floor, brought to the bedside, put in the IV, or when the IV starts flowing? Different hospitals used a different metric of when it started. When did surgery occur? When they were wheeled into the OR, when they were prepped and draped, when the first incision occurred? All different. And they concluded quite dramatically that it didn't matter when you gave the pre-op antibiotic in terms of whether or not you get a post-op infection. And everybody who was intimate with the prior studies just completely ignored and discounted that study. It was wrong. And it was wrong because of the lack of commonality and the normalization of data definitions and metadata definitions. So because of that, this problem is much more challenging than you would think. If it were so easy as to put all these data together and operate on it, normalize and operate on it, we would've done that a long time ago. It's... Semantic interoperability remains a big problem and we have a lot of heavy lifting ahead of us. I'm working with the Global Alliance for Genomics and Health, for example. There's like 30 different major ontologies for how you represent genetic information. And different institutions are using different ones in different ways in different versions over different periods of time. That's a mess. >> Are all those issues applicable when you're talking about a personalized data set versus a population? >> Well, so N of 1 studies and single-subject research is an emerging field of statistics. So there's some really interesting new models like stepped-wedge analytics for doing that on small sample sizes, recruiting people asynchronously. There's single-subject research statistics. You compare yourself with yourself at a different point in time, in a different context. So there are emerging statistics to do that and as long as you use the same sensor, you won't have a problem. But people are changing their remote sensors and you're getting different data. It's measured in different ways with different sensors, with different normalization and different calibration. So yes. It even persists in the N of 1 environment.
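
A small sketch of the normalization problem John describes, assuming each hospital logs several timestamps and disagrees about which one means "antibiotic given"; before pooling, every record has to be mapped onto one shared definition. The field names and timestamps are invented.

```python
from datetime import datetime

# Pick one shared definition (here: when the IV starts flowing) and normalize
# every site's record to it before pooling; otherwise the pooled analysis
# silently mixes incompatible measurements.
CANONICAL_EVENT = "iv_start"

hospital_records = [
    {"site": "A", "events": {"ordered": "2017-03-01T06:10", "iv_start": "2017-03-01T07:45"}},
    {"site": "B", "events": {"dispensed": "2017-03-01T06:40", "iv_start": "2017-03-01T07:05"}},
    {"site": "C", "events": {"ordered": "2017-03-01T05:55"}},  # cannot be normalized
]

def antibiotic_time(record):
    raw = record["events"].get(CANONICAL_EVENT)
    return datetime.fromisoformat(raw) if raw else None  # exclude rather than guess

print([(r["site"], antibiotic_time(r)) for r in hospital_records])
```
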
>> Yeah, you have to get started with a large N that you can apply to the N of 1. I'm actually going to attack your question from a different perspective. So who has the data? The millions of examples to train a deep learning system from scratch. It's a very limited set right now. Technology such as the Collaborative Cancer Cloud and the data exchange are definitely impacting that and creating larger and larger sets of critical mass. And again, notwithstanding the very challenging semantic interoperability questions. But there's another opportunity. Kay asked about what's changed recently. One of the things that's changed in deep learning is that we now have modules that have been trained on massive data sets that are actually very smart at certain kinds of problems. So, for instance, you can go online and find deep learning systems that actually can recognize, better than humans, whether there's a cat, dog, motorcycle, house, in a photograph. >> From Intel, open source. >> Yes, from Intel, open source. So here's what happens next. Because most of that deep learning system is very expressive. That combinatorial mixture of features that Naveen was talking about, when you have all these layers, there's a lot of features there. They're actually very general to images, not just finding cats, dogs, trees. So what happens is you can do something called transfer learning, where you take a small or modest data set and actually reoptimize the model for your specific problem very, very quickly. And so we're starting to see a place where you can... On one end of the spectrum, we're getting access to the computing capabilities and the data to build these incredibly expressive deep learning systems. And over here on the right, we're able to start using those deep learning systems to solve custom versions of problems. Just last weekend or two weekends ago, in 20 minutes, I was able to take one of those general systems and create one that could recognize all different kinds of flowers. Very subtle distinctions, that I would never be able to know on my own. But I happen to be able to get the data set and literally, it took 20 minutes and I have this vision system that I could now use for a specific problem. I think that's incredibly profound and I think we're going to see this spectrum of wherever you are in your ability to get data and to define problems and to put hardware in place to see really neat customizations and a proliferation of applications of this kind of technology.
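
A hedged sketch of the transfer-learning recipe Bob describes, assuming PyTorch and a recent torchvision (with the weights API) are installed and that you supply your own labeled images; the flower-class count and data loader are placeholders, not Intel's or Bob's actual setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network already trained on millions of general images.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the general-purpose feature layers...
for param in model.parameters():
    param.requires_grad = False

# ...and swap in a fresh final layer for the new, small problem.
num_flower_classes = 5  # placeholder: whatever your own data set defines
model.fc = nn.Linear(model.fc.in_features, num_flower_classes)

# Only the new layer's weights get optimized on the modest data set.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training-loop sketch; images and labels would come from your own DataLoader.
# for images, labels in flower_loader:
#     optimizer.zero_grad()
#     loss = loss_fn(model(images), labels)
#     loss.backward()
#     optimizer.step()
```
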
I'm particularly interested in, if you look at things like Amazon Echo or Siri, or the other voice recognition systems that are based on AI, they've just become incredibly accurate, and I'm interested in specifics about how I might use technology like that in medicine. So where would I find a medical nomenclature and perhaps some reference to a back end that works that way? And the second thing is, what specifically is Intel doing, or making available? You mentioned some open source stuff on cats and dogs and stuff, but I'm the doc, so I'm looking at the medical side of that. What are you guys providing that would allow us who are kind of geeks on the software side, as well as being docs, to experiment a little bit more thoroughly with AI technology? Google has a free AI toolkit. Several other people have come out with free AI toolkits in order to accelerate that. There's special hardware now with graphics and different processors hitting amazing speeds. And so I was wondering, where do I go in Intel to find some of those tools and perhaps learn a bit about the fantastic work that you guys are already doing at Kaiser? >> Let me take that first part and then we'll be able to talk about the MD part. So in terms of technology, this is what's extremely exciting now about what Intel is focusing on. We're providing those pieces. So you can actually assemble and build the application. How you build that application specific for MDs and the use cases is up to you, or the one who's building out the application. But we're going to power that technology from multiple perspectives. So Intel is already the main force behind the data center, right? Cloud computing, all this is already Intel. We're making that extremely amenable to AI and setting the standard for AI in the future, and we can do that through a number of different mechanisms. For somebody who wants to develop an application quickly, we have hosted solutions. Intel Nervana is kind of the brand for these kinds of things. Hosted solutions will get you going very quickly. Once you get to a certain level of scale, where costs start making more sense, things can be bought on premise. We're supplying that. We're also supplying software that makes that transition essentially free. Then there's taking those solutions that you develop in the cloud, or develop in the data center, and actually deploying them on device. You want to write something on your smartphone or PC or whatever. We're actually providing those hooks as well, so we want to make it very easy for developers to take these pieces and actually build solutions out of them quickly, so you probably don't even care what hardware it's running on. You're like, here's my data set, this is what I want to do. Train it, make it work. Go fast. Make my developers efficient. That's all you care about, right? And that's what we're doing. We're taking it from that point of how do we best do that? We're going to provide those technologies. In the next couple of years, there's going to be a lot of new stuff coming from Intel. >> Do you want to talk about AI Academy as well? >> Yeah, that's a great segue there. In addition to this, we have an entire set of tutorials and other online resources and things we're going to be bringing into the academic world for people to get going quickly. So that's not just enabling them on our tools, but also just general concepts. What is a neural network? How does it work? How does it train?
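To give a flavor of the sort of material such an introductory class covers, here is a tiny, self-contained Python sketch of a neural network being trained by gradient descent. It is purely illustrative: the toy XOR data, layer sizes, learning rate, and iteration count are assumptions, not anything from Intel's curriculum.

```python
import numpy as np

# A tiny two-layer network learning XOR with plain gradient descent (backpropagation).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # hidden layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # output layer parameters
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: each layer is a weighted sum followed by a nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the prediction error back through the layers.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge every weight a little to reduce the error.
    W2 -= lr * (h.T @ err_out)
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * (X.T @ err_h)
    b1 -= lr * err_h.sum(axis=0)

print(out.round(2))  # typically ends up close to [0, 1, 1, 0]
```

Training is nothing more mysterious than that loop: make a prediction, measure the error, and adjust the weights in the direction that reduces it, many times over.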
All of these things are available now and we've made a nice, digestible class format that you can actually go and play with. >> Let me give a couple of quick answers in addition to the great answers already. So you're asking why can't we use medical terminology and do what Alexa does? Well, you may not be aware of this, but Andrew Ng, who was the AI guy at Google and who was recruited by Baidu, they have a medical chat bot in China today. I don't speak Chinese. I haven't been able to use it yet. There are two similar initiatives in this country that I know of. There's probably a dozen more in stealth mode. But Lumiata and Health Cap are doing chat bots for health care today, using medical terminology. You have the compound problem of semantic normalization within a language, compounded across languages. I've done a lot of work with an international organization called SNOMED, which translates medical terminology. So, just so you're aware of that. We can talk offline if you want, because I'm pretty deep into the semantic space. >> Go google Intel Nervana and you'll see all the websites there. It's intel.com/ai or nervanasys.com. >> Okay, great. Well this has been fantastic. I want to, first of all, thank all the people here for coming and asking great questions. I also want to thank our fantastic panelists today. (applause) >> Thanks, everyone. >> Thank you. >> And lastly, I just want to share one bit of information. We will have more discussions on AI next Tuesday at 9:30 AM. Diane Bryant, who is our general manager of the Data Center Group, will be here to do a keynote. So I hope you all get to join that. Thanks for coming. (applause) (light electronic music)

Published Date : Mar 12 2017

AI for Good Panel - Autonomous World | SXSW 2017


 

>> Welcome everyone. Thank you for coming to the Intel AI Lounge and joining us here for this Autonomous World event. My name is Jack. I'm the chief architect of our autonomous driving solutions at Intel and I'm very happy to be here and to be joined by an esteemed panel of colleagues who are going to, I hope, engage you all in a lively dialogue and discussion. There will be time for questions as well, so keep your questions in mind. Jot them down so you can ask them to us later. So first, let me introduce the panel. Next to me we have Michelle, who's the co-founder and CEO of FindMine. She just did an interview here a little while ago. FindMine is a company that provides a technology platform for retailers and brands that uses artificial intelligence as the heart of the experiences that her company's technology provides. Joe from Intel is the head of partnerships and acquisitions for artificial intelligence and software technologies. He participated in the recent acquisition of Movidius, a computer vision company that Intel recently acquired, and is involved in a lot of smart city activities as well. And then finally, Sarush, who is a data scientist by training but now leads JDA Labs, which is researching emerging technologies and their application in the supply chain worldwide. So at the end of the day, the Internet of Things and artificial intelligence really promise to improve our lives in quite incredible ways and change the way that we live and work. Oftentimes the first thing that we think about when we think about AI is Skynet, but we at Intel believe in AI for good and that there's a lot that can happen to improve the way people live, work, and enjoy life. So as things join the Internet, as things become connected, smart, and automated, artificial intelligence is really going to be at the heart of those new experiences. So as I said, my role is the architect for autonomous driving. It's a commonplace example when people think about artificial intelligence, because what we're trying to do is replace a human brain with a machine brain, which means we need to endow that machine with intelligent thoughts, contexts, experiences. All of these things that sort of make us human. So computer vision is the space, obviously, with cameras in your car, that people often think about, but it's actually more complicated than that. How many of us have been in a situation on a two-lane road, maybe there's a car coming towards us, there's a road off to the right, and you sort of sense, "You know what? That car might turn in front of me." There's no signal. There's no real physical cue, but just something about what that driver's doing, where they're looking, tells us. So what do we do? We take our foot off the accelerator. We maybe hover it over the brake, just in case, right? But that's intelligence that we take for granted through years and years and years of driving experience that tells us something interesting is happening there. And so that's the challenge that we face in terms of how to bring that level of human intelligence into machines to make our lives better and richer. So enough about automated vehicles though, let's talk to our panelists about some of the areas in which they have expertise. So first for Michelle, I'll ask... Many of us probably buy stuff online every day, every week, every hour, hourly delivery now. So a lot has been written about the death of traditional retail experiences.
How will artificial intelligence and the technology that your company has rejuvenate that retail experience, whether it be online or in the traditional brick and mortar store? >> Yeah, excuse me. So one of the things that I think is a common misconception. You hear about the death of the brick and mortar store, the growth of e-commerce. It's really that e-commerce is beating brick and mortar in growth only, and still over 90% of the world's commerce is done in physical brick and mortar stores. So e-commerce, while it has the growth, has a really long way to go, and I think one of the things that's going to be really hard to replace is the very human element of interaction and connection that you get by going to a store. So just because a robot named Pepper comes up to you and asks you some questions, they might get you the answer you need faster and maybe more efficiently, but I think as humans we crave interaction, and shopping for certain products especially is an experience better enjoyed in person with other people, whether that's an associate in the store or people you come with to the store to enjoy that experience with you. So I think artificial intelligence can help it be a more frictionless experience, whether you're in store or online, to get you from point A to buying the thing you need faster, but I don't think that it's going to ever completely replace the joy that we get by physically going out into the world and interacting with other people to buy products. >> You said something really profound. You said that the real revolution for artificial intelligence in retail will be invisible. What did you mean by that? >> Yeah, so right now I think that most of the artificial intelligence that's being applied in the retail space is actually not something that shoppers like you and I see when we're on a website or when we're in the store. It's actually happening behind the scenes. It's happening to dynamically change the webpage to show you different stuff. It's happening further up the supply chain, right? With how the products are getting manufactured, put together, packaged, shipped, delivered to you, and that efficiency is just helping retailers be smarter and more effective with their budgets. And so, as they can save money in the supply chain, as they can sell more product with less work, they can reinvest in experience, they can reinvest in the brand, they can reinvest in the quality of the products, so we might start noticing those things change, but you won't actually know that that has anything to do with artificial intelligence, because it's not always a robot that's rolling up to you in an aisle. >> So you mentioned the supply chain. That's something that we hear about a lot, but frankly for most of us, I think it's very hard to understand what exactly that means, so could you educate us a bit on what exactly is the supply chain and how is artificial intelligence being applied to improve it? >> Sure, sure. So for a lot of us, supply chain is the term that we picked up when we went to school or we read about it every so often, but we're not that far away from it. It is in fact a key part of what Michelle calls the invisible part of one's experience. So when you go to a store and you're buying a pair of shoes or you're picking up a box of cereal, how often do we think about, "How did it ever make its way here?" Where did the constituent components come from? They probably came from multiple countries and so they had to be manufactured. They had to be assembled in these plants.
They had to then be moved, either through an ocean vessel or through trucks. They probably have gone through multiple warehouses and distribution centers and then finally into the store. And what do we see? We want to make sure that when I go to pick up my favorite brand of cereal, it better be there. And so, one of the things where AI is going to help, and we're doing a lot of active work in this, is in the notion of the self-learning supply chain. And what that means is really bringing in these various assets and actors of the supply chain. First of all, through IOT and others, generating the data, obviously connecting them, and through AI driving the intelligence, so that I can dynamically figure out the fact that the ocean vessel that left China on its way to Long Beach has been delayed by 24 hours. What does that mean when you go to a Foot Locker to buy your new pair of shoes? Can I come up with alternate sourcing decisions? So it's not just predicting. It's prescribing and recommending as well. So behind the scenes, bringing in a lot of the, generating a lot of the data, connecting a lot of these actors and then really deriving the smarts. That's what the self-learning supply chain is all about. >> Are supply chains always international or can they be local as well? >> Definitely local as well. I think what we've seen over the last decades, it's kind of gotten more and more global, but a lot of the supply chain can really just be within the store as well. You'd be surprised at how often retailers do not know where their product is. Even, is it in the front of the store? Is it in the back of the store? Is it in the fitting room? Even that local information is not really available. So to have sensors to discover where things are and to really provide that efficiency, which right now doesn't exist, is a key part of what we're doing. >> So Joe, as you look at companies out there to partner or potentially acquire, do you tend to see technologies that are very domain specific for retail or supply chain, or do you see technologies that could bridge multiple different domains in terms of the experiences we could enjoy? >> Yeah, definitely. So both. A lot of infant technologies start out in very niche use cases, but then there are technologies that are pervasive across multiple geographies and multiple markets. So, smart cities is a good way to look at that. So let's level set really quick on smart cities and how we think about that. I have a little sheet here to help me. Alright, so, if anybody here has played Sim City before, you have your little city that's a real world that sits here, okay? So this is reality and you have little buildings and cars and they all travel around and you have people walking around with cell phones. And what's happening is as we develop smart cities, we're putting sensors everywhere. We're putting them around utilities, energy, water. They're in our phones. We have cameras and we have audio sensors in our phones. We're placing these on light poles, which are existing, sustained power points around the city. So we have all these different sensors and they're not just cameras and microphones, but they're particulate sensors. They're able to do environmental monitoring and things like that. And so, what we have is we have this physical world with all these sensors here.
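As an aside, the delayed-vessel scenario Sarush walked through a moment ago can be boiled down to a small prescriptive rule, sketched below in Python. Every name, date, and number is a hypothetical illustration (the shipment record, the Foot Locker store, the alternate distribution centers), and a real self-learning supply chain would learn such decisions from connected data feeds rather than hard-code them; the sketch only shows the shape of "don't just predict the delay, recommend what to do about it."

```python
from datetime import datetime, timedelta

today = datetime(2017, 3, 18)  # fixed "current" date so the example is reproducible

# Hypothetical records standing in for live carrier, inventory, and store feeds.
shipment = {
    "sku": "SHOE-123",
    "destination_dc": "Long Beach DC",
    "latest_eta": datetime(2017, 3, 23),   # carrier feed shows a 24-hour slip
}
store_need = {"store": "Foot Locker #88", "units": 24, "need_by": datetime(2017, 3, 22)}
alternates = [
    {"source": "Memphis DC", "on_hand": 40, "transit_days": 2, "cost_per_unit": 3.10},
    {"source": "Reno DC",    "on_hand": 15, "transit_days": 1, "cost_per_unit": 4.25},
]

def recommend(shipment, need, alternates):
    """Prescribe, not just predict: if the primary shipment slips past the
    store's need-by date, pick the cheapest alternate that can still cover it."""
    if shipment["latest_eta"] <= need["need_by"]:
        return "no action: primary shipment arrives in time"
    feasible = [a for a in alternates
                if a["on_hand"] >= need["units"]
                and today + timedelta(days=a["transit_days"]) <= need["need_by"]]
    if not feasible:
        return "alert a planner: no alternate source can cover the demand"
    best = min(feasible, key=lambda a: a["cost_per_unit"])
    return (f"reroute {need['units']} units of {shipment['sku']} "
            f"from {best['source']} to {need['store']}")

print(recommend(shipment, store_need, alternates))
```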
And then what we have is we've created basically this virtual world that has a great memory, because it has all the data from all the sensors, and those sensors really act as ties, if you think of it like a quilt, tying a quilt together. You bring it down together, and everywhere you have a stitch, you're stitching that virtual world on top of the physical world, and that just enables incredible amounts of innovation and creation for developers, for entrepreneurs, to do whatever they want to do to create and solve specific problems. So what really makes that possible is communications, connectivity. So that's where 5G comes in. So with 5G it's not just a faster form of connectivity. It's new infrastructure. It's new communication. It includes multiple types of communication and connectivity. And what it allows is that all those little sensors can talk to each other again. So the camera on the light pole can talk to the vehicle driving by or the sensor on the light pole. And so you start to connect everything, and that's really where artificial intelligence can now come in and sense what's going on. It can then reason, which is neat, to have a computer or some sort of algorithm that actually reasons based on a situation that's happening in real time. And it acts on that, but then you can iterate on that or you can adapt that in the future. So if we think of an actual use case, we'll think of a camera on a light post that observes an accident. Well, it's programmed to automatically notify emergency services that there's been an accident. But it knows the difference between a fender bender and an actual major crash where we need to send an ambulance or maybe multiple firetrucks. And then you can create iterations and it learns to become smarter. Let's say there was a vehicle that was in the accident that had a little yellow placard on it that said hazard. You're going to want to send different types of emergency services out there. So you can iterate on what it actually does, and that's a fantastic world to be in, and that's where I see AI really playing. >> That's a great example of what it's all about in terms of making things smart, connected, and autonomous. So Michelle, as somebody who has founded a company in this space with technology that's trying to bring some of these experiences to market, there may be folks in the audience who have aspirations to do the same. So what have you learned over the course of starting your company and developing the technology that you're now deploying to market? >> Yeah, I think because AI is such a buzzword. You can get a dot AI domain now; that doesn't mean that you should use it for everything. Maybe 7, 10, 15 years ago... These trends have happened before. In the late 90s, it was technology, and there were technology companies and they sat over here and there was everybody else. Well, that's not true anymore. Every company uses technology. Then fast forward a little bit, and social media was a thing. Social media was these companies over here and then there was everybody else, and now every company needs to use social media, or actually maybe not. Maybe it's a really bad idea for you to spend a ton of money on social media and you have to make that choice for yourself. So the same thing is true with artificial intelligence and what I tell... I did a panel on AI for venture capitalists last week, trying to help them figure out when to invest and how to evaluate and all that kind of stuff.
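To make that light-pole use case from a couple of paragraphs back a little more concrete, here is a small, purely hypothetical sketch of the sense-reason-act loop Joe describes. The severity rules, the thresholds, and the notify stub are invented for illustration; a deployed system would use a trained model on camera and audio data and a real dispatch interface rather than a handful of if-statements.

```python
# Sense: an event arrives from a light-pole sensor (values are made up).
event = {"location": "5th & Main", "impact_g": 5.2,
         "airbag_deployed": True, "hazmat_placard": False}

def classify_severity(event):
    """Reason: toy rules standing in for a trained model's judgment."""
    if event["hazmat_placard"]:
        return "hazmat"
    if event["impact_g"] > 4.0 or event["airbag_deployed"]:
        return "major"
    return "fender_bender"

# Act: different severities get different responders, and the mapping itself
# is something the system can iterate on as it learns from outcomes.
DISPATCH = {
    "fender_bender": ["police"],
    "major": ["police", "ambulance"],
    "hazmat": ["police", "ambulance", "fire", "hazmat_team"],
}

def notify(service, event):
    print(f"notify {service}: accident reported at {event['location']}")

for service in DISPATCH[classify_severity(event)]:
    notify(service, event)
```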
And what I would tell other aspiring entrepreneurs is, "AI is a means to an end. It's not an end in itself." So unless you're a Ph.D. in machine learning and you want to start an AI-as-a-service business, you're probably not going to start an AI-only company. You're going to start a company for a specific purpose, to solve a problem, and you're going to use AI as a means to an end, maybe, if it makes sense to get there, to make it more efficient and all that stuff. But if you wouldn't get up every day for ten years to do this business that's going to solve whatever problem you're solving, or if you wouldn't invest in it if AI didn't exist, then adding dot AI at the end of a domain is not going to work. So don't think that that will help you make a better business. >> That's great advice. Thank you. Sarush, as you talked about the automation then of the supply chain, what about people? What about the workers whose jobs may be lost or displaced because of the introduction of this automation? What's your perspective on that? >> Well, that's a great question. It's one that I'm asked quite a bit. So if you think about the supply chain, with a lot of the manufacturing plants, with a lot of the distribution centers, a lot of the transportation, not only are we talking about driverless cars as in cars that you and I own, but we're talking about driverless delivery vehicles. We're talking about drones, and all of these on the surface appear like they're going to displace human beings. What humans used to do, now machines will do and potentially do better. So what are the implications for human beings? I'm asked that question quite a bit, especially from our customers, and my general perception on this is that I'm actually cautiously optimistic that human beings will continue to do things that are strategic. Human beings will continue to do things that are creative, and human beings will probably continue to handle things that are truly catastrophic, which machines simply have not been able to learn because they don't happen very often. One thing that comes to mind is when ATM machines came about several years ago, before my time, that displaced a lot of teller jobs in the banking industry, but the banking industry did not go belly up. They found other things to do. If anything, they offered more services. There were more branches opened than were closed, and if I were to ask any of you now if you would go back and not have 24/7 access to cash, you would probably laugh at me. So the thing is, this is AI for good. I think these things might have a temporary impact in terms of what they will do to labor and to human beings, but I think we as human beings will find bigger, better, different things to do, and that's just in the nature of the human journey. >> Yeah, there's definitely a social acceptance angle to this technology, right? Many of us technologists in the room, it's easier for us to understand what the technology is, how it works, how it was created, but for many of our friends and family, they don't. So there's a social acceptance angle to this. So Michelle, as you see this technology deployed in retail environments, which is a space where almost every person in every country goes, how do you think about making it feel comfortable for people to interact with this kind of technology and not be afraid of the robots or the machines behind the curtain? >> Yeah, that's a great question.
I think that user experience always has to come first, so if you're using AI for AI's sake or for the cool factor, the wow factor, you're already doing it wrong. Again, it needs to solve a problem, and what I tend to tell people who are like, "Oh my God. AI sounds so scary. We can't let this happen," I'm like, "It's already happening and you're already liking it. You just don't know because it's invisible in a lot of ways." So if you can point to those scenarios where AI has already benefited you, and it wasn't scary because it was a friendly kind of interaction, you might not even have realized it was there, versus something that looks so different that it causes, like, panic. I think that's why the driverless car thing is a big deal, because you're so used to seeing, in America at least, someone on the left side of the car in the front seat. And not seeing that is like, woah, crazy. So I think that it starts with the experience and making it an acceptable kind of interface or format that doesn't give you that, "Oh my God. Something is wrong here," kind of feeling. >> Yeah, that's a great answer. In fact, it reminds me there was this really amazing study by a Professor Nicholas Epley that was published in a social psychology journal, and the name of the study was A Mind In A Machine. And what he did was he took subjects and had a fully functional automated vehicle, and then a second identical fully functional automated vehicle, but this one had a name and it had a voice and it had sort of a personality. So it had human anthropomorphic characteristics. And he took people through these two different scenarios, and in both scenarios he, being evil, introduced a crash in the scenario where it was unavoidable. There was no avoiding it. You were going to get into an accident in these cars. And then afterwards, he polled the subjects and said, "Well, what did you feel about that accident? First, what did you feel about the car?" They were more comfortable in the one that had anthropomorphic features. They felt it was safer and they'd be more willing to get into it, which is not terribly surprising, but the kicker was the accident. In the vehicle that had a voice and a name, they actually didn't blame the self-driving car they were in. They blamed the other car. But in the car that didn't have anthropomorphic features, they blamed the machine. They said there's something wrong with that car. So it's one of my favorite studies, because I think it does illustrate that we have to remember the human element to these experiences, and as artificial intelligence begins to replace humans, or some of us even, we need to remember that we are still social beings, and how we interact with other things, whether they be human or non-human, is important. So, Joe, you talk about evaluating companies. Michelle started a company. She's gotten funding. As you go out and look at new companies that are starting up, there's just so much activity, companies that just add dot AI to the name as Michelle said, how do you cut through the noise and try to get to the heart of whether there is any value in a technology that a company's bringing or not? >> Definitely. Well, each company has its unique special sauce, right? And so, just to reiterate what Michelle was talking about, we look for companies that are really good at doing what they do best, whatever that may be, whatever that problem is that they're solving that a customer's willing to pay for, we want to make sure that that company's doing that.
No one wants a company that just has AI in the name. So we look for that, number one, and the other thing we do is, once we establish that we have a need or we're looking at a company based on either talent or intellectual property, we'll go in and we'll have to do a vetting process, and it takes a while. It's a very long process and there's legal involved, but at the end of the day, the most important thing for the start up to remember is to continue doing what they do best and continue to build upon their special sauce and make sure that it's very valuable to their customer. And if someone else wants to look at them for acquisition, so be it, but you need to be maniacally focused on your own customer. That's my two cents. >> I'm thinking again about this concept of embedding human intelligence, but humans have biases, right? And sometimes those biases aren't always good. So how do we as technologists in this industry try to create AI for good and not unintentionally put some of our own human biases into models that we train about what's socially acceptable or not? Anyone have any thoughts on that? >> I actually think that the hype about AI taking over and destroying humanity, it's possible, and I don't want to disagree with Stephen Hawking as he's way smarter than I am. But he kind of recognizes it could go both ways, and so right now, we're in a world where we're still feeding the machine. And so, there's a bunch of different issues that came up with humans feeding the machine with their foibles of racism and hatred and bias, and humans experience shame, which causes them to lash out and want to put somebody else down. And so we saw that with Tay, the Microsoft chatbot. We saw that even with Google's fake news. They're like picking sources now to answer the question in the top box that might be the wrong source. Ads that Google serves often show men high paying jobs, $200,000 a year jobs, and women don't get those same ones. So if you trace that back, it's always coming back to the inputs and the lens that humans are coming at it from. So I actually think that we could be in a way better place after this singularity happens and the machines are smarter than us and they take over and they become our overlords. Because when we think about the future, it's a very common tendency for humans to fill in the blanks of what you don't know in the future with what's true today. And I was talking to you guys at lunch. We were talking about this Harvard psychology professor who wrote a book, and in the book he was talking about how in the 1950s, they were imagining the future in all these sci-fi stories, and they have flying cars and hovercrafts and they're living in space, but the woman still stays at home and everyone's white. So they forgot to extrapolate the social things to paint the picture in, but I think when we're extrapolating into the future where the computers are our overlords, we're painting them with our current reality, which is where humans are kind of terrible (laughs). And maybe computers won't be, and they'll actually create this Utopia for us. So it could be positive. >> That's a very positive view. >> Thanks. >> That's great. So do we have this all figured out? Are there any big challenges that remain in our industries? >> I want to add a little bit more to the learning, because I'm a data scientist by training, and a lot of times I run into folks who think that everything's been figured out. Everything is done. This is so cool.
We're good to go, and one of the things that I share with them is something that I'm sure everyone here can relate to. So if a kindergartner goes to school and starts to spell profanity, that's not because the kid knows anything good or bad. That is what the kid has learned at home. Likewise, if we don't train machines well, their training will in fact be biased, to your point. So one of the things that we have to keep in mind when we talk about this is we have to be careful as well, because we're the ones doing the training. It doesn't automatically know what is good or bad unless that set of data is also fed to it. So I just wanted to kind of add to your... >> Good. Thank you. So why don't we open it up a little bit for questions. Any questions in the audience for our panelists? There's one there, looks like (laughs). Emily, we'll get to you soon. >> I had a question for Sarush based on what you just said about us training, or you all training, these models and teaching them things. So when you deploy these models to the public, with them being machine learning and AI based, is it possible for us to retrain them, and how do you build in redundancies for the public, like throwing off your model and things like that? What are some of the considerations that go into that? >> Well, one thing for sure is training is continuous. So no system should be trained once, deployed, and then forgotten. So that is something that we as AI professionals absolutely need to stay on top of, because trends change as well. What was optimal two years ago is no longer optimal. So that part needs to continue to happen, and this is where the whole IOT space is so important: it will continue to generate relevant, consumable data that these machines can continuously learn from. >> So how do you decide what data, though, is good or bad, as you retrain and evolve that data over time? As a data scientist, how do you do selection on data? >> So, and I want to piggyback on what Michelle said because she's spot on. What is the problem that you're trying to solve? It always starts from there, because we have folks, CIOs, who come in and say, "Oh look. When big data was hot, we started to collect a lot of the data, but nothing has happened." But data by itself doesn't automatically do magic for you, so we ask, "What kind of problem are you trying to solve? Are you trying to figure out what kinds of products to sell? Are you trying to figure out the optimal assortment mix for you? Are you trying to find the shortest path in order to get to your stores?" And then the question is, "Do you now have the right data to solve that problem?" A lot of times we put the science first, and I'm a data scientist by training. I would love to talk about the science, but really, it's the problem first. The data and the science, they come after. >> Thanks, good advice. Any other questions in the audience? Yes, one right up here. (laughing) >> Test, test. Can you hear me? >> Yep. >> So with AI machinery becoming more commonplace and becoming more accessible to developers and visionaries and thinkers alike, rather than being just a giant warehouse of a ton of machines and you get one tiny machine learning, do you foresee more governance coming into play in terms of what AI is allowed to do and the decisions of what training data is allowed to be fed to AIs in terms of influence?
You talk about data determining if AI will become good or bad, but humans being the ones responsible for the training in the first place, obviously, they can use that data to influence as they wish. So, just your thoughts on the governance and the influence. >> Jack: Who wants to take that one? >> I'll take a quick stab at it. So, yes, it's going to be an open discussion. It's going to have to take place, because really, they're just machines. It's machine learning. We teach it. We teach it what to do, how to act. It's just an extension of us, and in fact, I think you had a really great conversation or a statement at lunch where you talked about your product being an extension of a designer because, and we can get into that a little bit, but really, it's just going to do what we tell it to do. So there's definitely going to have to be discussions about what type of data we feed. It's all going to be centered around the use case and what solves that use case. But I imagine that that will be a topic of discussion for a long time, about what we're going to decide to do. >> Jack: Michelle, do you want to comment on this thought of taking a designer's brain and putting it into a model somehow? >> Well, actually, what I wanted to say was that I think that the regulation and the governance around it is going to be self-imposed by the developer and data science community first, because I feel like even experts who have been doing this for a long time don't really have their arms fully around what we're dealing with here. And so to expect our senators, our congressmen and congresswomen, to actually make regulation around it is a lot, because they're not technologists by training. They have a lot of other stuff going on. If the community that's already doing the work doesn't quite know what we're dealing with, then how can we expect them to get there? So I feel like that's going to be a long way off, but I think that the people who touch and feel and deal with models and with data sets and stuff every day are the kind of people who are going to get together and self-regulate for a while, if they're good hearted people. And we talk about AI for good. Some people are bad. Those people won't respect those covenants that we come up with, but I think that's the place we have to start. >> So really you're saying, I think, for data scientists and those of us working in this space, we have a social, ethical, or moral obligation to humanity to ensure that our work is used for good. >> Michelle: No pressure. (laughing) >> None taken. Any other questions? Anything else? >> I just wanted to talk about the second part of what she said. We've been working with a company that builds robots for the store, a store associate if you will. And one of their very interesting findings was that the greatest acceptance of it right now has been at car dealerships, because when someone goes to the car dealer, and we all have had terrible experiences doing that, that's why we try to buy it online, there's just this perception that a robot would be unbiased, that it will give you the information without trying to push you one way or the other. >> The hard sell. >> So there's that perception side of it too. It isn't the governance part of your question, but more the biased perception side of what you said. I think it's fascinating how we're already trained to think that this is going to have an unbiased opinion, whether or not that's true. >> That's fascinating. Very cool. Thank you, Sarush. Any other questions in the audience? No, okay.
Michelle, could I ask, you've got a station over there that talks a little bit more about your company, but for those that haven't seen it yet, could you tell us a little bit about what the experience is like, or how is the shopping experience different for someone that's using your company's technology than what it was before? >> Oh, free advertising. I would love to. No, but actually, I started this company because, as a consumer, going back to the user experience piece, I found myself just constantly frustrated with the user experience of buying products one at a time and then getting zero help. And then here I am having to google how to wear a white blazer to not look like an idiot in the morning when I get dressed with my white blazer that I just bought and was excited about. And it's a really simple thing, which is how do I use the product that I'm buying, and that really simple thing has been just abysmally handled in the retail industry, because the only tools that the retailers have right now are manual. So in fashion, some of our fashion customers, like John Varvatos, is an example we have over there. It's like a designer for high-end men's clothing, and John Varvatos is a person, it's not just the name of the company. He's an actual person and he has a vision for what he wants his products to look like, and the aesthetic and the style, and there's a rockstar vibe, and to get that information into the organization, he would share it verbally, with PDFs, things like that. And then his team of merchandisers would literally go manually and make outfits on one page and then go make an outfit on another page with the same exact items, and then products would go out of stock and they'd go around in circles, and that's a terrible, terrible job. So to the conversation earlier about people losing jobs because of artificial intelligence, I hope people do lose jobs, and I hope they're the terrible jobs that no one wanted to do in the first place, because the merchandisers that we help, like the one from John Varvatos, literally said she was weeks away from quitting, and she got a new boss and said, "If you don't fix this part of my job, I'm out of here." And he had heard about us. He knew about us and so he brought us in to solve that problem. So I don't think it's always a bad thing, because if we can take that rote, boring, repetitive task off of humans' plates, what more amazing things can we do with our brain that is only human and very unique to us, and how much more can we advance ourselves and our society by giving the boring work to a robot or a machine? >> Well, that's fantastic. So Joe, when you talk about Smart Cities, it seems like people have been talking about Smart Cities for decades, and often people cite funding issues, the regulatory environment, or a host of other reasons why these things haven't happened. Do you think we're on the cusp of breaking through there, or what challenges still remain for fulfilling that vision of a smart city? >> I do, I do think we're on the cusp. I think a lot of it has to do, largely actually, with 5G and connectivity, the ability to process and send all this data that needs to be shared across the system. I also think that we're getting closer and more conscientious about security, which is a major issue with IOT, making sure that our end devices or our edge devices, those things out there sensing, are secure. And I think interoperability is something that we need to champion as well and make sure that we basically work together to enable these systems.
So it's very, very difficult to create little, tiny walled gardens of solutions in a smart city. You may corner a certain part of the market, but you're definitely not going to have that ubiquitous benefit to society if you establish those little walled gardens, so those are the areas I think we need to focus on, and I think we are making serious progress in all of them. >> Very good. Michelle, you mentioned earlier that artificial intelligence was all around us in lots of places and things that we do on a daily basis, but we probably don't realize it. Could you share a couple examples? >> Yeah, so I think everything you do online, for the most part, literally anything you might do, whether that's googling something or you go to some article, the ads might be dynamically picked for you using machine learning models that have decided what is appropriate based on you and your treasure trove of data that you have out there, that you're giving up all the time and not really understanding you're giving up. >> The shoes that follow you around the internet, right? >> Yeah, exactly. So that's basically anything online. I'm trying to think of one in the real world. I think that, to your point earlier about the supply chain, just picking a box of cereal off the shelf and taking it home, there's not artificial intelligence in that at all, but there is in the supply chain behind it. So the supply chain behind pretty much everything we do, even in television, like how media gets to us and gets consumed. At some point in the supply chain, there's artificial intelligence playing in there as well. >> So, to stay with the supply chain, where we can now get same-day, even within-the-hour delivery. How do you get better than that? What's coming that's innovative in the supply chain that will be new in the future? >> Well, so that is one example of it, but you'd be surprised at how inefficient the supply chain is, even with all the advances that have already gone in, whether it's physical advances around building modern warehouses and modern manufacturing plants, whether it's through software and others that really help schedule things and optimize things. What has happened in the supply chain, just given how they've evolved, is they're very siloed, so a lot of times the manufacturing plant does things that the distribution folks do not know. The distribution folks do things that the transportation folks don't know, and then the store folks know nothing other than when the truck pulls up; that's the first time they find out about things. So where the great opportunity in my mind is, in the space that I'm in, is really the generation of data, the connection of data, and finally, deriving the smarts that really help us improve efficiency. There's huge opportunity there. And again, we don't know it because it's all invisible to us. >> Good. Let me pause and see if there are any questions in the audience. There, we got one there. >> Thank you. Hi guys, you alright? I just had a question about ethics and the teaching of ethics. As you were saying, we feed the artificial intelligence, whereas in a scenario which is probably a little bit more attuned to automated driving, in a car crash scenario where the choice is between crashing into these two people or those three people, I would be choosing two, whereas the scenario may be it's actually better to just crash the car and kill myself. That thought would never go through my mind, because I'm human. My rule number one is self preservation. So how do we teach the computer this sort of side of it?
Is the AI ethic actually going to be better than our own ethics? How do we start? >> Yeah, that's a great question. I think the opportunity is there, as Michelle was talking about earlier, that maybe when you cross that chasm and you get this new singularity, maybe the AI ethics will be better than human ethics, because the machine will be able to think about greater concerns perhaps other than ourselves. But I think just from my point of view, working in the space of automated vehicles, I think it is going to have to be something that the industry works out together, and societies are different, different geographies, and different countries. We have different ways of looking at the world. Cultures value different things, and so I think technologists in those spaces are going to have to get together and agree amongst the community, from a social contract theory standpoint perhaps, in a way that's going to be acceptable to everyone who lives in that environment. I don't think we can come up with a uniform model that would apply to all spaces, but it's got to be something that we all, as members of a community, can accept. And so yeah, deciding what would be the right thing to do in that situation is not going to be an easy task by any means, which is, I think, one of the reasons why you'll continue to see humans have an important role to play in automated vehicles, so that the human could take over in exactly that kind of scenario, because the machines perhaps aren't quite smart enough to do it, or maybe it's not the smarts or the processing capability. It's maybe that we haven't as technologists and ethicists gotten together long enough to figure out what are those moral and ethical frameworks that we could use to apply to those situations. Any other thoughts? >> Yeah, I wanted to jump in there real quick. Those are absolutely questions that need to be answered, but let's come together and make a solution that needs to have those questions answered. So let's come together first and fix the problems that need to be fixed now, so that we can build out those types of scenarios. We can now put our brainpower to work to decide what to do next. There was a quote, I believe by Andrew Ng of Baidu, and he was saying, concerning deep questions about what's going to happen in the future with AI, are we going to have AI overlords or anything like that? And it's kind of like worrying about overpopulation on Mars. Because maybe we're going to get there someday and maybe we're going to send people there and maybe we're going to establish a human population on Mars, and then maybe it will get too big and then maybe we'll have problems on Mars, but right now we haven't landed on the planet, and I thought that really does a good job of putting in perspective that overall concern about AI taking over. >> So when you think about AI being applied for good, and Michelle, you talked about don't do AI just for AI's sake, have a problem to solve, I'll open it up to any of the three of you, what's a problem in your life or in your work experience that you'd love somebody out here to go solve with AI? >> I have one. Sorry, I wanted to do this real quick. There's roads blocked off and it's raining and I have to walk a mile to find a taxi in the rain right now after this to go home.
I would love for us to have some sort of ability to manage parking spaces and determine when and who can come in to which parts of the city, and when there's a spot downtown, I want my autonomous vehicle to know which one's available and go directly to that spot, and I want it to be queued in a certain manner so that I'm next in line and I know it. And so I would love for someone to go solve that problem. There's been some development on the infrastructure side for that kind of solution. We have a partnership Intel does with GE, and we're putting in sensors; it's an IOT sensor basically. It's called City IQ. It has environmental monitoring, audio, visual sensors, and it allows this type of use case to take place. So I would love to see iterations on that. I would love to see, sorry, there's another one that I'm particular about. Growing up I lived in Southern California, in a housing development right against the hills, and behind the hills there was not a factory, but a bunch of oil derricks back there. I would love to have a sensor that senses the particulate in the air to see if there were too many fumes coming from that oil field into my yard growing up as a little kid. I would love for us to solve problems like that, so that's the type of thing that we'll be able to solve. Those are the types of innovations that will be able to take place once we have these sensors in place, so I'm going to sit down on that one and let someone else take over. >> I'm really glad you said the second one, because I was thinking, "What I'm about to say is totally going to trivialize Joe's pain and I don't want to do that." But cancer is my answer, because there's so much data in health and all these patterns are there waiting to be recognized. There's so many things we don't know about cancer and so many indicators that we could capture if we just were able to unmask the data and take a look, but I knew a brilliant company that was using artificial intelligence specifically around image processing to look at CAT scans and figure out what the leading indicators might be in a cancerous scenario. And they pivoted to some way more trivial problem, which is still a problem, and not to trivialize parking and whatnot, but it's not cancer. And they pivoted away from this amazing opportunity because of the privacy and the issues with HIPAA around health data. And I understand there's a ton of concern with it getting into the wrong hands and hacking and all of this stuff. I get that, but the opportunity in my mind far outweighs the risk, and the fact that they had to change their business model and change their company essentially broke my heart, because they were really onto something. >> Yeah, that's a shame, and it's funny you mention that. Intel has an effort that we're calling the cancer cloud, and what we're trying to do is provide some infrastructure to help with that problem, and the way cancer treatments work today is, if you go to a university hospital, let's say here in Texas, how you interpret that scan and how you respond and apply treatment, that knowledge is basically just kept within that hospital and within that staff.
And so on the other side of the country, somebody could go in and get a scan, and maybe that scan is brand new to that facility and so they don't know how to treat it, but if you had an opportunity with machine learning to be able to compare scans from people, not only just in this country but around the world, and understand globally all of the hundreds of different treatment paths that were applied to that particular kind of cancer, think how many lives could be saved, because then you're sharing knowledge about what courses of treatment worked. But it's one of those things, like you say, sometimes it's the regulatory environment or it's other factors that hold us back from applying this technology to do some really good things, so it's a great example. Okay, any other questions in the audience? >> I have one. >> Good, Emily. >> So this goes off of the HIPAA question, which is, and you were talking about just dynamically displaying ads earlier, what does privacy look like in a fully autonomous world? Anybody can answer that one. Are we still private citizens? What does it look like? >> How about from a supply chain standpoint? You can learn a lot about somebody in terms of the products that they buy, and I think for all of us, we sort of know maybe somebody's tracking what we're buying, but it's still creepy when we think about how people could potentially use that against us. So, how do you from a supply chain standpoint approach that problem? >> Yeah, and it's something that comes up in my life almost every day, because one of the things we'd like to do is to understand consumer behavior. How often am I buying? What kinds of products am I buying? What am I returning? And so for that you need transactional data. You really get to understand the individual. That then starts to get into this area of privacy. Do you know too much about me? And so a lot of times what we do is the data is clearly anonymized, so all we know is customer A has this tendency, customer B has this tendency. And that then helps the retailers offer the right products to these customers, but to your point, there are those privacy concerns, and I think issues around governance, issues around ethics, issues around privacy, these will continue to be ironed out. I don't think there's a solid answer for any of these just yet. >> And it's largely a reflection of society. How comfortable are we with how much privacy? Right now I believe we put the individual in control of as much information as possible that they are able to release or not. And so a lot of what you said, everyone's anonymizing everything at the moment, but that may change as society's values change slightly, and we'll be able to adapt to what's necessary. >> Why don't we try to stump the panel? Anyone have any ideas on things in your life you'd like to be solved with AI for good? Any suggestions out there that we could then hear from our data scientists and technologists and folks here? Any ideas? No? Alright, good. Alright, well, thank you everyone. Really appreciate your time. Thank you for joining Intel here at the AI Lounge at Autonomous World. We hope you've enjoyed the panel and we wish you a great rest of your event here at South by Southwest. (audience clapping) (bright music)

Published Date : Mar 12 2017


Michelle Bacharach, FINDMINE - SXSW 2017 - #IntelAI - #theCUBE


 

>> Narrator: Live from Austin, Texas, it's the Cube covering South by Southwest 2017. Brought to you by Intel. Now here's John Furrier. >> Welcome back everyone. We're live here at the AI Lounge with Intel, #intelai. This is the Cube, I'm John Furrier. Our next guest is Michelle Bacharach, who's the co-founder and CEO of FINDMINE. retail start up out of New York City, entrepreneur. Welcome to the Cube, thanks for joining us. >> Thank you, thanks for having me. >> So we're at Intel, Intel AI. Pretty packed here, isn't it? >> Yeah. >> Pretty crowded. >> I think it's the cover from the rain. >> Yeah, it's a little rainy here, yesterday was hot. You got a panel here later in the afternoon about AI and retail and convergence, but I want to ask you as an entrepreneur, what got you into starting this company? Was it an itch you were scratching, was it a vision, was it something that you felt compelled to do? Give us the story of FINDMINE. >> Yeah, it's actually a little embarrassing. It kind of sounds like the most selfish reason to start a business. It's because I had a problem I wanted to solve, but I think that's the best way to start a company, honestly, because it means you're going to be a passionate about it, you're going to be a user of your own, whatever you build, and for me, that challenge was I would buy, you know, like my silk bomber here with this big flower on it, and I'd be like yes, I love this, this is great, and I would get it home, but I wouldn't have tried it on with, you know, the pants and the shoes that go with it, so when I'd get it home, I'd be like uh oh, now I have to figure out how to put an outfit together around this to wear it and feel confident. I think a lot of women, especially, have this challenge where we feel pressure to be stylish, but not everyone has that kind of style gene where you can just see something like this and be like oh, I know five ways to wear that. So I struggled with that. I struggled with that when I would buy furniture, even when I would buy things like electronics, like I was really looking into buying a drone at one point. I was like oh, that sounds cool, I could fly a drone, I want to learn that. I found the drone model that I thought I wanted, but then it comes with all this stuff, right, all of these peripherals. They don't all plug in to the drone, so the research involved to figure out how to use one product in combination with another product was way too much work, and I figured someone should be automating that and help a consumer like me answer the question, how do I use this for any product that I might pick up on the shelf. >> And so that was the catalyst. Where is it now today, what's the status of FINDMINE? >> Uh yeah, that's a good question. >> John: Solving all the problems, did it? >> No, not yet, close. No, but, so you know, that was like seven years ago that I started noticing this problem in my personal life, then I researched and found that tons of other people have this problem, customers will buy 170% more if you show them how to use the product that they're buying, but I didn't have the tools to solve it. I have a product management background, but I wasn't a computer scientist, a data scientist to actually execute it, and so I'd met a friend, a friend of mine's husband is a computer scientist, and I sort of like, you know, suckered him in with like this one little project, and then he was like wow, this is really interesting. He cares nothing about fashion, by the way. 
Like he wears his Columbia sweatshirt and jeans like every single day, so he doesn't really feel the problem the way I do, but what he saw was this opportunity to use artificial intelligence and machine learning and technology to solve this really interesting problem of like, can we make a machine replicate what a human does, which is like figuring out what's stylish, and then that's what hooked him in and he thought the problem and the application of the technology was so cool. So that was, you know, in 2014 we started working on this. Since then, we've, you know, launched a product, we have customers on board. We work with fashion brands and retailers. We produce revenue, we raise money, we have a team now, we have a real office. We're not working out of our apartments anymore, so it's going well. >> So now you're in the middle of this AI world and if you think about the data your problem that you were originally solving actually applies to a lot of things, whether it's learning, healthcare, so it's kind of like the data drives more opportunity to collective intelligence. Is that kind of where this is going? Do you see that trend where it's the data and the algorithms, or the algorithms and the data? >> Yeah, I think that access to the data is the big factor, so in retail there's tons of data, right? Transaction data, product data, user data, all that kind of stuff, and a lot of it is very easily accessible. It's not all like private information, customer information, that you have to guard really closely. Obviously there's some of that because you're doing transactions, so it's credit card information, there's location data, you know gender, all that kind of stuff, but the product data is publicly available. So we didn't even have to have a customer live before we started doing cool stuff with machine learning, with large data sets because we would just go find products that were live on the internet and use that data. I think in different industries like healthcare it's a lot harder to come by the data and there's a lot more concerns around it. >> Michelle, what are some of the learnings that you've had, now if you look back from where you from where you were. What are some of the key learnings with the venture you're building, around what was surprising to you, what popped out as value? Was it the machine learning? I mean, what were some of the learnings you can share? >> I think in general, my best piece of advice for start ups is just don't die. And I say that a lot and people laugh, but it's so true. I've seen so many friends with startups that kind of had a moment where they were like okay, it's all falling apart, and they just, they said okay that's it. But if they had stayed around for like five more days, 10 more days, 50 more days, how their fortunes could have changed is incredible, and we've gone through that, I've seen other people go through that, so that's number one. And the number two is, like don't wait. Just do something. So I think for a long time we were sort of like waiting to get like the right data sets in the right order and like getting it all perfect first, and that's not the right way to approach it. Just go. >> So get a horse on the track and at least run the race, get something going. >> Michelle: Yeah, exactly. >> And don't run out of cash. As I always say, you can't go out of business when there's money in the bank. >> Michelle: Yup. >> So, okay, so now on the tech side. 
What has surprised you on some of the amazing things that are now starting to come into visibility for you, and what do you see as your vision? So what's kind of obvious and that you're going after, and what are some of the things that you see in your vision that others might not see? >> So what's really, what we're doing right now, and every startup needs focus, you can't do everything at once, but you need to have this bigger vision to make it, you know a billion dollar potential kind of exit company because that's what people want to invest in if you want to take venture capital, and not every startup needs to. You can self finance a business. But for me, this rapid growth was really important, and so I think what was really important was that we kind of like built something that could scale long term, so this broad vision of like every single product that you could pick up off the shelf as a consumer, you know exactly how to use it. For me, there is like a personal mission in that because I hate waste. I went to Berkeley, like we talked about before, so I have a little bit of that like hippie mentality, and I was buying all this stuff like in fast fashion, and it just sat in my closet and then I'd throw it out or I would never use it, and that made me really bummed. And the reason I was throwing it out was because I didn't know how to use it, and if I had just gotten that piece of information up front, then I probably would have been able to integrate it into my life, and I wouldn't have thrown it out. So doing it across all industries in retail. >> So really efficiency too is key on this? >> Yeah. >> You could actually accelerate that. >> Absolutely. >> So on the fashion side, is that where the focus is now on the retail side, or only still? >> Yes, so we're B2B, we sell to fashion retailers and brands. They use our technology and then they figure out where they want to get it into the consumer's hands, so it might be on the e-commerce page, it might be in the store, it might be in the associate's phone, so that you as a shopper don't even know that like a customer, or that the associate is like kind of cheating, right? They're looking at FINDMINE to find out what outfits to recommend. They might just be having an interaction with you like a human does, but they're using an assistive tool to get that efficiency that you mentioned before. >> So you have a panel coming up this afternoon. Without giving away all the content, what's the topic that you want to talk about? >> So the panel is artificial intelligence for good, and ours specifically is autonomous world, so it's about the automation that's kind of all around us and becoming more ubiquitous, and how artificial intelligence is making that possible. >> So I always get, I'm so amazed by autonomous vehicles because I think, you know, it's so obvious, mental models, we all have cars. >> Michelle: Yeah. >> Or you'd have been no transportation, but it's pretty radical when you think about the impact of autonomous vehicles, and this is a pretty amazing trend. I mean, smart cities is also mind blowing as well. You think about what's going to happen for the digital citizen. >> Yeah. >> Like what are those services? So there's some amazing potential but also work that has to get done. What's your thoughts on those two trends and the impacts, you know, 10, 20 years down? Will there be cars on the road in 25 years? 
>> Yeah, so actually on the panel coming up it's going to be myself, kind of from the retail perspective, there's going to be someone from the smart cities perspective, and someone from the autonomous vehicles perspective, and I'm kind of like what am I doing here? Like those trends are so much bigger and more like amazing and life changing than what we're doing, but I actually think that retail is so ubiquitous and like we're all, we all shop all the time, whether it's through Amazon, whether it's a physical store, and so it's a little bit more accessible, almost, whereas like the idea of having like a driverless car is harder for you to picture. >> Yeah. >> And one of the things that I'll be talking about probably a little bit later is how like you don't actually realize how much of this is going on around you all the time, whereas seeing a car on the street without a driver in the left hand side like drivers seat is like a shock, right? We're so not used to. >> John: Yeah, it's mind blowing. >> Used to that. >> Be it worry, let me ask the retail question because one of the things you're close to as a retail is that you're seeing a lot of the brick and mortar sites becoming destination oriented, not so much day to day shopping. E-commerce is obviously exploding, it's becoming what it is, and there's some tie in between digital and analog now, and a converging. What's the big takeaway? What's the state of the art right now in retail? Is that the vibe right now that it's a combination of destination based or is there something else going on? Can you share some color on what's happening in the retail world? >> Yeah, so everyone talks about like oh my god, like no one's going to shop in stores anymore. Well we're a long way away from that. Over 90% of all commerce is still done in a physical store. It's just that all the growth is in the e-commerce and that's why everyone talks about it is as like this huge disruption because it is, like all of the growth is in e-commerce, which is incredible, so at some point maybe it will completely take it over, but I personally don't feel like that's the case because we're humans, we crave social interaction, and part of shopping is that social interaction, that consultative nature of selling that I just don't, I hope won't be replaced completely by a screen. >> So you're having fun here at South by Southwest? A little bit of rain today, you got drenched as you were walking over here. What's this show like been for you? >> I got here this morning, came straight from the airport to one event and then went to another event with my suitcase like trying to get around, so the rain definitely put a damper on that, but I'm hoping it clears out. >> What do you think about the Intel AI booth here, AI lounge. What do you think, pretty impressive? >> Yeah, you actually can check out FINDMINE in that corner over there. We're on that wall, and it's a live, it's a live website. It's actually showing John Varvatos, which is one of our customers. They're a high end fashion brand for mens and we show the complete outfits, so you can go actually like shop right there, FINDMINE would get credit for that, and Intel has been an awesome partner to us and just really innovative, and I love Rainey Street. I think it's so cool, like these are all houses converted into bars converted into an Intel experience. It's very meta. >> Yeah, very meta, it's a meta of meta. Michelle Bacharach, thanks so much for spending this time in the Cube. 
We're here inside the Cube inside the AI lounge here with the Cube. I'm John Furrier. We'll be right back with more coverage from South by Southwest. (upbeat instrumental music)

Published Date : Mar 11 2017


Ben Parr | SXSW 2017


 

>> Narrator: Live from Austin, Texas, it's The Cube covering South by Southwest 2017, brought to you by Intel. Now, here's John Furrier. >> Hey, welcome everyone back for day two of live coverage of South by Southwest. This is the cube, our flagship program from Silicon Angle. We go out to the events and extract the (mumbles). We're at the Intel AI Lounge, people are rolling in, it's an amazing vibe here, South by Southwest. The themes are AI, virtual reality, augmented reality, technology. They got great booths here, free beers, free drinks, and of course great sessions and great conversations here with the Cube. My first guest of the day here is Ben Parr, a friend of the Cube. He's been an entrepreneur, he's been a social media maven, he's been a journalist, all around great guy. Ben, thanks for joining us today. >> Thank you for having me again. >> So you're a veteran with South by Southwest, you know the social scene, you've seen the evolution from Web 2.0 all the way to today, had Scobel on yesterday, Brian Fanzo, really the vibe is all about that next level, of social to connecting and you got a startup you're working on that you founded, co-founded called AI? >> Ben: Octane AI. >> Octane AI, that's in the heart of this new social fabric that's developing. Where AI is starting to do stuff, keep learning, analytics but, ultimately, it's just a connection. Talk about your company. What is Octane AI? Tell us a little bit about the company. >> So Octane AI is a platform that lets you build an audience on Facebook Messenger and then through a bot. And so, what we do is allow you to create a presence on Messenger because if I told you there was a social app that had a billion users every month, bigger than Snapchat plus Twitter plus Instagram combined you'd want to figure out a strategy for how to engage with those people right? And that social app is Facebook Messenger. And yet no one ever thinks, oh could I build an audience on a messaging app? Could I build an audience on Messenger or WeChat or any of the others. But you can through a bot. And you can not just build an audience but you can create really engaging content through conversation. So what we've done is, we've made it really easy to make a bot on messenger but more importantly, a real reason for people to, actually, come to your bot and engage with it and make it really easy to create content for it. In the same way you create content for a blog or create content for YouTube Channel. Maroon 5, Aerosmith, KISS, Lindsay Lohan, 30 seconds to MARS, Jason Derulo and a whole bunch more use us to build an audience and engage their fans on Messenger. >> So let me get your thoughts on a couple of trends around this. Cause this is really kind of, to me, a key part that chat bots illustrate the big trends that are going on. Chat bots were the hype. People were talking about, oh chat bots. It's a good mental model for people to see AI but it also has been, kind of, I won't say a pest, if you will, for users. It's been like a notification. A notification of the economy we're living in. Now you're taking it to the next level. This is what we're seeing. The deep learnings and the analytics around turning notifications which can be noisy after a while, into real content and connections. >> Into something useful, absolutely. Like look, the last year of bots. The Facebook platform is not even a year old. We've been in that fart apps stage of bots. Remember the first year of mobile apps? 
You had the fart app and that made $50,000 a day and that was annoying as hell. We're at that stage now, the experimentation stage. And we've seen different companies going in different, really cool directions. Our direction is, how do you create compelling content so you're not spamming people but you have content that you can share, not just in your bot but as a link on your social media to your followers, to your fans, on Twitter, everywhere else and have a scalable conversation about whatever you want. Maroon 5 has conversations with their audience about their upcoming tours or they even released an exclusive preview of their new song, Cold, through our bots. You could do almost anything with our bots or with any bot. We're just learning right now, as an industry, what are the best practices. >> So where do bots go for the next level? Because you and I have known each other for almost over 10 years, we've seen the whole movement and now we're living in a fake news era. But social media is evolving where content now is super important that glues people together, communities together. In a way, you're taking AI or bots, if you will. Which is a first, I mean, .5 version of where AI is going. Where content, now, is being blended into notifications. How important is content in community? >> Content in community are essential to any product. And I feel like when you hear the word bot, you don't think community and that you could build a community with it because it's a bot, it's supposed to be automated. But you, actually, can if you do it in the right way and it can be a very, very powerful experience. We're building features that allow you to build more community in your bot and have people who are talking with your bot communicate with each other. There's a lot of that. What I feel like is, we're at the zero point one or zero point two of the long scale of AI. What we need to do right now is showcase all the use cases that really work for AI, bots, machine learning. Over time, we will be adding more other great technologies from Intel and others that will make all these technologies and everything we do better, more social and most of all, more personalized. I think that's one of the big benefits of AI. >> Do you see bot technology or what bots can turn into being embedded into things like autonomous vehicles, AR, is there a stack developing, if you will, around bots? What you're talking about is a progression of bots. What's your vision on where this goes down the road? >> I see a bunch of companies, now, building the technological stack for AI. I see a bunch of companies building the consumer interface, bots is one of those consumer interfaces. Not just chat bots but voice bots. And then I see another layer that's more enterprise that's helping make more efficient things like recruiting or all sorts of automation or driving. That are being built as well. But you need each of those stacks to work really well to make this all work. >> So are there bots here at South by Southwest? Is there a bot explosion, is there bots that tell you where the best parties are? What's the scene here at Southby? Where are the bots and if there were bots, what would they be doing to help people figure out what to do? >> The Southby bot is, actually, not a bad bot. They launched their bot just before South by Southwest. It has a good party recommendations and things. But it the standard bot. I feel like what we're seeing is the best use, there's a lot of good bot people. 
What I'm seeing right now is that people are still flushing out the best use cases for their bots. There's no bot yet that can predict all the parties you want to go to. We got to have our expectations set. That will happen but we're still a few years away from really deep AI bots. But there are clearly ones where you can communicate faster with your friends. There's clearly ones that help you connect with your favorite artist. There's clearly ones that help you build an audience and communicate at scale. And I feel like the next step is the usefulness. >> Talk about the user interface. Robert Scoble and I were talking yesterday, we have some guests coming on today that have user experience backgrounds. With AI, with virtual reality, with bots, with deep learning, all this collective intelligence going on, what's your vision of the user interface as it changes, as people's expectations change? What are some of those things that you might see developing pretty quickly as deep learning, analytics, more data stats come online? What is the user interface? Cause bots will intersect with that as an assistant or a value add for the user. What's your vision on that? >> I'll tell you what I see in the near term and then I'll tell you a really crazy idea of how I see the long term. In the near term, I think what you're going to see is bots that have become more predictive. That, based on your conversations, are more personalized and maybe don't necessarily need as much input from you to be really intelligent. And so voice, text, standard interfaces that we're used to. I think the bigger, longer run is neurological. It's the ability to interface without having to speak. It's AI as a companion to help us in everything we do. I feel like, in 30 years, we won't even, it's, kind of like, do you remember the world when it had no internet? It's hard, it feels so much different. There will be a point in about 20 years where we will not understand what the world was before AI. Before AI assistants were assisting us mentally, automatically and through every interface. And so good AIs, in the long run, don't just run on one bot or one thing, they follow you wherever you go. Right now it might be on your phone. When you get home, it may be in your home, it may be in your car, but it should be the same set of AIs that you use daily. >> Dr. Naveen Rao, yesterday, called AI the bulldozer for data. What bulldozers were in the real world, AI's going to do that for data. Cause you want to surface more data and make things more usable for users. 
I think those are the obvious, coolest ones. I'm curious to see which ones are going to be the big winners. >> Okay, so I want to ask you a personal question. So you were doing some venture investing around AI and some other things. What caused you to put that pause button on that mission to start the chat bot AI company? >> So I was an investor for a couple of years. I invested in ubean, the wireless electricity company and Shots with Justin Bieber which is always fun. And I love investing and I love working with companies. But I got into Silicone Valley and I got into startups because I wanted to build companies. I wanted to build ideas. This happened, in part, because of my co-founders. My co-founder Matt, who is the first head of product at Ustream and twice into the Forbes 30 under 30. One of the king makers of the bot industry. The opportunity to be a part of building the future of AI was irresistible to me. I needed to be a part of that. >> Okay, can you tell any stories about Justin Bieber for us, while we're here inside the Cube? (laughs) >> I wonder how many of those I can, actually, tell? Okay, so look. Justin Bieber is an investor in a company I'm an investor in called Shots. Which is now a super studio that represents everyone from Lele Pons to Mike Tyson on digital online and they're doing really, really well. One of Justin's best friends is the founder, John Shahidi. And so it's just really random. Sitting with John, who I invested in and just getting random FaceTime's. Be like, oh it's Justin Bieber, say hi to Justin. As if it was nothing. As if it was a normal, it's a normal day in his life. >> Could you just have him retweet one of my Tweets. He's got like a zillion followers. What's his follower count at now? >> You don't want that. He's done that to me before. When Justin retweets you or even John retweets you, thousands of not tens of thousands of Justin Bieber fans, bots and not bots, start messaging you, asking you to follow them, talking to you all the time. I still get the tweets all the time from all the Justin fans. >> Okay don't tweet me then. I'm nice and happy with 21,000 followers. Alright, so next level for you in terms of this venture. Obviously, they got some rock stars in there. What's the next step for you guys right now? Give us a little inside baseball in the venture status where you guys are at. What's the next step? >> We launched the company publicly in November, we started in May. We raised 1.6 million from general catalyst, from Sherpa Ventures, a couple of others. When we launched our new feature, Convos, which allows you to create shareable bots, shareable conversations with the way you share blog posts. And that came out with all those launch partners I mentioned before like Maroon 5. We're working on perfecting the experience and, mostly, trying to make a really, really compelling experience with the user with bots because if we can't do that, then there's no use to doing anything. >> So you provide the octane for the explosive conversations? (laughs) >> Yes, there you go, thank you, thank you. And we make it really easy. So we're just trying to make it easier to do this. This is a product that your mom could use, that an artist could use, any social media team could use. Writing a convo is like writing a blog post on media. >> Are moms really getting the chat bot scene? I, honestly, get the Hollywood. I'm going to go back to Hollywood in a second but being a general, middle America kind of tech/genre, what are they like? 
Are they grokking the whole bot thing? What's the feedback from middle America tech? >> But think of it this way. There are a billion people on Messenger and it's a, really, part of the question, they all use Facebook Messenger. And so, they may be communicating with a bot without knowing it. Or they might want to communicate with their fans. It's not about the technology as much as this is like connecting with who you really care about. If I really care about a Maroon 5 or Rachel Ray, I can now have that option. And it doesn't really matter what the technology is as much as it is that personal connection, that experience is good. >> John: Is it one-one-one or group? Cause it sounds like it's town hall, perfect for a town hall situation. >> It's one-on-one, it's scale. So you could have a conversation with a bot while each of the audience members is having a conversation one-on-one. When you can choose different options and it could be a different conversation for each person. >> Alright, so I got to ask about the Hollywood scene. You mentioned Justin Bieber. I wanted to go down that because Hollywood really has adopted social media pretty heavily because they can go direct to the audience. We're seeing that. Obviously, with the election, Trump was on Twitter. He bypasses all the press but Hollywood has done very well with social. How are they using the bots? They are a tell sign of where it's going. Can you share some antidotal stories or data around how Maroon 5, Justin, these guys are leveraging this and what's some of the impact? >> Sure, so about a month 1/2, 2 months before Maroon 5 launched their new song, new single, Cold. They came to us and wanted to build a distribution. They wanted to reach their audience in a more direct personal way. And so we helped them make a bot. It didn't take long. We helped them write convos. And so what they did was they wrote convos about things like exclusive behind the scenes photos from their recent tour or their top moments of 2016 or things that their fans really care about. And they shared em. They got a URL just like you would get, a blog poster URL. They shared it out with their 39 million Facebook fans, they shared it with their Twitter followers, they shared it across their social media. And 10's of thousand's of people started talking with their bot each time they did this. About 24 hours before the bot, before their new single release, they exclusively released a 10 second clip of Cold through their bot. And when they did that, within 24 hours, the size of their bot doubled because it went viral within the Maroon 5 community. There's a share function in our convos and people shared the convo with their friends and with their friends friends and it kept on spreading. We saw this viral graph happen. And the next day when they released the single, 1000's of people bought the song because of the bot alone. And now the bot is a core of their social strategy. They share a convo every single week and it's not just them but now Lohan and a whole bunch of others are doing the same thing. >> John: Lindsay Lohan. >> Lindsay Lohan is one of our most popular bots. Her fans are really dedicated. >> And so you can almost see it's, almost connecting with CGI, looking at what CGI's doing in film making. You could almost have a CGI component built-in. So it's all this stuff coming together. >> Ben: Multimedia matters. >> So what do you think about the Intel booth here? The AI experience? 
They got some Kinetic photo experience, amazing non-profit activities in deep loading (mumbles), missing children, what do you think? >> This is some of the best use cases for AI which is, people think of AI as just like the direct consumer interface which is what we do but AI is an underlying layer to everything we do. And if it can help even 1% or 1,000% identify and find missing children or increase the efficiency of our technology stacks so that we save energy. Or we figure out new ways to save energy. This is where AI can really make an impact. It is just a fundamental layer of everything. In the same way the internet is just a fundamental layer of everything. So I've seen some very cool things here. >> Alright, Ben Parr, great guest, in venture capitalist now founder of a great company Octane AI. High octane, explosive conversations looking forward to adopting. We're going to, definitely, take advantage of the chat bot and maybe we can get some back stage passes to Maroon 5. (laughs) >> (laughs) There will be some fun times in the future, I know it. >> Alright Ben Parr. >> Ben: Justin Bieber. >> Justin Bieber inside the Cube right here and Ben Parr. Thanks for watching. It's the Intel AI Lounge. A lot of great stuff. A lot of great people here. Thanks for joining us. Our next guest will be up after this short break. (lively music)

Published Date : Mar 11 2017


Dr. Dawn Nafus | SXSW 2017


 

>> Announcer: Live from Austin, Texas it's the Cube. Covering South by Southwest 2017. Brought to you by Intel. Now here's John Furrier. Okay we're back live here at the South by Southwest Intel AI Lounge, this is The Cube's special coverage of South by Southwest with Intel, #IntelAI where amazing starts with Intel. Our next guest is Dr. Dawn Nafus who's with Intel and you are a senior research scientist. Welcome to The Cube. >> Thank you. >> So you've got a panel coming up and you also have a book AI For Everything. And looking at a democratization of AI we had a quote yesterday that, "AI is the bulldozer for data." What bulldozers were in the real world, AI will be that bulldozer for data, surfacing new experiences. >> Right. >> This is the subject of your book, kind of. What's your take on this and what's your premise? >> Right well the book actually takes a step way back, it's actually called Self Tracking, the panel is AI For Everyone. But the book is on self tracking. And it's really about actually getting some meaning out of data before we start talking about bulldozers. So right now we've got this situation where there's a lot of talk about AI's going to sort of solve all of our problems in health and there's a lot that can get accomplished, whoops. But the fact of the matter is is that people are still struggling with gees, like, "What does my Fitbit actually mean, right?" So there's this, there's a real big gap. And I think probably part of what the industry has to do is not just sort of build new great technologies which we've got to do but also start to fill that gap in sort of data education, data literacy, all that sort of stuff. >> So we're kind of in this first generation of AI data you mentioned wearable, Fitbits. >> Dawn: Yup. >> So people are now getting used to this, so that it sounds this integration into lifestyle becomes kind of a dynamic. >> Yeah. >> Why are people grappling >> John: with this, what's your research say about that? >> Well right now with wearables frankly we're in the classic trough of disillusionment. (laughs) You know for those of you listening I don't know if you have sort of wearables in drawers right now, right? But a lot of people do. And it turns out that folks tend to use it, you know maybe about three or four weeks and either they've learned something really interesting and helpful or they haven't. And so there's actually a lot of people who do really interesting stuff to kind of combine it with symptoms tracking, location, right other sorts of things to actually really reveal the sorts of triggers for medical issues that you can't find in a clinical setting. It's all about being out in the real world and figuring out what's going on with you. Right, so then when we start to think about adding more complexity into that, which is the thing that AI's good at, we've got this problem of there's only so many data sets that AI's any actually any good at handling. And so I think there's going to have to be a moment where sort of people themselves actually start to say, "Okay you know what? "This is how I define my problem. "This is what I'm going to choose to keep track of." And some of that's going to be on a sensor and some of it isn't. Right and sort of being really intervening a little bit more strongly in what this stuff's actually doing. >> You mentioned the Fitbit and you were seeing a lot of disruption in the areas, innovation and disruption, same thing good and bad potentially. 
But I'll see autonomous vehicles is pretty clear, and knows what Tesla's tracking with their hot trend. But you mentioned Fitbit, that's a healthcare kind of thing. AIs might seem to be a perfect fit into healthcare because there's always alarms going off and all this data flying around. Is that a low hanging fruit for AI? Healthcare? >> Well I don't know if there's any such thing as low hanging fruit (John laughs) in this space. (laughs) But certainly if you're talking about like actual human benefit, right? That absolutely comes the top of the list. And we can see that in both formal healthcare in clinical settings and sort of imaging for diagnosis. Again I think there's areas to be cautious about, right? You know making sure that there's also an appropriate human check and there's also mechanisms for transparency, right? So that doctors, when there is a discrepancy between what the doctor believes and what the machine says you can actually go back and figure out what's actually going on. The other thing I'm particularly excited about is, and this is why I'm so interested in democratization is that health is not just about, you know, what goes on in clinical care. There are right now environmental health groups who are looking at slew of air quality data that they don't know what to do with, right? And a certain amount of machine assistance to sort of figure out you know signatures of sort of point source polluters, for example, is a really great use of AI. It's not going to make anybody any money anytime soon, but that's the kind of society that we want to live in right? >> You are the social good angle for sure, but I'd like to get your thoughts 'cause you mentioned democratization and it's kind of a nuance depending upon what you're looking at. Democratization with news and media is what you saw with social media now you got healthcare. So how do you define democratization in your context and you're excited about.? Is that more of freedom of information and data is it getting around gatekeepers and siloed stacks? I mean how do you look at democratization? >> All of the above. (laughs) (John laughs) I'd say there are two real elements to that. The first is making sure that you know, people are going to use this for more than just business, have the ability to actually do it and have access to the right sorts of infrastructures to, whether it's the environmental health case or there are actually artists now who use natural language processing to create art work. And people ask them, "Why are you using deblurting?" I said, "Well there's a real access issue frankly." It's also on the side of if you're not the person who's going to be directly using data a kind of a sense of, you know... Democratization to me means being able to ask questions of how the stuff's actually behaving. So that means building in mechanisms for transparency, building in mechanisms to allow journalists to do the work that they do. >> Sharing potentially? >> I'm sorry? >> And sharing as well more data? >> Very, very good. Right absolutely, I mean frankly we still have a problem right now in the wearable base of people even getting access to their own data. There's a guy I work with named Hugo Campos who has an arterial defibrillator and he's still fighting to get access to the very data that's coming out of his heart. Right? (laughs) >> Is it on SSD, in the cloud? I mean where is it? >> It is in the cloud. It's going back to the manufacturer. And there are very robust conversations about where it should be. 
>> That's super sad. So this brings up the whole thing that we've been talking about yesterday when we had a mini segment on The Cube, is that there are all these new societal use cases that are just springing up that we've never seen before. Self-driving cars with transportation, healthcare access to data, all these things. What are some of the things that you see emerging on that, tools or approaches that could help either scientists or practitioners or citizens deal with this new critical problem solving that needs to apply technology? I was talking just last week at Stanford with folks that are looking at gender bias in algorithms. Right, uh-huh it's real. Something I would never have thought of that's an outlier. Like hey, what? Oh no, it's happened. But it's one of those things where, okay, let's put that on the table. There's all this new stuff coming on the table. Yeah, yeah absolutely. What do you see? So they're-- How do we solve that John: what approaches? Yeah there are a couple of mechanisms and I would encourage listeners and folks in the audience to have a look at a really great report that just came out from the Obama Administration and NYU School of Law. It's called AI Now and they actually propose a couple of pathways to sort of making sure we get this right. So you know a couple of things. You know one is frankly making sure that women and people of color are in the room when the stuff's getting built, right? That helps. You know as I said earlier you know making sure that you know things will go awry. Like it just will, we can't predict how these things are going to work, and catching it after the fact and building in mechanisms to be able to do that really matter. So there was a great effort by ProPublica to look at a system that was predicting criminal recidivism. And what they did was they said, "Look you know it is true that the thing has the same failure rate for both blacks and whites." But some hefty data journalism and data scraping and all the rest of it actually revealed that it was producing false positives for blacks and false negatives for whites. Meaning that black people were predicted to create more crime than white people right? So you know, we can catch that, right? And when we build in more systems of people who have the skills to do it, then we can build stuff that we can live with. This is exactly to your point of democratization I think that fascinates me that I get so excited about. It's almost intoxicating when you think about it technically and also societal that there's all these new things that are emerging and the community has to work together. Because it's one of those things where there's no, there may be a board of governors out there. I mean who is the board of governors for this stuff? It really has to be community driven. >> Yeah, yeah. >> And NYU's got one, any other examples of communities that are out there that people can participate in or? >> Yup, absolutely. So I think that you know, they're certainly collaborating on projects that you actually care about and sort of asking good questions about, is this appropriate for AI or not, right? Is a great place to start of reaching out to people who have those technical skills. There's also the Engineering Professional Association, which actually just came out a couple months ago with a set of guidelines for developers to be able to... The kinds of things you have to think about if you're going to build an ethical AI system. 
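[Editor's note: a minimal sketch, in Python with made-up numbers, of the per-group check described above. This is not the ProPublica data or the actual recidivism model; the group names, labels, and figures are hypothetical. It only illustrates the reasoning step: two groups can share the same overall failure rate while one group absorbs the false positives and the other the false negatives.]

# Illustration only: equal overall error rates can hide asymmetric error types.
def error_rates(actual, predicted):
    """Return overall error rate, false positive rate, false negative rate."""
    errors = sum(a != p for a, p in zip(actual, predicted))
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    negatives = sum(1 for a in actual if a == 0)
    positives = sum(1 for a in actual if a == 1)
    return errors / len(actual), fp / negatives, fn / positives

# Toy labels: 1 = reoffended / predicted high risk, 0 = did not / predicted low risk.
groups = {
    "group_a": ([0, 0, 0, 0, 0, 0, 1, 1, 1, 1],   # actual outcomes
                [1, 1, 1, 0, 0, 0, 1, 1, 1, 1]),  # predictions: 3 false positives
    "group_b": ([1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
                [0, 0, 0, 1, 1, 1, 0, 0, 0, 0]),  # predictions: 3 false negatives
}

for name, (actual, predicted) in groups.items():
    err, fpr, fnr = error_rates(actual, predicted)
    print(f"{name}: overall error {err:.0%}, "
          f"false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
# Both groups show a 30% overall error rate, but the errors fall on opposite
# sides -- the asymmetry the audit described above was able to surface.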
So they came out with some very high level principles. Operationalizing those principles is going to be a real tough job and we're all going to have to pitch in. And I'm certainly involved in that. But yeah, there are actually systems of governance that are cohering, but it's early days. It's a great way to get involved. So I got to ask you the personal question. In your efforts with the research and the book and all of your travels, what's some of the most amazing things that you've seen with AI that are out there that people may know about or may not know about that they should know about? Oh gosh. I'm going to reserve judgment, I don't know yet. I think we're too early on the curve to be able to talk about, you know, sort of the magic of it. What I can say is that there is real power when ordinary people who have no coding skills whatsoever and frankly don't even know what the heck machine learning is, get their heads around data that is collected about them personally. That opens up, you can teach five year olds statistical concepts that are learned in college with a wearable because the data applies to them. So they know how it's been collected. It's personal. Yeah they know what it is already. You don't have to tell them what an outlier effect is because they know, because they wear that outlier. You know what I mean. They're immersed in the data. Absolutely and I think that's where the real social change is going to come from. I love immersion as a great way to teach kids. But the data's key. So I got to ask you, with the big pillars of change going on, and at Mobile World Congress I saw you, Intel in particular, talking about autonomous vehicles heavily, smart cities, media entertainment and the smart home. I'm just trying to get a peg, a comparable, of how big this shift will be. These will be, I mean the '60s revolution when chips started coming out, the PC revolution and server revolution and now we're kind of in this new wave. How big is it? I mean in order of magnitude, is it super huge with all of the other shifts combined? Are we going to see radical >> I don't know. >> configuration changes? 
>> Yeah, yeah. I think it's an occasion where people have to think a lot more deliberately than they ever have about the sources of information that they want exposure to. The kinds of interaction, the mechanisms that actual do and don't matter. And thinking very clearly about what's noise and what's not is a fine thing to do. (laughs) (John laughs) so yeah, probably the filtering mechanisms has to get a bit stronger. I would say too there's a whole set of practices, there are ways that you can scrutinize new devices for, you know, where the data goes. And often, kind of the higher bar companies will give you access back, right? So if you can't get your data out again, I would start asking questions. >> All right final two questions for you. What's your experiences like so far at South by Southwest? >> Yup. >> And where is the world going to take you next in terms of your research and your focus? >> Well this is my second year at South by Southwest. It's hugely fun, I am so pleased to see just a rip roaring crowd here at the Intel facility which is just amazing. I think this is our first time as in Dell proper. I'm having a really good time. The Self Tracking book is in the book shelf over in the convention center if you're interested. And what's next is we are going to get real about how to make, how to make these ethical principles actually work at an engineering level. >> Computer science meets social science, happening right now. >> Absolutely. >> Intel powering amazing here at South by Southwest. I'm John Furrier you're watching The Cube. We've got a great set of people here on The Cube. Also great AI Lounge experience, great demos, great technologists all about AI for social change with Dr. Dawn Nafus with Intel. We'll be right back with more coverage after this short break. (upbeat digital beats)

Published Date : Mar 11 2017



Bryce Olson | SXSW 2017


 

>> Announcer: Live from Austin Texas, it's theCUBE, covering South by Southwest 2017, brought to you by Intel. Now, here's John Furrier. >> Welcome back everyone, we are live at the Intel AI Lounge, end of the day, day one at South by Southwest, I'm John Furrier, this is theCUBE, our flagship program, brought to the events to extract the signal from the noise. What a day it is here, it's a packed venue, the AI Lounge, with Intel, it's the hottest spot in South by Southwest, of course, where our theme is AI for social good, and our next guest is Bryce Olson with Intel, and your title officially is global marketing director, health and life sciences, but you are an amazing story, cancer survivor, but a fighter, you took it to technology to stop your cancer, and also a composer with your friends, called FACTS, Fighting Advanced Cancer Through Song, the stories. Welcome to theCUBE! >> Thank you, it's great to be here, this is awesome, this is an amazing environment that we're in today. But yeah, you're right, when you look at data, genomics data, which is looking at your DNA, and running that out and being able to understand what could potentially be fueling disease, that's the biggest of big data. And when I was working at Intel, I was in a non-healthcare oriented group, and then all of a sudden, I got hit with cancer, like very aggressive, advanced cancer. And I went through the whole standard of care, and I went through that one-size-fits-all, spin that wheel of treatments and hopefully you get something kind of thing, nothing-- >> General purpose, chemotherapy, whatever, blah blah blah. >> Nothing worked. And I came to the point where I was starting to come to terms with the fact that I may not see my daughter get through elementary school. So, cancer's starting to grow again, I go back to work, at this point, I only want to work in healthcare, because, why would I want to do anything else? I want to try to-- >> John: But you have terminal cancer at this point. >> I have terminal cancer at this point, but I'm not sick yet. You know, I went through all the chemo and all that crap, but I'm not sick yet. So, I asked to get into Intel's healthcare group, because I want to try to help healthcare providers make this digital transformation. They let me in, and what I found out kind of blew my mind. I learned about this new space of genomics and precision medicine. >> Well, it turns out, hold on for a second, you were telling me the story before, but you skipped a step, it turns out Intel has a lot of work going on, so you come into Intel, you're like, they open up the kimono-- >> Open up the kimono, and I learn about this new era called, just basically, genomics, so what is genomics? Genomics, essentially, is a way to look at disease differently. Why can't we go in and find out what's fueling disease deep in the DNA? Because every disease is diagnosable by DNA, we just have never had the technology and the science combining together to get to that answer before. Now we do. So I found out that Intel is working with all these genomic sequencing companies to increase the throughput so you can actually take something that cost $2 billion back in 2003, and took 10 years to do, get it down to $1,000 and do it in a day, right? So now, it democratizes sequencing, so we can look at what's fueling disease and get the data.
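Once a tumor is sequenced, a big part of the analysis Olson describes is narrowing an enormous pile of data down to the handful of variants that sit in a pathway of interest. The snippet below is a hypothetical sketch of just that final filtering step; the gene names, pathway membership, and variant records are invented placeholders, not his results or any real pipeline.

```python
# Hypothetical sketch: filter called variants down to a pathway of interest.
# Real tumor analysis involves alignment, variant calling, annotation and
# curated pathway databases; this only illustrates the filtering idea.

PATHWAY_GENES = {"GENE_A", "GENE_B", "GENE_C"}  # placeholder pathway membership

variants = [  # toy variant calls: (gene, protein_change, effect)
    ("GENE_A", "p.E545K", "missense"),
    ("GENE_X", "p.V600E", "missense"),
    ("GENE_B", "p.R130*", "stop_gained"),   # e.g. a lost tumor suppressor
]

def variants_in_pathway(calls, pathway):
    """Keep only variants whose gene is in the pathway gene set."""
    return [v for v in calls if v[0] in pathway]

for gene, change, effect in variants_in_pathway(variants, PATHWAY_GENES):
    print(f"{gene} {change} ({effect}) is in the pathway of interest")
```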
Then I learned about Intel working with all these major bioinformatics open source and commercial providers, the Broad Institute of MIT and Harvard, the largest genomic sequencing place on the planet, about how they take that data and then analyze it, get to what is really fueling disease. And then I learn about the cool things we're doing with customers, which I could talk about, like actual hospitals. >> Well, let's hold on for a second on that, your shirt says Sequence Me, but this is really key for the audience out there listening and watching, is that, literally 10 years ago the costs were astronomical, no one could afford it. Big grants, philanthropy-funded R&D centers, now, literally, you had your genome sequenced for thousands of dollars. >> Well, so, and this is what happened, right? I learned about all this stuff that Intel's up to, and I get kind of upset. I get kind of pissed off, right? Because nobody's giving this to me. Nobody's sequencing my cancer, right? So I go back to the cancer center that I was working with, this is January 2015, turns out they were getting ready, they were perfecting their lab diagnostic test on this, it was like a perfect storm, they were ready, I wanted it, they gave it to me, turns out my cancer grows along this particular mutated pathway that we had no idea about. >> So the data was, so in your DNA sequence step one, step two is you go in with massive compute power, which is available, and you go look at it, and it turns out there's a nuance to your cancer that's identifiable! >> Yeah, a needle in that haystack, right? The signal in the noise, if you will, right? So there's a specific molecular abnormality, and in my case, there was a pathway that was out of control, and the reason why I say it was out of control is, the pathway was mutated, but then there's this tumor suppressor gene that's supposed to stop cancer, he's gone! So it's like a freeway of traffic-- >> So he's checked out, and all of a sudden, this is going wild, but this is cancer, and everyone has their own version of this. >> Yes they do. >> So this is now a new opportunity. >> Yes! Now we understand what's fueling my unique cancer. We took data, we took technology and science, and we got to the point where we understand what's fueling my cancer. With that data, I find a clinical trial testing a new inhibitor of that pathway. >> So I just got to stop and just pause, because it's very emotional, and first of all, man, yours is an inspiration to me and everyone watching. I'm looking at a sign this year at the Intel AI booth, and it says, "Your amazing starts with Intel," and this is truly an amazing story. >> Yeah, thank you. >> It's really beyond amazing, it's life saving! >> And that's what happened to me. >> This is now at the beginning, so take me through, in your mind, where is the progress bar on this, in the AI evolution, or when I say AI, I mean like machine learning, compute, end-to-end technology innovation. It's available, obviously, but when is it going to be mainstream? >> Yeah, so, we're at a point right now where we can go in, if you have advanced cancer, we're at a point now where we can sequence that person's cancer and find out what's driving it, we can do that. But where it's going to get problematic is, look at my case. The mutated pathway hypersegmented my cancer, right, so prostate cancer, a common cancer, now became a rare cancer, because we hypersegmented it by DNA, and I went after a treatment that was targeted, so when my cancer starts to grow again, now I'm a rare cancer.
So how are we going to find people that are just like me out there in the world? >> So your point about rare being, there's no comparable data to look at for benchmarking, so that's the challenge. >> Yeah, no given hospital will ever have enough data in this new molecular, genomics-guided medicine world to solve my problem, because the doctors are going to want to look, and they're going to say, "Who out there looks just like Bryce from a DNA perspective, uniquely? What treatments were given to people like that, and what were the outcomes?" The only way we're going to solve that is, as all these centers and hospitals start amassing data, it has to work together, it has to collaborate in a way that preserves patient privacy, and also protects individual IP. >> Okay, so Bryce, let me ask you a question, if you could put a bumper sticker or a soundbite around what AI means to this evolution, innovation around fighting cancer and using data and technology, what is the impact of AI to this? >> So, where I'm kind of going with this analogy is that without artificial intelligence to sift through my data, and all the other millions of potential cancer patients to start getting DNA data, humans can't do it, it's impossible, humans will not have the mental ability to sift through reams and reams of DNA data that exists for every patient out there to look at treatments and outcomes and synthesize it, we can't do it. The only way someone like me will survive into the long term will be through artificial intelligence. Without it, I will extend my life, but I won't turn cancer into a manageable disease without AI. >> So the AI will extend your life. >> Because AI is going to solve the problems that humans can't. When you have the biggest of big data-- >> Love that soundbite, love that, say that again! AI solves the problems that-- >> AI is going to solve the problems that humans can't, they simply, humans don't have the capability to look at the entire genome, and all this other genomic, molecular, proteomic, all this other data, we can't make sense of it! >> Alright, so let me throw something out at you, 'cause I agree 100%, but also, there's a humanization factor, 'cause now algorithms are also biased by humans, so what's your thoughts, given your experience, on the role of the human race, actual human beings, that have a pulse, not robots or algorithms? >> Yeah, so let me give you a real practical example. So, the way that we fought my cancer was through a targeted therapy. Molecular abnormality, targeted drug. The other way that people are fighting cancer is through immunotherapy. Wake up the immune system to fight it. Guess what? Right now, there are 800 combination therapies going on with immunotherapy to try to stop people's cancer. How the heck are we going to know what is the right combination for each person out there? Unless we have like an algorithm marketplace where people are creating these, and taking in predictive biomarkers, prognostic biomarkers, looking at all the data, and then pushing a button to help an oncologist decide which of the 800 combos to use, we'll never get there. So-- >> That's awesome.
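Olson's question of "who out there looks just like me, from a DNA perspective" can be framed, in its very simplest form, as set overlap between mutation profiles. The sketch below is hypothetical and deliberately naive, assuming each patient is reduced to a set of mutated genes; real systems use far richer molecular features and privacy-preserving ways to share them across centers.

```python
# Hypothetical sketch of "who out there looks like this patient, by DNA":
# represent each patient as a set of mutated genes and rank by overlap.

def jaccard(a, b):
    """Jaccard similarity between two sets of mutated genes."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def most_similar(query, cohort, top_k=3):
    """Rank cohort patients by genomic overlap with the query patient."""
    scored = [(pid, jaccard(query, genes)) for pid, genes in cohort.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]

query_patient = {"GENE_A", "GENE_B", "TP53"}
cohort = {  # toy cohort; in practice this would span many centers
    "patient_1": {"GENE_A", "TP53"},
    "patient_2": {"GENE_Z"},
    "patient_3": {"GENE_A", "GENE_B", "TP53", "GENE_Q"},
}
print(most_similar(query_patient, cohort))
```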
So let me ask you a question, so for people watching that are younger, like my daughter, she's 16, my other daughter's a premed, she's a sophomore in college, they're like, school's like old, like, school's like linear, they get classes, but this younger generation are hungry for data, they're hungry, they want to, they're young, they're what people do, they disrupt, they're bomb throwers, they want to create value, and so their incentive to go after cancer, and the means are out there, cancer cells, we all have relatives who have died of cancer, it's a sucky situation. There's a motivated force out there of scientists and young people. How do they get involved? How would you look at, based on your experience, and your experience, obviously, you got these songs here, but on a more practical level, what discovery, what navigation can someone take in their life to just get involved, not a catalog, not the courseware. >> I think, so there's a number of different things that can happen, if you look at the precision medicine landscape, and you start with a patient, patients don't understand this. "Genomic what? Sequencing what?" They don't understand that there's a new way to fight cancer, so guess what's going to become a 20% per year growth rate job in the next 10 to 20 years? Genomics counselors. You don't have to be a doctor, but you have to be able to understand enough about biology-- >> And math. >> To be able to offload doctors, and have a discussion with patients to say, "Let me explain something to you. There's a way to understand your disease, it's in DNA, this is what it means," and then help guide them into new clinical trials and other therapies that are guided by that. Huge growth opportunity for kids. >> But also, it's compounded by the fact we just said earlier, where these become rare cases on paper, which also need to be aggregated into a database of some sort so you can understand the data, so there's also a data science angle here. >> Absolutely, and it's not just cancer, by the way, I mean, little kids in the NICU, pediatric ailments. Have you ever known anybody who's got a kid with a very rare neurodevelopmental disorder, and the parents are on a diagnostic odyssey for 10 years, they can't figure out what it is? So they go from specialist to specialist to specialist, $100,000 later, guess what, the answer's in the DNA. >> DNA sequencing, number one. >> DNA sequencing, number one, and then, once you start sequencing that, you got to make sense of all this data, so there's going to be tons of jobs, not only in biology, but in analytics, to take all this data and start finding-- >> Alright, we got a few minutes left, I want to get a plug in for your little album here, it's called FACTS, Fighting Advanced Cancer Through Song. >> So here's the story on that. So, when you go through something that could be terminal, it's really nice when you can have something productive to channel that energy. So for me, to be able to channel feelings of sadness and frustration, I started writing songs. Music was therapeutic for me. I took that, started collaborating with a bunch of musicians throughout Portland, including cancer survivors, and we said, why don't we use music as a way to reach people about a new message of how to fight cancer? So we created that, I have an organization that is raising awareness for a new way to fight cancer, and raising funds, to bring sequencing to more people. >> So the URL is factsmovement.com, factsmovement.com, check it out.
Okay, now, I'm so impressed with you, one, you are on a terminal track, and you go back to work. >> But I don't look like I'm terminal! >> You look great, you look great. Now, you're at Intel, Intel's got technology, you harness it, now you're on a mission, your passion, it's obvious, the songs, now, what's going on at Intel, 'cause now you're out doing the Intel thing, give us the Intel update. >> I can talk to you about this precision medicine, it's personalizing diagnostic and treatment plans, which I've already done, I could talk to you about other things that we're doing to help hospitals transform. Predictive clinical analytics, let's look at something like rapid response team events. Have you ever been in the hospital and heard the alarms go off? That's usually somebody having a heart attack unexpectedly. Data is out there, if you look at all the data about people that have had rapid response team events, we can create predictive signals to actually predict that an hour before it would happen! So predictive clinical analytics, and enabling hospitals to look at populations as a whole to treat them better in this new value-based care, is a technology-driven thing, so we're working on that as well. Yeah. >> Well Bryce, thanks for coming on theCUBE, we appreciate it, really inspirational, great to meet you in person, and I'm looking forward to following up with you when you get back to Portland, we'll get our gang in Palo Alto to get you on the horn, Skype in, and keep in touch, really inspirational, but more importantly, this is very relevant, and the technology's now surfacing to change, not only people's lives in the sense of saving them, but other great things. >> And I'm so proud to be able to work for a company that is using its brand and its technology to basically change people's lives, it's amazing. >> Bryce Olson, my hero here at South by Southwest, amazing story, really, really, you can choose to be a victim or you can choose to go after it, so excited to have met you, it's theCUBE, breaking it all down here at South by Southwest at Intel's AI Lounge, it's hopping, music tonight, music tomorrow night, theCUBE tomorrow, panels, AI changing the future powered by Intel, #IntelAI, I'm John Furrier, you're watching theCUBE, thanks for watching, we'll see you tomorrow.
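Olson's rapid-response example, predicting an event roughly an hour before the alarms go off, is usually built by training a model on historical vitals. The sketch below is a hypothetical, rule-based stand-in for that idea; the thresholds and weights are invented for illustration, and a production system would learn them from data rather than hard-code them.

```python
# Hypothetical sketch of an early-warning score over vital signs.
# Real rapid-response prediction is learned from historical event data;
# the thresholds and point values below are invented for illustration only.

def warning_score(heart_rate, resp_rate, systolic_bp, spo2):
    """Crude additive score: higher means the patient needs closer attention."""
    score = 0
    if heart_rate > 110 or heart_rate < 45:
        score += 2
    if resp_rate > 24:
        score += 2
    if systolic_bp < 95:
        score += 2
    if spo2 < 92:
        score += 3
    return score

reading = {"heart_rate": 118, "resp_rate": 26, "systolic_bp": 92, "spo2": 90}
score = warning_score(**reading)
if score >= 5:
    print(f"score={score}: alert the rapid response team early")
```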

Published Date : Mar 11 2017



Alison Yu, Cloudera - SXSW 2017 - #IntelAI - #theCUBE


 

(electronic music) >> Announcer: Live from Austin, Texas, it's The Cube. Covering South By Southwest 2017. Brought to you by Intel. Now, here's John Furrier. >> Hey, welcome back, everyone, we're here live in Austin, Texas, for South By Southwest Cube coverage at the Intel AI Lounge, #IntelAI if you're watching, put it out on Twitter. I'm John Furrier of SiliconANGLE for the Cube. Our next guest is Alison Yu, who's with Cloudera. And in the news today, although they won't comment on it. It's great to see you, social media manager at Cloudera. >> Yes, it's nice to see you as well. >> Great to see you. So, Cloudera has a strategic relationship with Intel. You guys have a strategic investment from Intel, and you guys partner up, so it's well-known in the industry. But what's going on here is interesting, AI for social good is our theme. >> Alison: Yes. >> Cloudera has always been a pay-it-forward company. And I've known the founders, Mike Olson and Amr Awadallah. >> Really all about the community and paying it forward. So Alison, talk about what you guys are working on. Because you're involved in a panel, but also Cloudera Cares. And you guys have teamed up with Thorn, doing some interesting things. >> Alison: Yeah (laughing). >> Take it away! >> Sure, thanks. Thanks for the great intro. So I'll give you a little bit of a brief introduction to Cloudera Cares. Cloudera Cares was founded roughly about three years ago. It was really an employee-driven and -led effort. I kind of stepped into the role and ended up being a little bit more of the leader just by the way it worked out. So we've really gone from, you know, just doing soup kitchens and everything else, to strategic partnerships, donating software, professional service hours, things along those lines. >> Which has been very exciting, to see our nonprofit partnerships grow in that way. So it really went from almost grass-roots efforts to an organized program now. And we started stepping up our strategic partnerships about a year and a half ago. We started with DataKind as our initial one. About two years ago, we initiated that. Then about a year ago, in September, we finalized our donation of an enterprise data hub to Thorn, which, if you're not aware of them, they're all about using technology and innovation to stop child trafficking. So last year, around September or so, we announced the partnership and we donated professional service hours. And then in October, we went with them to Grace Hopper, which is obviously the largest women in tech conference in North America. And we hosted a hackathon and we helped mentor women entering into the tech workforce, trying to come up with some really cool, innovative solutions for them to track and see what's going on with the dark web, so we had quite a few interesting ideas coming out of that.
So, I don't know if everyone's aware, but in the '80s and '90s, child exploitation had almost completely died. They had almost resolved the issue. With the introduction of technology and the Internet, it opened up a lot more ways for people to go ahead and exploit children, arrange things, in the dark web. So we're trying to figure out a way to use technology to combat a problem that technology kind of created as well, but not only solving it, but rescuing people. >> It's a classic security problem, the surface area has increased for this kind of thing. But big data, which is what you guys were founded on, in the cloud era that we live in. >> Alison: Yeah. >> Pun intended. (laughing) Using machine learning, now you start to have some scale involved. >> Yes, exactly, and that's what we're really hoping, so we're partnering with Intel and the National Center for Missing & Exploited Children. We're actually kicking off a virtual hackathon tomorrow, and our hope is we can figure out some different innovative ways that AI can be applied to scraping data and finding children. A lot of times we'll see there's not a lot of clues, but for example, if there can be a tool that can upload three or four different angles of a child's face when they go missing, maybe what happens is someone posts a picture on Instagram or Twitter that has a geo tag and this kid is in the background. That would be an amazing way of using AI and machine learning-- >> Yeah. >> Alison: To find a child, right. >> Well, I'll give you guys a plug for Cloudera. And I'll reference Dr. Naveen Rao, who's the GM of Intel's AI group and was on earlier. And he was talking about how there's a lot of storage available, not a lot of compute. Now, Cloudera, you guys have really pioneered the data lake, data hub concept where storage is critical. >> Yeah. >> Now, you've got this compute power and machine learning, that's kind of where it comes together. Did I get that right? >> Yeah, and I think it's great that with the partnership with Intel we're able to integrate our technology directly into the hardware, which makes it so much more efficient. You're able to compute massive amounts of data in a very short amount of time, and really come up with real results. And with this partnership, specifically with Thorn and NCMEC, we're seeing real impact for thousands of people just last year. In the 2016 impact report, Thorn said they identified over 6,000 trafficking victims, of which over 2,000 were children. Right, so that tool that they use is actually built on Cloudera. So, it's great seeing our technology put into place. >> Yeah, that's awesome. I was talking to an Intel person the other day, they have 72 cores now on a processor, on the high-end Xeons. Let's get down to some other things that you're working on. What are you doing here at the show? Do you have things that you're doing? You have a panel? >> Yeah, so at the show, at South by Southwest, we're kicking off a virtual hackathon tomorrow at our Austin offices. Everyone's welcome to come. I just did the liquor order, so yes, everyone please come. (laughing) >> You just came from Austin's office, you're just coming from there. >> Yeah, exactly. So we've-- >> Unlimited Red Bull, pizza, food. (laughing) >> Well, we'll be doing lots and lots tomorrow, but we're kicking that off, we have representatives from Thorn, NCMEC, Google, Intel, all on site to answer questions. That's kind of our kickoff of this month-long virtual hackathon.
You don't need to be in Austin to participate, but that is one of the things that we are kicking off. >> And then on Sunday, actually here at the Intel AI Lounge, we're doing a panel on AI for Good, and using artificial intelligence to solve problems. >> And we'll be broadcasting that live here on The Cube. So, folks, SiliconANGLE.tv will carry that. Alison, talk about the trend that, you weren't here when we were talking about how there's now a new counterculture developing in a good way around community and social change. How real is the trend that you're starting to see these hackathons evolve from what used to be recruiting sessions to people just jamming together to meet each other? Now, you're starting to see the next level of formation where people are organizing collectively-- >> Yeah. >> To impact real issues. >> Yeah. >> Is this a real trend or where is that trend, can you speak to that? >> Sure, so from what I've seen from the hackathons, what we've been seeing before was it's very company-specific. Only one company wanted to do it, and they would kind of silo themselves, right? Now, we're kind of seeing this coming together of companies that are generally competitors, but they see a great social cause and they decide that they want to band together, regardless of their differences in technology, product, et cetera, for a common good. And, so. >> Like a Thorn. >> For Thorn, you'll see a lot of competitors, so you'll see Facebook and Twitter or Google and Amazon, right? >> John: Yeah. >> And we'll see all these different competitors come together, lend their workforce to us, and have them code for one great project. >> So, you see it as a real trend. >> I do see it as a trend. I saw Thorn last year did a great one with Facebook, on-site at Facebook. This year as we started to introduce this hackathon, we decided that we wanted to do a hackathon series versus just a one-off hackathon. So we're seeing people being able to share code, contribute, work on top of other code, right, and it's very much a sharing community, so we're very excited for that. >> All right, so I got to ask you, what's the culture like at Cloudera these days, as you guys prepare to go public? What's the vibe internally of the company, obviously Mike Olson, the founder, is still around, Amr's around. You guys have been growing really fast. Got your new space. What's the vibe like in Cloudera now? >> Honestly, the culture at Cloudera hasn't really changed. So, when I joined three years ago we were much smaller than we are now. But I think one thing that we're really excited about is everyone's still so collaborative, and everyone makes sure to help one another out. So, I think our common goal is really more along the lines of we're one team, and let's put out the best product we can. >> Awesome. So, what does South by Southwest mean to you this year? If you had to kind of zoom out and say, okay. What's the theme? We heard Robert Scoble earlier say it's a VR theme. We hear at Intel it's AI. So, there's a plethora of different touchpoints here. What do you see? >> Yeah, so I actually went to the opening keynote this morning, which was great. There was an introduction, and then I don't know if you realized, but Cory Booker was on as well, which was great. >> John: Yep. >> But I think a lot of what we had seen was they called out on stage that artificial intelligence is something that will be a trend for the next year.
And I think that's very exciting, that Intel really hit the nail on the head with the AI Lounge, right? >> Cory Booker, I'm a big fan. He's from my neighborhood, went to the same school I went to, as did my family. So in Northern Valley, Old Tappan. Cory, if you're watching, retweet us, hashtag #IntelAI. So AI's there. >> AI is definitely there. >> No doubt, it's on stage. >> Yes, but I think we're also seeing a very large community around how can we make our community better, versus let's try to go in these different silos, and just be hyper-aware of what's only in front of us, right? So, we're seeing a lot more from the community as well, just being interested in things that are not immediately in front of us, the wider, either nation, global, et cetera. So, I think that's very exciting, people are stepping out of just their own little bubbles, right? And looking and having more compassion for other people, and figuring out how they can give back. >> And, of course, open source at the center of all the innovation, as always. (laughing) >> I would like to think so, right? >> It is! I would testify. Machine learning is just a great example, how that's now going up into the cloud. We started to see that really being part of all the apps coming out, which is great because you guys are in the big data business. >> Alison: Yeah. >> Okay, Alison, thanks so much for taking the time. Real quick plug for your panel on Sunday here. >> Yeah. >> What are you going to talk about? >> So we're going to be talking a lot about AI for good. We're really going to be talking about the NCMEC, Thorn, Google, Intel, Cloudera partnership. How we've been able to do that, and a lot of what we're going to also concentrate on is how the everyday tech worker can really get involved and give back and contribute. I think there is generally a misconception of, if there's not a program at my company, how do I give back? >> John: Yeah. >> And I think Cloudera's a shining example of how a few employees can really enact a lot of change. We went from grassroots, just a few employees, to a global program pretty quickly, so. >> And it's organically grown, which is the formula for success versus some sort of structured company program (laughing). >> Exactly, so we've definitely gone from soup kitchens to strategic partnerships, and being able to donate our own time, our engineers' time, and obviously our software, so. >> Thanks for taking the time to come on our Cube. It's getting crowded in here. It's rocking, the house is rocking here at the Intel AI Lounge. If you're watching, check out the hashtag #IntelAI or South by Southwest. I'm John Furrier. I'll be back with more after this short break. (electronic music)
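Earlier in the conversation, Yu sketched the idea of taking a few reference angles of a missing child's face and checking them against publicly posted, geotagged photos. The snippet below is a hypothetical outline of only the outer loop of that idea: filter candidate posts to a search radius, then hand the survivors to a face matcher. The face_matches function is a placeholder rather than a real API, and a real system would still have to answer the consent, privacy, and accuracy questions this sketch ignores.

```python
# Hypothetical sketch of the outer loop described above: filter publicly
# posted, geotagged photos to a search radius, then pass candidates to a
# face matcher. `face_matches` is a placeholder for a real recognition step.

from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in kilometers (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def face_matches(photo, reference_faces):
    """Placeholder: a real system would compare face embeddings here."""
    return False

def candidate_photos(posts, last_seen, radius_km, reference_faces):
    """Yield posts near the last known location whose faces match a reference."""
    lat0, lon0 = last_seen
    for post in posts:
        if km_between(post["lat"], post["lon"], lat0, lon0) <= radius_km:
            if face_matches(post["image"], reference_faces):
                yield post
```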

Published Date : Mar 10 2017



Dr. Naveen Rao | SXSW 2017


 

(bright music) >> Narrator: Live from Austin, Texas. It's theCUBE, covering South by Southwest 2017. Brought to you by Intel. Now here's John Furrier. >> We're here live at South by Southwest in Austin, Texas. SiliconANGLE, theCUBE, our broadcast, we go out and extract the signal from the noise. I'm John Furrier, I'm here with Naveen Rao, the vice president and general manager of the artificial intelligence solutions group at Intel. Welcome to theCUBE. >> Thank you, yeah. >> So we're here, big crowd here at Intel, the Intel AI Lounge. Okay, so that's your wheelhouse. You're the general manager of AI solutions. >> Naveen: That's right. >> What is AI? (laughs) I mean-- >> AI has been redefined through time a few times. Today AI means generally applied machine learning. Basically ways to find useful structure in data to do something with. It's a tool, really, more than anything else. >> So obviously AI is a mental model, people can understand kind of what's going on with software. Machine learning and IoT, it's a hot area in the industry, but this really points to a future world where you're seeing software tackling new problems at scale. So cloud computing, what you guys are doing with the chips and software has now created a scale dynamic. Similar to Moore's, but Moore's Law is done for devices. You're starting to see software impact society. So what are some of those game-changing impacts that you see and that you're looking at at Intel? >> There are many different thought labors that many of us will characterize as drudgery. For instance, if I'm an insurance company, and I want to assess the risk of 10 million pages of text, I can't do that very easily. I have to have a team of analysts run through, write summaries. These are the kind of problems we can start to attack. So the way I always look at it is, what a bulldozer was to physical labor, AI is to data. To thought labor, we can really get through much more of it and use more data to make our decisions better. >> So what are the big game-changing things that are going on that people can relate to? Obviously, autonomous vehicles is one that we can all look at and say, "Wow, that's mind blowing." Smart cities is one where you say, "Oh my god, I'm a resident of a community. Do they have to re-change the roads? Who writes the software, is there a budget for that?" Smart home, you see Alexa with Amazon, you see Google with their home product. Voice bots, voice interfaces. So the user interface is certainly changing. How is that impacting some of the things that you guys are working on? >> Well, to the user interface changing, I think that has an entire dynamic on how people use tools. The easier something is, the more people use it, the more pervasive it becomes, and we start discovering these emergent dynamics. Like the iPod, for instance. Storing music in a digital form on small devices was around before the iPod. But when it made it easy to use, that sort of gave rise to the smartphone. So I think we're going to start seeing some really interesting dynamics like that. >> One of the things that I liked about this past week in San Francisco, Google had their big event, their cloud event, and they talked a lot about, and by the way, Intel was on stage with the new Xeon processor, up to 72 cores, amazing compute capabilities, but cloud computing does bring that scale together.
But you start thinking about how data science has moved into using data, and now you have a tsunami of data, whether it's taking an analog view of the world and now having multiple datasets available. If you can connect the dots, okay, a lot of data, now you have a lot of data plus a lot of datasets, and you have almost unlimited compute capability. That starts to draw in some of the picture a little bit. >> It does, but actually there's one thing missing from what you just described, which is that our ability to scale data storage and data collection has outpaced our ability to compute on it. Computing on it is typically some sort of quadratic function, something that grows faster than the amount of data itself. And our compute has really not caught up with that, and a lot of that has been more about focus. Computers were really built to automate streams of tasks, and this sort of idea of going highly parallel and distributed is something somewhat new. It's been around a lot in academic circles, but the real use case to drive it home and build technologies around it is relatively new. And so we're right now in the midst of transforming computer architecture into something that becomes a data inference machine, not just a way to automate compute tasks, but to actually do data inference and find useful inferences in data. >> And so machine learning is the hottest trend right now that kind of powers AI, but also there's some talk in the leader circles around learning machines. Data learning from engaged data, or whatever you want to call it, also brings out another question. How do you see that evolving, because do we need to have algorithms to police the algorithms? Who teaches the algorithms? So you bring in this human aspect of it. So how does the machine become a learning machine? Who teaches the machine, is it... (laughs) I mean, it's crazy. >> Let me answer that a little bit with a question. Do you have kids? >> Yes, four. >> Does anyone police you on raising your kids? >> (laughs) Kind of, a little bit, but not much. They complain a lot. >> I would argue that it's not so dissimilar. As a parent, your job is to expose them to the right kind of biases, or unbiased data, as much as possible; experiences are exactly that. I think this idea of shepherding data is extremely important. And we've seen it in solutions that Google has brought out. There are these little unexpected biases, and a lot of those come from just what we have in the data. And AI is no different than a regular intelligence in that way, it's presented with certain data, it learns from that data and its biases are formed that way. There's nothing inherent about the algorithm itself that causes that bias other than the data. >> So you're saying to me that exposing more data is actually probably a good thing? >> It is. Exposing different kinds of data, diverse data. To give you an example from the biological world, children who have never seen people of different races tend to notice, it's something new and unique and they'll tease it out. It's like, oh, that's something different. Whereas children who are raised with people of many diverse face types or whatever are perfectly okay seeing new diverse face types. So it's the same kind of thing in AI, right? It's going to hone in on the trends that are common, and things that are outliers it's going to call out as such. So having good, balanced datasets, the way we collect that data, the way we sift through it and actually present it to an AI, is extremely important.
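Rao's point that bias lives in the data you present, not in the algorithm itself, has a very concrete everyday counterpart in practice: checking how skewed your labels are before training and compensating for it. The sketch below is a generic illustration of that habit, not anything Intel-specific; the inverse-frequency weighting mirrors the common "balanced" heuristic.

```python
# A minimal sketch of the "bias lives in the data" point: check how skewed
# the labels are, and derive inverse-frequency weights so under-represented
# groups are not simply averaged away during training.

from collections import Counter

def class_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

labels = ["a"] * 90 + ["b"] * 10  # a skewed toy dataset
print(Counter(labels))            # Counter({'a': 90, 'b': 10})
print(class_weights(labels))      # {'a': ~0.56, 'b': 5.0}
```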
>> So one of the most exciting things that I like, obviously autonomous vehicles, I geek out on because, not that I'm a car head, gearhead or car buff, but it's just, you look at what it encapsulates technically. 5G overlay, essentially sensors all over the car, you have software powering it, you now have augmented reality, mixed reality coming into it, and you have an interface to consumers and their real world in a car. Some say it's a moving data center, some say it's also a human interface to the world as they move around in transportation. So it kind of brings out the AI question, and I want to ask you specifically. Intel talks about this a lot in their super demos. What actually is Intel doing with the compute, and what are you guys doing to make that accelerate faster and create a good, safe environment? Is it just more chips, is it software? Can you explain, take a minute to explain what Intel's doing specifically? >> Intel is uniquely positioned in this space, 'cause it's a great example of a full end-to-end problem. We have in-car compute, we have software, we have interfaces, we have actuators. That's maybe not Intel's suite. Then we have connectivity, and then we have cloud. Intel is every one of those things, and so we're extremely well positioned to drive this field forward. Now you ask what we are doing in terms of hardware and software, yes, it's all of it. This is a big focus area for Intel now. We see autonomous vehicles as being one of the major ways that people interact with the world, like locality between cars and interaction through social networks and these kinds of things. This is a big focus area, we are working on the in-car compute actively, we're going to lead that, 5G is a huge focus for Intel, as you might've seen at Mobile World Congress and other places. And then the data center. And so we own the data center today, and we're going to continue to do that with new technologies and actually enable these solutions, not just from a pure hardware primitives perspective, but from the software-hardware interaction in the full stack. >> So for those people who think of Intel as a chip company, obviously you guys abstract away complexities and put it into silicon, I obviously get that. Google Next this week, one thing I was really impressed by was the TensorFlow machine learning algorithms in open source, you guys are optimizing the Xeon processor to offload, not offload, but kind of take on... Is this kind of the paradigm that Intel looks at, that you guys will optimize for the highest performance in the chip where possible, and then let the software be more functional? Is that a guiding principle, is that a one-off? >> I would say that Intel is not just a chip company. We make chips, but we're a platform solutions company. So we sell primitives at various levels, and so, in certain cases, yes, we do optimize for software that's out there because that drives adoption of our solutions, of course. But in new areas, like the car for instance, we are driving the whole stack, it's not just the chip, it's the entire package end to end. And so with TensorFlow, definitely. Google is a very strong partner of ours, and we continue to team up on activities like that. >> We are talking with Naveen Rao, vice president and general manager of Intel's AI solutions. Breaking it down for us. This end-to-end thing is really interesting to me. So I want to just double-click on that a little bit. It requires a community to do that, right? So it's not just Intel, right?
Intel's always had a great rising-tide-floats-all-boats kind of concept over the life of the company, but now, more than ever, it's an API world, you see integration points between companies. This becomes an interesting part. Can you talk to that point about how you guys are enabling partners to work with you, and if people want to work with Intel, how do they work with you, from a developer to whoever? How do you guys view this community aspect? I mean, sure, you'd agree with that, right? >> Yeah, absolutely. Working with Intel can take on many different forms. We're very active in the open source community. The Intel Nervana AI solutions are completely open source. We're very happy to enable people in the open source, help them develop their solutions on our hardware, but also, the open source is there to form that community and actually give us feedback on what to build. The next piece is kind of one click down: if you're actually trying to build an end-to-end solution, like you're saying, you've got a camera. We're not building cameras. But these interfaces are pretty well defined. Generally what we'll do is, we like to select some partners that we think are high value add. And we work with them very closely, and we build stuff that our customers can rely on. Intel stands for quality. We're not going to put Intel branding on something unless it conforms to some really high standard. And so that's, I think, a big power here. It doesn't mean we're not going to enable the people that aren't our channel partners or whatever, they're going to have to be enabled through more of a standard set of interfaces, software or hardware. >> Naveen, I'll ask you, in the final couple minutes we have left, to kind of zoom out and look at the coolness of the industry right now. So you're exposed, your background, you've got your PhD, and topic-wise you're now heading up the AI solutions. You probably see a lot of stuff. Go down the what's-cool-to-you list, share with the audience some of the cool things that you can point to that we should pay attention to, or even things that are cool that we should be aware of that we might not be aware of. What are some of the coolest things that are out there that you could share? >> To share new things, we'll get to that in a second. One of my favorites is AlphaGo, I know this is like, maybe it's hackneyed. But as an engineering student in CS in the mid-90s, studying artificial intelligence back then, or what we called artificial intelligence, Go was just off the table. That was less than 20 years ago. At that time, it looked like such an insurmountable problem, the brain is doing something so special that we're just not going to figure it out in my lifetime, so to actually do it is incredible. So to me, that represents a lot. So that's a big one. Interesting things that you may not be aware of are other use cases of AI, like we see it in farming. This is something we take for granted. We go to the grocery store, we pick up our food and we're happy, but the reality is, that's a whole economy in and of itself, and scaling it as our population scales is an extremely difficult thing to do. And we're actually interacting with companies that are doing this at multiple levels. One is at the farming level itself, automating things, using AI to determine the state of different crops and actually taking action in the field automatically. That's huge, this is back-breaking work.
Humans don't necessarily-- >> And it's important too, because people are worried about the farming industry in general. >> Absolutely. And what I love about that use case of applying AI to farming techniques is that, by doing that, we actually get more consistency and you get better yields. And you're doing it without any additional chemicals, no genetic engineering, nothing like that, you're just applying the same principles we know better. And so I think that's where we see a lot of wonderful things happening. It's a solved problem, but just not at scale. How do I scale this problem up? I can't do that in many instances, like I talked about with the legal documents and trying to come up with a summary. You just can't scale it today. But with these techniques, we can. And so that's what I think is extremely exciting, any interaction there, where we start to see scale-- >> And new stuff, and new stuff? >> New stuff. Well, some of it I can't necessarily talk about. In the robot space, there's a lot happening there. I'm seeing a lot in the startup world right now. We have a convergence of the mechanical part of it becoming cheaper and easier to build with 3D printing, the Maker revolution, all these kinds of things happening, which our CEO is really big on. So that, combined with these techniques becoming mature, is going to come up with some really cool stuff. We're going to start seeing The Jetsons kind of thing. It's kind of neat to think about, really. I don't want to clean my room, hey robot, go clean my room. >> John: I'd love that. >> I'd love that too. Make me dinner, maybe like a gourmet dinner, that'd be really awesome. So we're actually getting to a point where there's a line of sight. We're not there yet, but I can see it in the next 10 years. >> So the fog is lifting. All right, final question, just more of a personal note. Obviously, you have a neuroscience background, you mentioned that Go is cool. But the humanization factor's coming in. And we mentioned ethics came up, we don't have time to talk about the ethics role, but as societal changes are happening, with these new impacts of technologies, there's real impact. Whether it's solving diseases and farming, or finding missing children, there's some serious stuff that's really being done. But the human aspects of converging with algorithms and software and scale. Your thoughts on that, how do you see that, and how would you, a lot of people are trying to really put this in a framework to try to advance sociology thinking, how do I bring sociology into computer science in a way that's relevant. What are some of your thoughts here? >> I think it's a very difficult thing to comment on, especially because there are these emergent dynamics. But I think what we'll see is, just as social networks have interfered in some ways and actually helped our interaction with each other, we're going to start seeing that more and more. We can have AIs that are filtering interactions for us. A positive of that is that we can actually understand more about what's going on around us in the world, and we're more tightly interconnected. You can sort of think of it as higher bandwidth communication between all of us. When we were in hunter-gatherer societies, we could only talk to so many people in a day. Now we can actually do more, and so we can gather more information.
Bad things are maybe that things become more impersonal, or people have to start doing weird things to stand out in other people's view. There are all these weird interactions-- >> It's kind of like Twitter. (laughs) >> A little bit like Twitter. You can say ridiculous things sometimes to get noticed. We're going to continue to see that, we're already starting to see that at this point. And so I think that's really where the social dynamic happens. It's just how it impacts our day-to-day communication. >> Talking with Naveen Rao, great conversation here inside the Intel AI Lounge. These are the kind of conversations that are going to be on more and more kitchen tables across the world, I'm John Furrier with theCUBE. Be right back with more after this short break. >> Thanks, John. (bright music)
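Earlier in the conversation, Furrier brought up optimizing Xeon for open-source frameworks like TensorFlow. Intel's actual work there happens in the underlying math libraries, so the snippet below is only a generic, user-side sketch of the CPU knobs a TensorFlow 2.x developer can turn themselves; it is not Intel's optimization, and the thread counts are arbitrary examples.

```python
# A generic sketch of CPU-side tuning a TensorFlow user can do themselves.
# These calls only size TensorFlow's own thread pools (TF 2.x API); the
# deeper Xeon-specific optimizations live in the underlying math libraries.

import tensorflow as tf

# Configure before running any ops: threads used inside a single op,
# and threads used to run independent ops in parallel.
tf.config.threading.set_intra_op_parallelism_threads(8)
tf.config.threading.set_inter_op_parallelism_threads(2)

x = tf.random.uniform((1024, 1024))
y = tf.linalg.matmul(x, x)  # runs on the configured CPU thread pools
print(y.shape)
```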

Published Date : Mar 10 2017


SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
NaveenePERSON

0.99+

JohnPERSON

0.99+

Naveene RaoPERSON

0.99+

John FurrierPERSON

0.99+

GoogleORGANIZATION

0.99+

San FranciscoLOCATION

0.99+

iPodCOMMERCIAL_ITEM

0.99+

IntelORGANIZATION

0.99+

Naveen RaoPERSON

0.99+

AmazonORGANIZATION

0.99+

Austin, TexasLOCATION

0.99+

10 million pagesQUANTITY

0.99+

mid-90sDATE

0.98+

OneQUANTITY

0.98+

oneQUANTITY

0.98+

SXSW 2017EVENT

0.98+

AlexaTITLE

0.97+

TodayDATE

0.96+

SouthwestLOCATION

0.95+

XeonCOMMERCIAL_ITEM

0.94+

todayDATE

0.94+

TwitterORGANIZATION

0.92+

fourQUANTITY

0.91+

less than 20 years agoDATE

0.9+

Next this weekDATE

0.88+

up to 72 coresQUANTITY

0.88+

one thingQUANTITY

0.87+

Moore'sTITLE

0.86+

South by SouthwestTITLE

0.86+

next 10 yearsDATE

0.84+

AlphaGoORGANIZATION

0.82+

5GORGANIZATION

0.8+

this past weekDATE

0.8+

vice presidentPERSON

0.72+

2017DATE

0.72+

a dayQUANTITY

0.71+

theCUBEORGANIZATION

0.68+

Silicon AngleLOCATION

0.65+

TensorFlowTITLE

0.65+

CongressORGANIZATION

0.64+

NervanaCOMMERCIAL_ITEM

0.62+

Mobile WorldEVENT

0.61+

doubleQUANTITY

0.6+

peopleQUANTITY

0.6+

GoTITLE

0.6+

dataQUANTITY

0.57+

secondQUANTITY

0.57+

MooreORGANIZATION

0.57+

Narrator: LiveTITLE

0.56+

Dr.PERSON

0.53+

thingsQUANTITY

0.51+

JetsonsORGANIZATION

0.4+

Federico Gomez Suarez, Thorn | SXSW 2017


 

(upbeat pop music) >> Narrator: Live from Austin, Texas, it's theCUBE. Covering South by Southwest 2017. Brought to you by Intel. Now, here's John Furrier. >> Okay welcome back everyone. We are here live at South by Southwest at the Intel AI lounge. This is SiliconANGLE's theCUBE, talking to some great guests. The theme for this week is AI for Social Good. I'm John Furrier with SiliconANGLE, our next guest is Federico Gomez Suarez, technical advisor and volunteer at Thorn, doing some really amazing things with technology for the betterment of society. Specifically a use case. So Federico, welcome to theCUBE, welcome to the AI Lounge here at Intel. >> Thank you very much for having me. >> So talk about Thorn. First of all, you work for Microsoft, but you're a volunteer? >> Correct. >> Talk about what Thorn is, and what you guys do. It's really a great story. >> So Thorn is a non-profit which focuses on driving technological innovation to fight child sexual exploitation. And it does it two ways. One of them is by doing research to find the new trends and the new ways that this is happening. But also by using the latest technology to find ways that we can actually fight this problem. Thorn has something called an innovation lab, where we're always trying new technology, we're trying AI just to find new ways to fight the problem. >> So this is really a great use case of where technology is being used for the betterment of society and good, because what you're doing is taking really cutting edge big data, machine learning, AI techniques. And the rage right now is facial recognition. >> Oh Yes! >> So talk about where and how it works. And what's the results? And can you share some of the impact? >> Yeah! So as part of my volunteer work, one of the projects that I have been working, is called a child finder service. And the idea of this work is, if we have an image, particularly an image of a child who have been missing, can we use facial recognition to determine whether another image is the same child. And this is actually a pretty challenging problem because the child may have gone missing many years back and now we want to match against another picture where the child may show much growth. >> Depending on the duration, right? >> And you know, if you imagine the impact of actually having this technology, a person who is trying to look for a missing child, if they have to go through a lot of pictures, it's actually hard to determine whether two people are the same person or not. So we're helping in that case. We're helping so that you don't have to go through so many pictures. So that we can highlight the ones that the machine thinks is actually the same person. >> Take us through how it works, in just a use case, just as an illustration. >> Yeah, So when a child goes missing, the National Center for Missing Children, which we work with, they publish a poster and that poster has an image of a missing child. Now once you have that image, you may want to say well are there places where the picture of that child may be showing up. One place that there's usually pictures of children being exploited are online ads. So let's say that there's online ads and you want to say, well in any of these ads that they use for exploitation, could there be the same child in both of them. So that's actually a use case. And just using face recognition technology, we can try to make the problem easier, faster than it would be if you were trying to do it manually. 
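Federico doesn't describe Thorn's internal code, but the matching step he outlines maps onto a standard face-embedding comparison: turn each face into a numeric vector and rank candidate photos by how close their vectors are to the one from the missing-child poster. A minimal sketch using the open-source face_recognition library (the file names and the 0.6 threshold are assumptions for illustration, not details from the interview or Thorn's actual pipeline):

```python
# Illustrative only: rank candidate photos by facial similarity to a poster photo.
# Uses the open-source face_recognition package; Thorn's real system is not described here.
import face_recognition

def encode(path):
    """Return the 128-dimension face encoding for the first face found in an image, or None."""
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    return encodings[0] if encodings else None

poster = encode("missing_child_poster.jpg")          # hypothetical file names
candidates = ["ad_photo_1.jpg", "ad_photo_2.jpg", "ad_photo_3.jpg"]

scored = []
for path in candidates:
    candidate = encode(path)
    if candidate is None:
        continue
    # face_distance returns one distance per known encoding; smaller means more alike.
    distance = face_recognition.face_distance([poster], candidate)[0]
    scored.append((distance, path))

# Surface the closest matches first so a reviewer checks a handful of images
# instead of hundreds; 0.6 is the library's commonly used default tolerance.
for distance, path in sorted(scored):
    flag = "<-- likely match" if distance < 0.6 else ""
    print(f"{path}: distance={distance:.3f} {flag}")
```

The ranking step is what turns "hours of looking at pictures" into "seconds": the human still makes the final call, but only on the candidates the machine scores as plausible.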
>> And you're doing a demo here in the Intel AI Lounge. What's in the demo? What are you showing? >> So in the demo, I'm showing how difficult it really is to do face recognition by hand. And how by just having some assistance from a machine, you can go from having to look at hundreds of images and spending potentially hours, to doing it in seconds. >> So how do you get involved? I mean, this is a volunteer organization, take us through your journey. How did you get involved? And talk about how you guys are getting more people involved, and how can someone get involved? >> Absolutely! So, you know, at Microsoft there is the Hack for Good community, and they encourage us to go and donate our time and our skills to non-profits. Two years ago, I had this idea, and I did a hackathon. And after the hackathon, I got connected with Thorn. I learned about what they do, and that's how I pretty much got involved. I was really fortunate that Microsoft supported me to actually go spend time with a non-profit. And when I started working with Thorn, I realized, hey, there are other tech companies also willing to help. So in this child finder service project, I work with Intel, I work with other companies all coming together to find ways to solve this problem using the cutting edge technology available. And you know, Thorn is always looking for volunteers, we're looking for what we call our Tech Defenders. If you go to our website, which is wearethorn.org/Sxsw, you'll find the link where you can actually volunteer your skills as a technical defender for Thorn. >> So talk about, that's very cool by the way. People should check out Thorn. Is there a website, Thorn? >> Yeah, it's wearethorn.org/sxsw. >> Okay, wearethorn.org/sxsw. For South by Southwest. So talk about the technology, because obviously Intel makes chips, makes stuff go faster, you got more compute, you've got more cores, you've got now cloud technology. And you've seen at Google Next, where they were showcasing their Xeon processor, that the AI trend now is becoming really, really, really big. I know Microsoft Azure, Amazon Web Services. They're all having these machine learning libraries, and the big trend is self-learning machines or deep learning. So this is a tech trend. But now when you apply it to this, it really can work. So, what is some of the technology, and what are some of the data sets that you use, how does it work under the covers? >> Yeah so, we actually start with an open source technology for face recognition. And after we started with this technology, we realized that we had to make it better. So we had to build data sets ourselves. For the data sets we have images of the posters that are published from the National Center. We have also started asking people to donate images over time, of themselves. Because we need images of people when they were children, and when they're older. And that's how we've been building data sets. And then having the data set, we need to go and train them. And that's where we're using hardware; in particular, using GPUs to actually do training is really key for us. The technology really under this is deep learning for us. We use existing deep-learning models and improve them for our particular scenario, 'cause there are special challenges in our case. Not only with the age, but also a lot of the images that we process. Sometimes there's heavy makeup, sometimes there's things like that. >> Or res, resolution right? Depending on the photo? Right? >> Yeah.
And you know, low resolution images in particular are a challenge, so we need to improve it, we need to keep training to actually get to the point where we feel we have a really robust system. >> I want to ask you a personal question. And this is something we were talking about on our intro segment, and something that I've been thinking a lot about. I haven't written about it yet, but I've been starting to tease it out on some of my thought leader interviews. Is that, in every major inflection point in the business of technology, there's always been a counter-culture movement. And it seems to be that, if you look at all the news, whether it's political or tech company news, and all this stuff happening around the world, there seems to be a social good culture developing. We're seeing a counter-culture where what was once valued, proprietary tech and algorithms, is now changing to open source, community, societal benefits. There seems to be a lot of activity, and no one's kind of put their finger on it. And you're a great use case of that example. >> And I feel like the Hack for Good community in Microsoft is growing, and there's people, peers of mine, working on all these kinds of interesting projects helping non-profits. >> And that's called Hack for Good? >> Yes. >> What's it called? >> Hack for Good in Microsoft. >> So that's a Microsoft hackathon with employees who just say, hey let's pick something good to do and they apply their programming technical skills to... >> Yeah, and you know there's a lot of support, and we're encouraged to do it. And it's to me inspiring to work in a company that really encourages that, and you know what? I see the same when I look across the industry. I see people willing to spend their evenings, like I spend my evenings working on some of this, or weekends, but we're passionate about making a difference. And I know I'm not alone. I've met a lot of people, and I know there's a lot more out there. >> Is there a community people can check out? Is it on the website? Is there an open source community? Are there certain software groups that are playing more than others? >> Actually I don't know. I know in my space, I think a great place to start is joining Thorn's Digital Defenders. But I would say if someone is passionate about a cause, it could be anything, and say I want to help, there's non-profits out there for that. And when I work with non-profits, they're so passionate about it, and sometimes they just need help in little things. And having so many tech communities go in and help them makes a huge difference. I would invite people to just go. If you're passionate about it, just go for it. Find a non-profit, they'll be happy to work with you. >> Federico, I want to ask you if you could share just some anecdotal impact that you guys have had. Can you share some successes, some advances? Just highlight some of the things. >> Yeah, so Thorn just published their yearly report and it was really encouraging. So, Thorn has a couple of different tools that they build. One of them is called Spotlight. Through the use of this tool last year, about 2,000 children who were victims of trafficking were recovered from around 6,000 victims. And you know, each victim is a person. And the fact that we're making a difference in those lives is extremely encouraging. And that's just one of the things that we were able to contribute. So that's one of the stories that we have. And to me it's not only that. 
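Federico doesn't name the framework or the training objective Thorn used, so the following is only a generic sketch of the "take an existing model and improve it for our scenario" step he describes: a metric-learning style fine-tune in PyTorch that pushes photos of the same person taken years apart close together in embedding space. The tiny network, the image shapes, and the random tensors are placeholders for a pretrained face model and a curated data set, not Thorn's pipeline.

```python
# Toy sketch of fine-tuning an embedding model so age-separated photos of the same
# person map to nearby vectors. Random tensors stand in for real image batches.
import torch
import torch.nn as nn

embedder = nn.Sequential(            # stand-in for a pretrained face-recognition backbone
    nn.Flatten(),
    nn.Linear(3 * 112 * 112, 256),
    nn.ReLU(),
    nn.Linear(256, 128),
)
loss_fn = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(embedder.parameters(), lr=1e-4)
device = "cuda" if torch.cuda.is_available() else "cpu"   # GPUs matter here, as he notes
embedder.to(device)

for _ in range(100):
    # anchor: childhood photo, positive: same person years later, negative: different person
    anchor = torch.randn(32, 3, 112, 112, device=device)
    positive = torch.randn(32, 3, 112, 112, device=device)
    negative = torch.randn(32, 3, 112, 112, device=device)

    loss = loss_fn(embedder(anchor), embedder(positive), embedder(negative))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```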
To me, it's also the fact that I see people who are willing to actually get engaged, learn more about these problems is another huge win. >> Final question for you Federico. Describe the scene here at the AI Lounge at Intel. For folks watching who aren't at South by Southwest, what is the vibe here? What are they showing? Obviously AI is the theme. AI for Social Good is our broadcast here. Hashtag is #intelai, if you're interested in sharing, we'd appreciate if you could retweet and share the love. What's your thoughts on with the vibe here? Describe the scene here. >> You know, when I look around, all the demos are amazing. Like each one of them, you're blown away by it. And it just shows you how in a practical way, AI can be changing lives or doing amazing things. There's the drones there on the video. The drones, I love those, they look amazing. And then there's also the demo around using an art style and getting your picture. I'm going to get mine in a second. I think if you come by, you'll see how AI really in practice, is able to contribute to people's lives. And the vibe is awesome. And I'm loving it here. >> Well I want to say congratulations. You do amazing things. >> Thank you. >> It's really a real testament to where the society's going AI for Social Change. Microsoft has a Hackathon for Good, and this is not a one-off. I mean Microsoft certainly has had that. Google's got the 20% work on your own project. Intel has it. Companies are getting involved, a counter-culture is developing for societal benefits. And all these new things happening, like autonomous vehicles, smart cities, these are paradigm shifting society changes around the world and will require a human involvement. Congratulations, and thanks for sharing. >> Thank you very much. And we have a hashtag just for our product which is #defendhappiness. >> John: Defend happiness? >> Yeah, which is all about stopping sexual exploitation and trafficking all around the world. >> Okay, #defendhappiness. Please put it out there and share it, tweet this video. And for the betterment of society, I'm John Furrier with Federico here at the Intel AI Lounge. More coverage from South by Southwest. Three days of coverage, full day Cube today, some interviews tomorrow. Intel has some amazing super demos they're going to be showing here throughout the weekend. Stay tuned on theCUBE, we'll be covering it. We'll be right back with more, after this short break. (electronic music)

Published Date : Mar 10 2017


Brian Fanzo | SXSW 2017


 

>> Narrator: Live from Austin, Texas, it's the Cube, covering South by Southwest 2017. Brought to you by Intel. (electronic music) Now, here's John Furrier. >> Hello, and welcome to a special broadcast of Silicon Angles, the Cube. This is our flagship program. We go out to the events and extract the signal from the noise. We're here for a special broadcast, kicking off South by Southwest. This show is the center of the entertainment/media universe and we are here in the Intel AI Lounge, the hashtag Intel AI, and of course, hashtag The Cube, hashtag South by Southwest, and, again, South by Southwest, I call it the Burning Man for the tech industry, the music industry. It is where all the creative, the talented, and the innovators, the bomb throwers, the disrupters, and also the innovators building the next generation technologies. We're going to have wall-to-wall coverage, all day interviews here, and our theme this week at South by Southwest, is really powered by Intel AI, and that is, AI for social good. We're going to be unpacking all the cutting edge technology that's taking us into the next generation. What's this world look like with AI? What's this world look like with autonomous vehicles? These are significant shifts that we've never seen in the computer industry before. We're going to be breaking them down. And here to kickoff day one of our Cube coverage is, my friend, Brian Fanzo. iSocialFanz, is the founder. Great guy, young guy-- younger than me but, you know, still in the front lines. Brian, welcome to our kickoff. >> Thanks for having me. I like to be here. First time on the show was 2013, VM world. So, we were inside VM world, 2013, and now outside the Intel Lounge at South by. Pretty exciting. >> So, it's high noon here. We got our sunglasses on. High noon in Texas. I'm wearing my Ray Bans, but you have your Snapchat spectacles on. What's going on? Do you like them? Give us the update. >> Yeah, I'm actually a new user of them. I'm one who likes to jump on new technology, embrace the FOMO. I kind of waited a little bit on the specs. I also wanted to have something cool to release them with. After I got them, I decided to keep them in wraps until South by Southwest, but it's kind of fun. It's interactive. They are definitely-- now that you can buy them online, I think they're going to be seen a little bit more frequent, but here at South by, just walking down the streets, people are still stopping and saying, "Hey, take a picture of me," and, "How does it work?" I've been impressed. The quality's been pretty good, and it's really easy to use. I think battery life has a long way to go but we'll see. I think battery life in everything mobile has a long way to go. >> Well, that leads to our whole theme here. We're going to have Robert Scoble on, good friend, he's been doing a lot in Virtual Reality and AR, benpar, and a lot of scientists from Intel. Really, folks, talking about this kind of movement. There's a shift going on, user behavior shifting. You're seeing actually entrepreneurship, young companies coming out and changing the world, and not changing the world to go public and some of those vanity things around money, but really around social change, and that's our theme. You have been really prolific over the past couple years, this year in particular, going out, pounding the pavement. You've been at a zillion events. We see each other all the time. Of course, we do over a hundred events last year. You see a lot of stuff. 
What's the pattern that you're seeing out right now? In this new world order, there's certainly a couple key trends, and the big ones are autonomous vehicles, smart cities. Median entertainment's changing. The home, Alexa, Google Home, automation, but a paradigm shift is happening. What is your take on this? >> I think it comes down to, a lot of it, I think we've all realized we want an experience. Experience is extremely important. But what does an experience mean? And how do you make an experience stand out? I think that's one of the bigger problems today, is, with so much noise, so many things that are out there, I think a lot of people-- the idea of social good, people want to know that what they're working with, what they're working on, has a greater purpose. And I think, today's world, you're connected with no limitations, no silos, and not only being connected at all times, but how can you be connected at the right time and reach the right audience. I think technology like AI and some of the things-- especially cognitive, the idea that machines are learning with us, so it's not just machines learning and leaving the humans behind, but it's humans teaching machines, machines teaching humans, and then moving forward together. I think that's some exciting change. And it's from TV entertainment to enterprise tech, to even the social media space where I do a lot of work in. >> We're here in the Intel AI Lounge. We're on 77 Rainey St, so come by if you're watching here in South by Southwest. Always on Twitter. The hashtag is Intel AI at the Cube, ping us. Brian, the whole theme is here at Intel, and at South by Southwest, is real progressive thinkers, Intel's tag line is, "Your amazing starts with Intel." You start to see, even Intel, which powered the PC revolution, servers, are starting to make chips not just for machines anymore, for the Cloud, for cars. If you just think about autonomous vehicles, for instance. You think about what that does for the younger generation coming in, the computing landscape isn't about a device anymore, it's about an integrated experience, and one of the things we've been talking about on the Cube, and we're going to talk about this year, is, my vision of counterculture. >> Right. >> Every single movement, if you go look at the 60s, the computer industry was impacted by the counterculture of the 60s. You look at the PC revolution with Steve Jobs in the 80s, that was a counterculture. We're starting to see a counterculture now around new amazing new things. >> Brian: Right. >> With software, machine learning, AI-- I mean, it's mind boggling. >> Brian: It is. >> So, what is this counterculture? Do you have any thoughts on it? Do you agree, do you have any thoughts on that? >> I like to say, when Henry Ford said, that if he would've asked then what they wanted, they would have said faster horses not cars. I think today's generation has a bigger megaphone, is not afraid to say what they want, and because now, we have all of the data, they're not afraid to share that data. We're being much more transparent, allowing people to be a little bit more authentic with what they're sharing. I think we now have the opportunity to really shape new technology based on more data than we've ever had, more understanding of our consumers than we've ever had, and I like to say the consumer's no longer dumb, therefore, we have to start really pushing the boundaries. 
I love the tagline with awesome in it, because I think we are now creating awesome experiences and connecting things, probably in ways we would have never imagined. >> Yeah, I mean, one of the things we've been unpacking on Silicon Angle on the Cube, is this notion of all these trends that we're watching. A couple things we can talk about-- Delete Uber campaign came out of nowhere. The company's reeling because of one blog post by a woman who worked there, accusing the CEO of having a misogynistic culture. Fake news during the election. Global communication, now network, with instant sharing. We start to see these points where the voices of the internet of people is now part and disrupting traditional sacred cows, whether it's government, play, academia, so you can almost see it if you look at it and zoom out, you can say, "Woah, a new set of amazing things are happening, good and bad." >> Yeah, for sure, and I think, also, in that same realm, where now, it's kind of this idea where-- I think for the longest time, technology was taking us further away from the human condition, and we were able to be fake online, throw up a website, and really distance ourselves from the consumer and the community. And I believe now, because people are seeing through that, and the idea where people are faking profiles, we're now coming full circle where live video and a lot of these other things are saying, "Hey, we want humans, we want-- and then we want to be able to connect and come together." And I love the idea that we don't need-- a movement doesn't require a resume, doesn't require you to live in the same location. You can come together around a shared purpose, a shared passion, leveraging technology, and you can do it anywhere in the world. Especially from a mobile perspective, it's exciting to see people being able to have their voice heard, no matter where they are in the world. >> I mean, they literally-- I hate to use the phrase democratization, but that is really what's happening here, and if you look at how politics is changing and media-- the gatekeepers used to be a few parts of the world, whether it's a group of guys or a group of media companies or whatever, they were the gatekeepers. That's now leveled. You have now a leveling of that where you have these voices. So, what's happening, in my mind, is this whole AI for social good is super interesting to me because, if you think about it, the younger generation that's coming online right now and growing up into adulthood or teens is post-9/11 generation. When you think about 9/11, what that meant for our world, and now you're seeing the whole terrorist thing, these are people who are digital natives. There's a sense of, I won't say philanthropy, but societal thinking. >> And I think a part of it is, I think everyone has always wanted the ability to make a bigger impact on the world, but they also, now, I believe-- chapter three of my upcoming book is actually the future of marketing as social good, because I believe people want to know that what they're investing their money, their time in, has a greater purpose than themselves, and I think, because they're able to be connected, and we're able to expose cultures-- I mean, my daughter says good night to Alexa when she goes to bed, as if it's a human, and she's like, "Well, I got to say good night to it." 
It's this idea where, we're able to share, connect, and communicate-- computers are as much a part of that as humans are online, and it's an exciting movement because I think it's going to highlight and amplify the good and we're going to start to be able to drown out the noise and the bad that, before, oftentimes had a larger microphone and now, we're able to kind of equalize that. >> This is what I like about what Intel's doing. If you think about AI for social good. First of all, Intel benefits, thanks to Intel for sponsoring the Cube here, appreciate that. Plug for Intel. But if you know what they're doing under the hood, Intel makes chips. Moore's law has been one of those things that, for the folks who don't know, look it up on Google, Moore's law. Doubling the power every x-number of months, that creates really good processing power. That powers your glasses. That powers your car. The car is now a data center. The car is now an internet device. A human might have implants, chips some day. So this notion of the power, the computing power and now software's creation an amazing thing, but if you look at what you just said, it has nothing to do with computers. >> Brian: Right. >> So, computers are enabling us to do things and be connected, but if you think about that next generation of impact, it's going to come from human beings. Human beings, part of communities. And I think, if you look at the community dynamic, which has always been kind of like, oh yeah, I'm part of a community, but now, that there's intercommunication, your glasses are doing a streaming a video, we're doing a live broadcast, Twitter's out there, people can talk all over the place. You have a self-forming governance, a network. >> Which is awesome, because now, it's connecting great people no matter where you're at, you're not limited by your resume or where you grew up, and I also think there's an element here where, if you look at collaboration-- I believe collaboration is this key for the future of innovation. I think it's the idea of chips coming together with hardware and software, working together, not only in the post-product stage, but also in the innovation stage. And also, R&D Teams working together to now make things faster and smaller and able to really push the envelope. Things like, in the glasses, having sound and video, and having it connected to my phone, and transmitting with very little human input, we're now able to get perspectives that we would have never imagined, especially from just a regular person walking the streets. >> One of the things I want to get your thoughts on, because you're in the front lines, and also, I look at you, and you're not a young guy, you're an adult, but you're part of a new generation. I was talking with some folks at Stanford just last week around algorithms, and it's kind of an AI conversation, and something popped up. There is actually an issue of gender bias in algorithms. Who would have ever thought? So, now, there's kind of like algorithms for algorithms. This is kind of this AI for social good where, we don't want to actually start bringing our biases into the algorithms, so we have to always be monitoring that. But that brings up the whole point of-- Okay, we're living in a world of first time opportunities and problems and challenges. In the old days in the tech, we knew what the processes were: automated accounting software, automate this, automate some IT department, with unknown technology. 
And the technology would come out, like Intel and others-- now, we have unknown processes and problems, and known technology developing faster. So, what that's going to require is the human involvement, the communities to be very agile. >> Without question. Not only embrace change, but you also have to look at communities now where, I don't believe we are doing things massively different as humans today than we were years ago, we just now have more transparency and more exposure and access to all of our lives, and I think, with that becomes, as technology exposes more of our vulnerabilities, we as humans have to start to realize that people are more vulnerable and no one's perfect, and things are migrating in a different pattern. Give me that collaboration because we have to be able to trust the algorithms, there has to be that transparency there, but we also want some version of our own privacy, but I kind of live in the space where I don't think of privacy anymore. I think of things as transparently sharing, engaging, and then, hopefully, technology amplifying that and giving us the controls. >> And that's why I like how the AI for social good that Intel's doing here at South by, because it's not just the tech, it's the humanization of it, and South by Southwest represents a global culture of tech, creative tech practitioners, tech visionaries, futurists, kind of all kind of coming together. So, give us the update so far. You've been on the streets. You've been seeing folks last night. I've been on the influencers list last night on Facebook, there's a special group there, all our friends are on there. What's the update so far at South by Southwest, what's the current vibe, how do you see it going this week, what are some of the themes you see popping out of the woodwork at South by Southwest? >> I think last year was interesting. This is my third year in a row at South by, and I present and talk on a bunch of different topics, but I think last year, it was a lot about what is VR, and VR was shiny and fancy, and the conversations now seem to be, what is VR doing, what's the content look like, and where is it going and how do I get there. That's an exciting conversation because, I think, instead of it being a shiny object, it's now VR and AR and AI, how do they intertwine into our lives. The idea of interactive-- South by Southwest Interactive, really what these tools and technology are, is connecting that interactive capabilities. It's interesting to see the different car brands here. You have Intel, you have Dell, you have IBM, but then you also have some of these other brands that are trying to push the, I'd say, the startup agenda. That's exciting, because I remember, I wasn't here for Twitter when you were here for Twitter, but Meerkat, two years ago, for me, was the darling live streaming app that launched here, and it died a year later, but I'm glad to see that innovation and the startup culture is now mixing, kind of hand-in-hand with the enterprise. >> Well, I'm going to see some of my old peeps from the Web 2.0 days, and a lot of people were like, "Oh, the Web 2.0 days didn't happen," just like the bubble burst and the internet bubble, and that burst, but it all happened. Everything that was put out there, pets online, everything online went online. Everything that was promoted in Web 2.0 is happening now, so I believe that you're seeing now the absolute operationalizing, the globalization of democratization. 
The technology has now come with software for that democratization and now, what's exciting is, with machine learning, data sets, and all the stuff happening with the cloud technology and 5G, it's going to get faster now. >> Which is exciting, because I think real time is a powerful element, but if you're able to get multiple senses of data, interact with machines, and ultimately push that forward at the right time, I think that collaboration of machine, human, and experience at the right time is where we start pushing new innovations. AR and VR, even some of this cognitive type learning, starts hitting to mainstream, which I'm excited about because, I think, we're getting to this culture now where we look at change and we're hopefully now embracing the opportunities rather than looking and saying what you do. I think, now we're realizing no one cares what the product is, we want to know how does it impact us and why should we care. >> Brian Fanzo, new generation, a millennial, making things happen out there, checking things out. Of course, iSocialFanz is his Twitter handle, check him out. Always great content, always out there, the canary in the coalmine, poking at the new stuff and analyzing it and sharing it, oversharing, as some people would say, but not in my book. Always great to have you on. Good to see you. Thanks for spending the time, taking off our AI Lounge. >> My pleasure. Happy South by Southwest. >> Alright, we'll be back with more Intel AI Lounge after this short break. Hashtag Intel AI. I'm John Furrier with the Cube. We'll be right back. (electronic music)

Published Date : Mar 10 2017


Robert Scoble, Transformation Group - SXSW 2017 - #IntelAI - #theCUBE


 

>> Narrator: Live from Austin, Texas, it's the Cube covering South by Southwest 2017. Brought to you by Intel. Now, here's John Furrier. >> Hey, welcome back everyone. We're live here in the Cube coverage of South by Southwest. We're at the Intel AI Lounge, hashtag Intel AI. And the theme is AI for social good. So if you really support that, go on Twitter and use the hashtag Intel AI and support our cause. I'm John Furrier with SiliconANGLE, I'm here with Robert Scoble, @Scobleizer. Just announcing this week the formation of his new company, the Transformation Group. I've known Robert for over 12 years now. Influencer, futurist. You've been out and about with the virtual reality, augmented reality, you're wearing the products. >> Yup. >> You've been all over the world, you were just at Mobile World Congress, we've been following you. You are the canary in the coalmine poking at all the new technology. >> Well, the next five years, you're going to see some mind blowing things. In fact, just the next year, I predict that this thing is going to turn into a three ounce pair of glasses that's going to put virtual stuff on top of the world. So think about coming back to South by Southwest, you're wearing a couple pairs of glasses, and you are going to see blue lines on the floor taking you to your next meeting or TV screens up here so I can watch the Cube while I walk around the streets here. It's going to be a lot of crazy stuff. >> So, we've been on our opening segment, we talked about it, we just had a segment on social good around volunteering, but what the theme is coming out is this counter culture where there's now this humanization aspect; they called it the consumerization of IT in the past. But in the global world, the human involvement now has these immersion experiences with technology, and now is colliding with impacting lives. >> Well, absolutely true. >> This is a Microsoft HoloLens first of all. And HoloLens puts virtual stuff on top of the real world. But at home, I have an HTC Vive, and I have an Oculus Rift for VR, and VR is that immersive media. This is augmented reality or what we call mixed reality, where the images are put on top of the world. So I can see something pop off of you. In fact, last year at South by, I met a guy who started a company called Eyefluence, he showed me a pair of glasses and you look at a bottle like this and a little menu pops off the side of a bottle, tells you how much it is, tells you what's in the bottle, and lets you buy new versions of this bottle, like a case of it and have it shipped to my house all with my eyes. That's coming out from Google next year. >> So the big thing on the immersion, the AR, you look at what's going on at societal impact. What are the things that you see? Obviously, we've been seeing at Mobile World Congress before Peelers came out, autonomous vehicles is game changing, smart cities, media and entertainment, the world that we know close to our world, and then smart home. >> Oh yeah. >> Smart home's been around for years, but autonomous vehicles truly are a societal change. >> Yes. >> The car is a data center now. It's got experiences. And there's three new startups you should pay attention to, in the new cars that are coming in the next 18 months. Quanergy is one. They make a new kind of lidar, a new sensor. In fact, there's sensors here that are sensing the world as I walk around and seeing all the surfaces. The car works the same way. 
It has to see ahead to know that there's a kid in front of your car, the car needs to stop, right. And Quanergy is making a focusable semiconductor light R, that's going to be one to watch. And then there's a new kind of brain, a new kind AI coming, and DeepScale is the one that I'm watching. The DeepScale brain uses a new third company called Luminar Technologies, which is making a new kind of 3D map of the world. So think about going down the street. This new map is going to know every pot hole, every piece of paint, every bridge on the street, and it's going to, the brain, the AI, is going to compare the virtual map to the real map, to the real world and see if there's anything new, like a kid crossing across the street. Then the car needs to do something and make a new decision. So 3D startups are going to really change the car. But the reason I'm so focused on mixed reality, is mixed reality is the user interface for the self-driving car, for the smart city, for the internet of things, the fields in your farm or what not, and for your robot, and for your drone. You're going to have drones that are going to know this space, and you can fly it right, I've seen drones already in the R & D labs at Intel. You can fly them straight at the wall, it'll stop an inch from the wall because it knows where the wall is. >> 'Cause it's got the software, it's got the sensors, the internet of things. We are putting out a new research report at Wikibound called IOT and P, Internet Things and People. And this is the key point. I want to get your thoughts on this because you nailed a bunch of things, and I want you to define for the folks watching what you mean by mixed reality because this is not augmented reality. >> Well it is. >> John: You're talking about mixed reality. >> It is augmented reality, it's just-- >> John: But why mixed reality? >> We came up with the new term called mixed reality because on our, we have augmented reality on phones. But the augmented reality you have on phones like the Pokemon's we've been talking about. They're not locked to the world. So when I'm wearing this, there's actually a shark right here on this table, and it's locked on the table, and I can walk around that shark. And it seems like it's sitting here just like this bottle of water is sitting on the table. This is mind blowing. And now we can actually change the table itself and make it something else. Because every pixel in this space is going to be mapped by these new sensors on it. >> So, let's take that to the next level. You had mentioned earlier in your talk just now about user interface to cars. You didn't say in user interface to cars, you didn't say just smart, you kind of implied, I think you meant it's interface to all the environments. >> Robert: Yes. >> Can you expand on what your thoughts on that? >> You're going to be wearing glasses that look like yours in about a year, much smaller than this. This is too dorky and too big for an average consumer to wear around right, but if they're three ounces and they look something like what you're wearing right now. >> Some nice Ray Bans, yup. >> And they're coming. I've seen them in the R & D labs. They're coming from a variety of different companies. Google, Facebook, Loomis, Magic Leap, all sorts of different companies are coming with these lightweight small glasses. You're going to wear them around and it's going to lay interface elements on everything. So think about my watch. 
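The "compare the virtual map to the real world" idea Scoble describes is essentially change detection between a stored 3D map and a live lidar scan. A deliberately simplified sketch with synthetic data and plain NumPy (real perception stacks use registered point clouds, probabilistic occupancy grids, and learned object detectors on top of this basic idea):

```python
# Toy change detection: flag live-scan points that fall in space the prior map says is empty.
import numpy as np

def occupied_voxels(points, voxel_size=0.25):
    """Quantize points into integer voxel coordinates so two clouds can be compared set-wise."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

def new_points(prior_map, live_scan, voxel_size=0.25):
    """Return live-scan points whose voxels are absent from the prior map."""
    known = occupied_voxels(prior_map, voxel_size)
    voxels = np.floor(live_scan / voxel_size).astype(int)
    mask = np.array([tuple(v) not in known for v in voxels])
    return live_scan[mask]

rng = np.random.default_rng(0)
road = rng.uniform([-10, -10, 0.0], [10, 10, 0.1], size=(5000, 3))     # flat road in the HD map
pedestrian = rng.uniform([3, 3, 0.0], [3.4, 3.4, 1.7], size=(80, 3))   # not in the map
live_scan = np.vstack([road[:4000], pedestrian])

flagged = new_points(road, live_scan)
print(f"{len(flagged)} points flagged as unexpected, centered near {flagged.mean(axis=0).round(2)}")
```

Anything flagged this way still has to be classified and acted on by the planning layer, which is where the new kinds of "brains" he mentions come in.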
Why if I do this gesture, why do I have to look at a little tiny screen right here? Why isn't the whole screen of my calendar pop up right here? They could do that, that's a gesture. This computer in here can sense that I'm doing a gesture and can put a new user interface on top of that. Now, I've seen tractors that have sensors in them. Now, using a glass like this, it shows me what the pumps are doing in the tractor on the glasses. I can walk around a factory floor and see the sensors in the pipes on the factory floor and see the sensors in my electric motors on the factory. All with a one pair of glasses. >> So this is why the Intel AI thing interests me, this whole theme. Because what you just described requires data. So one, you need to have the data available. >> Robert: Yes. >> The data's got to be a frictionless, it can't be locked in some schema as they say in the database world. It's got to be free to be addressed by software. >> Yes. >> You need software that understands what that is. And then you need horsepower, compute power, chips to make it all happen. >> Yeah, think about a new kind of TV that's coming soon. I'm going to look at TV like this one, a physical TV. But it's too small and it's in the wrong angle. So I can just grab the image off the TV and virtually move it over here. And I'll see it, nobody else will see it. But I can put that TV screen right here, so I can watch my TV the way I want to watch it. >> Alright so this is all sci-fi great stuff, which actually-- >> It's not sci-fi, it's here already. You just don't have it. I have it (laughs). >> Well, you can see it's kind of dorky, but I'm not going to say you're a dork 'cause I know you. To mainstream America, mainstream world, it's a bit sci-fi but people are grokking this now. Certainly the younger generation that are digital native all are coming in post-9/11, they understand that this is a native world to them, and they take to it like a fish to water. >> Yes. >> Us old guys, but we are the software guys, we're the tech guys. So continue to the mainstream America, what has to happen in your mind to mainstream this stuff? Obviously self driving cars is coming. It's in fleets first, and then cars. >> We have to take people on a journey away from computing like this or computing like this to computing on glasses. So how do we do that? Well, you have to show deep utility. And these glasses show that. Wearing a HoloLens, I see aliens coming out of the walls. Blowing holes in this physical wall. >> John: Like right now? >> Yeah. >> What are you smoking (laughs)? >> Nothing yet. And then I can shoot them with my fingers because the virtual things are mixing with the real world. It's a mind blowing experience. >> So do you see this being programmed by users or being a library of stuff? >> Some are going to be programmed by users like Minecraft is today on a phone or on a tablet. Most of it is going to be built by developers. So there's a huge opportunity coming for developers. >> Talk about the developer angle, because that's huge. We're seeing massive changes in the developer ecosystems. Certainly, open source is going to be around for awhile. But which friends do you see in open source, I mean, I'm sorry, in the developer community, with this new overlay of 5G connectivity, all this amazing cloud technology? >> There's a new 3D mapping and it's a slam based map. So think about this space, this physical space. 
These sensors that are on the front of these new kinds of glasses that are coming out are going to sense the world in a new way and put it into a new kind of database, one that we can put programmatic information into. So think about me walking around a shopping mall. I walk in the front door of a shopping mall, I cross geo fence in that shopping mall. And the glasses then show me information about the shopping mall 'cause it knows it's in the shopping mall. And then I say, hey Intel, can you show me, or Siri, or Alexa, or Cortana, or whoever you're talking to. >> Mostly powered by Intel (laughs). >> Most of it is powered by Intel 'cause Intel's in all the data centers and all these glasses. In fact, Intel is the manufacturer of the new kind of controller that's inside this new HoloLens. And when I ask it, I can say, hey, where's the blue jeans in this shopping mall? And all of a sudden, three new pairs of blue jeans will appear in the air, virtual blue jeans, and it'll say this one's a Guess, this one's a Levi's, this one's a whatever. And I'll say, oh I want the Levi's 501, and I'll click on it, and a blue line will appear on the floor taking me right to the product. You know, the shopping mall companies already have the data. They already know where the jeans are in the shopping mall and these glasses are going to take you right to it. >> Robert, so AI is the theme, it's hot, but AI, I mean I love AI, don't get me wrong. AI is a mental model in my mind for people to kind of figure out that this futuristic world's here and it's moving fast. But machine learning is a big part of what AI is becoming. >> Yes. >> So machine learning is becoming automated. >> Well it's becoming a lot faster. >> Faster and available. >> Because it use to take 70,000 images of something like a bottle to train the system that this is a bottle versus a can, bottle versus can. And the scientists have figured out how to make it two images now. So all I need is two images of something new to train the system that we have a bottle versus a can. >> And also the fact that computes available. There's more and more faster processors that this stuff can get crunched, the data can be crunched. >> Absolutely, but it's the data that trains these things. So let's talk about the bleeding edge of AI. I've seen AIs coming out of Israel that are just mind blowing. They take a 3D image of this table, they separate everything into an object. So this is an object. It's separate from the table that it's on. And it then lets me do AI look-ups on the object. So this is a Roxanne bottle of water. The 3D sensor can see the logo in this bottle of water, can look to the cloud, find all sorts of information about the manufacturer here, what the product is, all sorts of stuff. It might even pull down a CAD drawing like the computer that you're on. Pull down a CAD drawing, overlay it on top of the real product, and now we can put videos on the back of your Macintosh or something like that. You can do mind blowing stuff coming soon. That's one angle. Let's talk about medical. In Israel, I went to the AI manufacturers. They're training the MRI machines to recognize cancers. So you're going to be lying in an MRI machine and it's going to tell the people around the machine whether you have cancer or not and which cancer. And it's already faster than the doctor, cheaper than the doctor, and obviously doesn't need a doctor. And that's going to lead into a whole discussion-- >> The Christopher thing. These are societal problems by the way. 
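The blue-line-on-the-floor experience described above reduces, once the space has been mapped, to a routing query: locate the shopper and the product in the store's map and compute a path between them for the glasses to draw. A toy sketch with a made-up floor plan and breadth-first search standing in for a real navigation mesh:

```python
# Hypothetical mall floor plan: '.' walkable, '#' wall, 'E' entrance, 'J' the jeans display.
from collections import deque

FLOOR = [
    "E....#....",
    ".###.#.##.",
    ".#...#..#.",
    ".#.###..#.",
    ".#......#J",
]

def find(grid, ch):
    """Return the (row, col) of the first cell containing the given character."""
    for r, row in enumerate(grid):
        if ch in row:
            return (r, row.index(ch))

def blue_line(grid, start, goal):
    """Breadth-first search over walkable cells; returns the cell path the glasses would draw."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] != "#" and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = came_from[cell]
    return path[::-1]

print(blue_line(FLOOR, find(FLOOR, "E"), find(FLOOR, "J")))
```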
The policy is the issue, not the technology. How do you deal with the ethical issues around gene sequencing and gene editing? >> That's a whole other thing. I'm just recognizing whether you have cancer on this example. But now we need to talk about jobs. How do we make new jobs in massive quantities. Because we're going to decimate a lot of peoples' jobs with these new technologies, so we need to talk about that, probably on a future Cube. But I think mixed reality is going to create millions of jobs because think about this bottle. In the future, I'm going to be wearing a pair of glasses and Skrillex is going to jump out of the bottle, on to the table, and give a performance, and then jump back into the bottle. That's only four years away according to the guy who's running a new startup called 8i. He's making a new volumetric camera, it's a camera 40 or 50 cameras around-- >> If you don't like Skrillex, Martin Garrix can come on. >> Whatever you want. Remember, this media's going to be personalized to your liking. Spotify is already doing that. Do you listen to Spotify? >> John: Yeah, of course. >> Do you listen to the discovery weekly feature on that? >> No. >> You should. It's magical. It brings you the best music based on what you've already listened and it's personalized. So your discovery weekly on your phone is different than the discovery weekly on my phone. And that's run by AI. >> So these are new collaborative filters. This is all about software? >> Yeah. Software and a little bit of hardware. Because you still need to sense the world in a new way. You're going to get new watches this year that have many more sensors that are looking in your veins for whether you have high blood pressure, whether you're a in shape for running. By the way, you're going to have an artificial coach when you go running in the morning, running next to you, just like when you see Mark Zuckerberg. He can afford to pay a real coach, I can't. So he has a real coach running with him every morning and saying hey, we're going to do some interval training today, we're going to do some sprints to get your cardio up. Well, now the glasses are going to do that for you. It's going to say, let's do some intervals today and you're going to wear the watch that's going to sense your blood pressure and your heart rate and the artificial coach running next you. And that's only two years away. >> Of course, great stuff. Robert Scoble, we have to close the segment. Quickly, how has South by changed in ten years? >> Well, 20, I've been coming for 20 years. I've been coming since it was 500 people and now it's 50,000, 70,000 people, it's crazy. >> How has it changed this year? What's going on this year? >> This is the VR year. Every year we have a year right. There was the Twitter year, there was the Foursquare year. This is the VR year, so if you're over at Capital Factory, you're going to see dozens of VR experiences. In fact, my co-author's playing the Mummy right now. I had to come on your show, I got the short straw (laughs). Sit in the sun instead of playing some cool stuff. But there's VR all over the place. Next year is going to be the mixed reality year, and this is a predictor of the next year that's coming. >> Alright, Robert Scoble, futurist right here on the Cube. Also, congratulations on your new company. You're going out on your own, Transformation Group. >> Yeah, we're helping out brands figure out this mixed reality world. >> Congratulations of course. 
As always, it is a transformational time in the history of our world and certainly the computer industry is going to a whole other level that we haven't seen before. And this is going to be exciting. Thanks for spending the time with us. It's the Cube here live at South by Southwest special Cube coverage, sponsored by Intel. And the hashtag is Intel AI. If you like it, tweet us at Twitter. We'll be happy to talk to you online. I'm John Furrier. More after this short break. (electronic music)

Published Date : Mar 10 2017
