Ziya Ma, Intel | Big Data SV 2018


 

>> Live from San Jose, it's theCUBE! Presenting Big Data Silicon Valley, brought to you by SiliconANGLE Media and its ecosystem partners.

>> Welcome back to theCUBE, our continuing coverage of our event, Big Data SV. I'm Lisa Martin with my co-host George Gilbert. We're down the street from the Strata Data Conference, hearing a lot of interesting insights on big data. Peeling back the layers, looking at opportunities, some of the challenges, barriers to overcome, but also the plethora of opportunities that enterprises alike can take advantage of. Our next guest is no stranger to theCUBE; she was just on with me a couple days ago at the Women in Data Science Conference. Please welcome back to theCUBE Ziya Ma, Vice President of the Software and Services Group and Director of Big Data Technologies at Intel. Hi Ziya!

>> Hi Lisa.

>> Long time, no see.

>> I know, it was really just two to three days ago.

>> It was. Well, and now I can say happy International Women's Day.

>> The same to you, Lisa.

>> Thank you, it's great to have you here. So as I mentioned, we are down the street from the Strata Data Conference. You've been up there over the last couple days. What are some of the things that you're hearing with respect to big data? Trends, barriers, opportunities?

>> Yeah, so first it's very exciting to be back at the conference again. The one biggest trend, one topic that's hit really hard by many presenters, is the power of bringing big data systems and data science solutions together. We're definitely seeing in the last few years the advancement of big data and the advancement of data science, you know, machine learning and deep learning, truly pushing forward business differentiation and improving our quality of life. So that's definitely one of the biggest trends. Another thing I noticed is there was a lot of discussion of big data and data science getting deployed into the cloud. What are the learnings, what are the use cases?
So I think that's another noticeable trend. And also, there were some presentations on doing data science or having business intelligence on edge devices. That's another noticeable trend. And of course, there was discussion of security and privacy for data science and big data, so that continued to be one of the topics.

>> So we were talking earlier, 'cause there are so many concepts and products to get your arms around. If someone is looking at AI and machine learning on the back end (you know, we'll worry about edge intelligence some other time), we know that Intel has the CPU with the Xeon and then the lower-power one with Atom. There's the GPU, there are ASICs and FPGAs, and then there are these software layers at a higher abstraction level. Help us put some of those pieces together for people who are saying, okay, I know I've got a lot of data, I've got to train these sophisticated models, explain this to me.

>> Right, so Intel is a real solution provider for data science and big data. At the hardware level, as you mentioned, George, we offer a wide range of products, from general purpose like Xeon to targeted silicon such as FPGAs and ASIC chips like Nervana. And we also provide adjacencies like networking hardware, non-volatile memory, and mobile. Those are the other adjacent products that we offer. Now on top of the hardware layer, we deliver a fully optimized software solution stack, from libraries and frameworks to tools and solutions, so that we can help engineers and developers create AI solutions with greater ease and productivity. For instance, we deliver the Intel-optimized Math Kernel Library, which leverages the latest instruction sets to give significant performance boosts when you are running your software on Intel hardware. We also deliver frameworks like BigDL, for Spark and big data customers who are looking for deep learning capabilities.
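The payoff from an optimized math kernel library can be felt even from Python. The sketch below is only a stand-in illustration, not Intel code: NumPy's linear algebra dispatches to a compiled BLAS backend (which may be MKL on Intel builds, depending on how NumPy was compiled), and even a tiny matrix product shows the gap against a pure-Python loop:

```python
import time
import numpy as np

def matmul_naive(a, b):
    """Pure-Python triple loop: the baseline an optimized kernel replaces."""
    n, m, k = len(a), len(b), len(b[0])
    out = [[0.0] * k for _ in range(n)]
    for i in range(n):
        for j in range(k):
            s = 0.0
            for p in range(m):
                s += a[i][p] * b[p][j]
            out[i][j] = s
    return out

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = rng.random((64, 64))

t0 = time.perf_counter()
slow = matmul_naive(a.tolist(), b.tolist())
t1 = time.perf_counter()
fast = a @ b  # dispatched to the compiled BLAS backend
t2 = time.perf_counter()

# Same numerical result, very different cost.
assert np.allclose(slow, fast)
print(f"naive: {t1 - t0:.4f}s  BLAS: {t2 - t1:.6f}s")
```

The exact speedup depends on the machine and the BLAS build, but the point stands: the same mathematics, routed through a tuned kernel, runs orders of magnitude faster.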
We also optimize popular open source deep learning frameworks like Caffe, TensorFlow, MXNet, and a few others. So our goal is to provide all the necessary solutions so that, in the end, our customers can create the applications and solutions they really need to address their biggest pain points.

>> Help us think about the maturity level now. We know that the very most sophisticated internet service providers have been all over this machine learning for quite a few years. Banks, insurance companies, statisticians and actuaries who have that sort of skillset, are beginning to deploy some of these early production apps. Where are we in terms of getting this out to the mainstream? What are some of the things that have to happen?

>> To get it to mainstream, there are so many things we could do. First, I think we will continue to see the wide range of silicon products, but there are a few things Intel is pushing. For example, we're developing the Nervana Graph compiler, which will encapsulate the hardware integration details and present a consistent API for developers to work with. This is one thing that we hope can eventually help the developer community. We are also collaborating with end users from the enterprise segment. For example, we're working with the financial services industry, with the manufacturing sector, and also with customers from the medical field and online retailers, trying to help them deliver or create data science and analytics solutions on Intel-based hardware or Intel-optimized software. So that's another thing that we do, and we're actually seeing very good progress in this area. Now, we're also collaborating with many cloud service providers. For instance, we work with some of the top seven cloud service providers, both in the U.S.
and also in China, to democratize not only our hardware but also our libraries and tools, BigDL, MKL, and other frameworks and libraries, so that our customers, including individuals and businesses, can easily access those building blocks from the cloud. So definitely we're working on several fronts.

>> So, last question in the last couple of minutes. Let's kind of vibe on this collaboration theme. Tell us a little bit about the collaboration you're having with, as you mentioned, customers in some highly regulated industries, for example. But help us understand the symbiosis: what is Intel learning from your customers that's driving Intel's innovation of your technologies in big data?

>> That's an excellent question. So Lisa, maybe I can start by sharing a couple of customer use cases, the kinds of solutions we help our customers address. I think it's always wise not to start a conversation with the customer on the technology that you deliver. You want to understand the customer's needs first, so that you can provide a solution that really addresses their biggest pain point rather than simply selling technology. So for example, we have worked with an online retailer to better understand their customers' shopping behavior and to assess their customers' preferences and interests. Based upon that analysis, the online retailer made different product recommendations and maximized its customers' purchase potential, and it drove up the retailer's sales. That's one type of use case we have worked on. We have also partnered with customers from the medical field. Actually, today at the Strata Conference we had a joint presentation with UCSF, where we helped the medical center automate the diagnosis and grading of meniscus lesions. Today that's all done manually by the radiologist, but now that entire process is automated.
The result is much more accurate, much more consistent, and much more timely, because you don't have to wait for the availability of a radiologist to read all the 3D MRI images; that can all be done by machines. So those are the areas where we work with our customers: understand their business need and give them the solution they are looking for.

>> Wow, the impact there. I wish we had more time to dive into some of those examples. But we thank you so much, Ziya, for stopping by twice in one week to theCUBE and sharing your insights. And we look forward to having you back on the show in the near future.

>> Thanks. So thanks Lisa, thanks George, for having me.

>> And for my co-host George Gilbert, I'm Lisa Martin. We are live at Big Data SV in San Jose. Come down, join us for the rest of the afternoon. We're at this cool place called Forager Tasting and Eatery. We will be right back with our next guest after a short break. (electronic outro music)

Published Date : Mar 8 2018



Bill Jenkins, Intel | Super Computing 2017


 

>> Narrator: From Denver, Colorado, it's theCUBE. Covering Super Computing 17. Brought to you by Intel. (techno music)

>> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're in Denver, Colorado at the Super Computing Conference 2017. About 12 thousand people, talking about the outer edges of computing. It's pretty amazing. The keynote was huge. The square kilometer array, a new vocabulary word I learned today. It's pretty exciting times, and we're excited to have our next guest. He's Bill Jenkins, a Product Line Manager for AI on FPGAs at Intel. Bill, welcome.

>> Thank you very much for having me. Nice to meet you, and nice to talk to you today.

>> So you're right in the middle of this machine-learning AI storm, which we keep hearing more and more about. Kind of the next generation of big data, if you will.

>> That's right. It's the most dynamic industry I've seen since the telecom industry back in the 90s. It's evolving every day, every month.

>> Intel's been making some announcements. Using this combination of software programming and FPGAs on the acceleration stack to get more performance out of the data center. Did I get that right?

>> Sure, yeah, yeah. Pretty exciting. The use of both hardware, as well as software on top of it, to open up the solution stack, open up the ecosystem.

>> What of those things are you working on specifically?

>> I really build first the enabling technology that brings the FPGA into that Intel ecosystem, where Intel is trying to provide that solution from top to bottom to deliver AI products.

>> Jeff: Right.

>> Into that market. FPGAs are a key piece of that because we provide a different way to accelerate those machine-learning and AI workloads. We can be an offload engine to a CPU. We can be inline analytics to offload the system and get higher performance that way. We tie into that overall Intel ecosystem of tools and products.

>> Right.
So that's a pretty interesting piece, because real-time streaming data is all the rage now, right? Not in batch; you want it now. So how do you get it in? How do you get it written to the database? How do you get it into the microprocessor? That's a really, really important piece, and that's different than even two years ago. You didn't really hear much about real-time.

>> Like I said, it's evolving quite a bit. Now, a lot of people deal with training. That's the science behind it. The data scientists work to figure out what topologies they want to deploy and how they want to deploy 'em. But now, people are building products around it.

>> Jeff: Right.

>> And once they start deploying these technologies into products, they realize that they don't want to compensate for limitations in hardware; they want to work around them. A lot of this evolution that we're building is to try to find ways to more efficiently do that compute. What we call inferencing: the actual deployed machine-learning scoring, as it were.

>> Jeff: Right.

>> In a product, it's all about how quickly I can get the data out. It's not about waiting two seconds to start the processing. You know, in an autonomous car where someone's crossing the road, I'm not waiting two seconds to figure out it's a person.

>> Right, right.

>> I need it right away. So I need to be able to do that with video feeds, right off a disk drive, with the ethernet data coming in. I want to do that directly inline, so that my processor can do what it's good at, and we offload that processor to get better system performance.

>> Right. And then on machine-learning specifically, 'cause that is all the rage. And it is learning, so there is a real-time aspect to it. You talked about autonomous vehicles. But there's also continuous learning over time that's not necessarily dependent on learning immediately: continuous improvement over time. What are some of the unique challenges in machine-learning?
And what are some of the ways that you guys are trying to address those?

>> Once you've trained the network, people always have to go back and retrain. They say, okay, I've got good accuracy, but I want better performance. Then they start lowering the precision: today we're at 32-bit, maybe 16-bit. Then they start looking into eight. But the problem is, their accuracy drops. So they retrain the network at eight bits, to get the performance benefit but with the higher accuracy. The flexibility of the FPGA actually allows people to take that network at 32-bit, with the 32-bit trained weights, but deploy it in lower precision. We can abstract away the hardware because it's so flexible: we can do what we call 11-bit floating point, or even 8-bit floating point. Even here today at the show, we've got a binary and ternary demo, showcasing the flexibility that the FPGA can provide today with that building-block piece of hardware that the FPGA can be. And it can really provide not only the topologies that people are trying to build today, but tomorrow's.

>> Jeff: Right.

>> Future-proofing their hardware, but also the precisions that they may want to use, so that they don't have to retrain. They can get less than a 1% accuracy loss, but they can lower the precision to get all the performance benefits of that data scientist's work in coming up with a new architecture.

>> Right. But it's interesting, 'cause there are trade-offs, right?

>> Bill: Sure.

>> There's no optimum solution. It's optimum relative to what you're trying to optimize for.

>> Bill: Right.

>> So really, the ability to change, the ability to continue to work on those learning algorithms, to be able to change your priority, is pretty key.

>> Yeah, a lot of times today, you want this. So this has been the mantra of the FPGA for 30 plus years. You deploy it today, and it works fine. Maybe you build an ASIC out of it. But what you want tomorrow is going to be different.
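The precision lowering Bill describes, taking 32-bit trained weights and deploying them at reduced precision, can be sketched as a minimal post-training quantization round trip. This is an illustrative software simplification, not the FPGA flow itself, and the array shapes and values below are arbitrary:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization: float32 weights -> int8 plus a scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.1, size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()

# Round-trip error is bounded by half a quantization step,
# which is why accuracy loss can stay small without retraining.
assert err <= scale / 2 + 1e-8
print(f"max abs error: {err:.6f} (quantization step {scale:.6f})")
```

Real deployments add per-channel scales, zero points, and calibration data, and (as Bill notes) sometimes retraining at the lower precision; the sketch only shows why an 8-bit representation of 32-bit weights can already be close.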
So maybe if it isn't changing so rapidly, you build the ASIC because there's runway to that. But if there isn't runway, you may just say, I have the FPGA, I can just reprogram it for the next architecture, the next methodology.

>> Jeff: Right.

>> So it gives you that future-proofing. That capability to sustain different topologies, different architectures, different precisions. To kind of keep people going with the same piece of hardware, without having to, say, spin up a new ASIC every year.

>> Jeff: Right, right. Which, even then, is so dynamic it's probably faster than every year, the way things are going today. So the other thing you mentioned is topology, though not the same topology you mentioned before, but this whole idea of edge. Moving more and more compute, and store, and smarts to the edge. 'Cause there's just not going to be time, you mentioned autonomous vehicles, for a lot of applications to get everything back up into the cloud, back into the data center. You guys are pushing this technology not only in the data center, but progressively closer and closer to the edge.

>> Absolutely. The data center has a need. It's always going to be there, but it's getting big. The amount of data that we're trying to process every day is growing. I always say that the telecom industry started the Information Age. Well, the Information Age has done a great job of collecting a lot of data, and we have to process that. Think about autonomous vehicles: you're talking about thousands of gigabytes of data generated per day. Smart factories: exabytes of data generated a day. What are you going to do with all that? It has to be processed. We need that compute in the data center, but we have to start pushing it out to the edge. I start thinking, well, even at a show like this, I want security. So I want to do real-time weapons detection, right? Security prevention. I want to do smart city applications.
Just monitoring how traffic moves through a mall, so that I can control lighting and heating. All of these things at the edge, in the camera that's deployed on the street, in the camera that's deployed in a mall. We want to make all of those smarter, so that we can do more compute and offload the amount of data that needs to be sent back to the data center.

>> Jeff: Right.

>> As much as possible, only relevant data gets sent back.

>> No shortage of demand for compute, storage, and networking, is there?

>> No, no. It's really a heterogeneous world, right? We need all the different compute. We need all the different aspects of transmission of the data with 5G. We need disk space to store it.

>> Jeff: Right.

>> We need cooling to cool it. It's really becoming a heterogeneous world.

>> All right, well, I'm going to give you the last word. I can't believe we're in November of 2017.

>> Yeah.

>> Which is bananas. What are you working on for 2018? What are some of your priorities? If we talk a year from now, what are we going to be talking about?

>> Intel's acquired a lot of companies on AI over the past couple of years. You're seeing a lot of merging of the FPGA into that ecosystem. We've got Nervana. We've got the Movidius and Mobileye acquisitions. Saffron Technologies. The FPGA is a key piece of all of these things, because it gives you that flexibility of the hardware to extend those pieces. You're going to see a lot more stuff in the cloud, a lot more stuff with partners next year. And really enabling that edge-to-data-center compute, with things like binary neural networks and ternary neural networks. All the different next-generation topologies, to kind of keep that leading-edge flexibility that the FPGA can provide for people's products tomorrow.

>> Jeff: Exciting times.

>> Yeah, great.

>> All right, Bill Jenkins. There's a lot going on in computing. If you're not getting your computer science degree, kids, think about it again. He's Bill Jenkins. I'm Jeff Frick.
You're watching theCUBE from Super Computing 2017. Thanks for watching.

>> Thank you. (techno music)

Published Date : Nov 14 2017



AI for Good Panel - Precision Medicine - SXSW 2017 - #IntelAI - #theCUBE


 

>> Welcome to the Intel AI Lounge. Today, we're very excited to share with you the Precision Medicine panel discussion. I'll be moderating the session. My name is Kay Erin. I'm the general manager of Health and Life Sciences at Intel. And I'm excited to share with you these three panelists that we have here. First is John Madison. He is a chief information medical officer, and he is part of Kaiser Permanente. We're very excited to have you here. Thank you, John.

>> Thank you.

>> We also have Naveen Rao. He is the VP and general manager for Artificial Intelligence Solutions at Intel. He's also the former CEO of Nervana, which was acquired by Intel. And we also have Bob Rogers, who's the chief data scientist at our AI solutions group. So, why don't we get started with our questions. I'm going to ask each of the panelists to introduce themselves, as well as talk about how they got started with AI. So why don't we start with John?

>> Sure. So, can you hear me okay in the back? Can you hear? Okay, cool. So, I am a recovering evolutionary biologist and a recovering physician and a recovering geek. And I implemented the health record system for the first and largest region of Kaiser Permanente. And it's pretty obvious that most of the useful data in a health record lies in free text. So I started up a natural language processing team about a dozen years ago to be able to mine free text, so we can do things with that that you can't otherwise get out of health information. I'll give you an example. I read an article online from the New England Journal of Medicine about four years ago that said over half of all people who have had their spleen taken out were not properly vaccinated for a common form of pneumonia, and when your spleen's missing, you must have that vaccine or you die a very sudden death from sepsis. In fact, our medical director in Northern California's father died of that exact same scenario.
So, when I read the article, I went to my structured data analytics team and to my natural language processing team and said, please show me everybody who has had their spleen taken out and hasn't been appropriately vaccinated. We ran through about 20 million records in about three hours with the NLP team, and it took about three weeks with the structured data analytics team. That sounds counterintuitive, but it actually happened that way. And it's not a competition for time only; it's a competition for quality and sensitivity and specificity. So we were able to identify all of our members who had their spleen taken out and should have had a pneumococcal vaccine. We vaccinated them, and there are a number of people alive today who otherwise would've died absent that capability. People don't commonly associate natural language processing with machine learning, but in fact, natural language processing relies heavily on it and is the first really, highly successful example of machine learning. So we've done dozens of similar projects, mining free-text data in millions of records very efficiently, very effectively. It has really helped advance the quality of care and reduce the cost of care. It's a natural step forward to go into the world of personalized medicine with the arrival of a 100-dollar genome, which is actually what it costs today to do a full genome sequence. Microbiomics, that is, the ecosystem of bacteria that are in every organ of the body. And we know now that there is a profound influence of what's in our gut on how we metabolize drugs and what diseases we get. You can tell in a five-year-old whether or not they were born by vaginal delivery or C-section by virtue of the bacteria in the gut five years later.
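The splenectomy query John describes can be caricatured in a few lines. The sketch below is only a toy keyword scan over invented notes; production clinical NLP needs negation detection, synonym handling, and context that bare regular expressions cannot provide:

```python
import re

# Toy records: (patient_id, free-text note). All notes are invented.
notes = [
    (1, "s/p splenectomy 2009; pneumococcal vaccine administered 2010"),
    (2, "history of splenectomy after trauma; no vaccination on record"),
    (3, "appendectomy 2015; otherwise unremarkable"),
]

SPLENECTOMY = re.compile(r"\bsplenectomy\b", re.IGNORECASE)
VACCINATED = re.compile(r"\bpneumococcal vaccine\b", re.IGNORECASE)

def needs_outreach(records):
    """Return IDs of patients whose notes mention a splenectomy but no vaccine."""
    return [pid for pid, text in records
            if SPLENECTOMY.search(text) and not VACCINATED.search(text)]

print(needs_outreach(notes))  # -> [2]
```

Against 20 million real records, the hard part is exactly what this toy skips: "vaccine declined" must not count as vaccinated, and dozens of phrasings must map to the same concept, which is why a trained NLP pipeline outperforms both regexes and structured-field queries.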
So if you look at the complexity of the data that exists in the genome, in the microbiome, in the health record with free text, and you look at all the other sources of data, like the streaming data from my wearable monitor (I'm part of a research study on Precision Medicine out of Stanford), there is a vast amount of disparate data, not to mention all the imaging, that can collectively produce much more useful information to advance our understanding of science, and to advance our understanding of every individual. And then we can do the mash-up of a much broader range of science in health care with a much deeper sense of data from an individual. To do that with structured questions and structured data is very yesterday. The only way we're going to be able to disambiguate those data, operate on those data in concert, and generate real, useful answers from the broad array of data types and the massive quantity of data, is to let loose machine learning on all of those data substrates. So my team is moving down that pathway, and we're very excited about the future prospects for doing that.

>> Yeah, great. I think that's actually some of the things I'm very excited about in the future with some of the technologies we're developing. My background: I started being fascinated with computation in biological forms when I was nine. Reading and watching sci-fi, I was kind of a big dork, which I pretty much still am; I haven't really changed a whole lot. Just basically seeing that machines really aren't all that different from biological entities, right? We are biological machines, and understanding how a computer works, how we engineer those things, and trying to pull in concepts that learn from biology has always been a fascination of mine. As an undergrad, I was in the EE/CS world. Even then, I did some research projects around that.
I worked in the industry for about 10 years designing chips, microprocessors, and various kinds of ASICs, and then actually went back to school, quit my job, and got a Ph.D. in computational neuroscience, to specifically understand the state of the art. What do we really understand about the brain? And are there concepts that we can take and bring back? The inspiration has always been: we watch birds fly around, we want to figure out how to make something that flies, we extract those principles and then build a plane. We don't necessarily want to build a bird. And so Nervana really was the combination of all those experiences, bringing it together, trying to push computation in a new direction. Now, as part of Intel, we can really add a lot of fuel to that fire. I'm super excited to be part of Intel in that the technologies we were developing can really proliferate and be applied to health care, to the Internet, to every facet of our lives. Some of the examples that John mentioned are extremely exciting right now, and these are things we can do today. And the generality of these solutions is just really going to hit every part of health care. I mean, from a personal viewpoint, my whole family are MDs. I'm sort of the black sheep of the family; I don't have an MD. And it's always been kind of funny to me that knowledge is concentrated in a few individuals. Like, you have a rare tumor or something like that, you need the guy who knows how to read this MRI. Why? Why is it like that? Can't we encapsulate that knowledge into a computer or into an algorithm, and democratize it? The reason we couldn't do it is we just didn't know how. And now we're really getting to a point where we know how to do that. So I want that capability to go to everybody. It'll bring the cost of health care down. It'll make all of us healthier. That affects everything about our society. So that's really what's exciting about it to me.
>> That's great. So, as you heard, I'm Bob Rogers. I'm chief data scientist for analytics and artificial intelligence solutions at Intel. My mission is to put powerful analytics in the hands of every decision maker, and when I think about Precision Medicine, decision makers are not just doctors and surgeons and nurses, but also case managers and care coordinators and, probably most of all, patients. So the mission is really to put powerful analytics and AI capabilities in the hands of everyone in health care. It's a very complex world, and we need tools to help us navigate it. So my background: I started with a Ph.D. in physics, and I was computer modeling stuff falling into supermassive black holes. And there's a lot of applications for that in the real world. No, I'm kidding. (laughter)

>> John: There will be, I'm sure.

>> Yeah, one of these days. Soon as we have time travel. Okay so, around 1991, I was working on my postdoctoral research, and I heard about neural networks, these things that could compute the way the brain computes. So I started doing some research on that and wrote some papers, and actually, it was an interesting story. The problem that we solved that got me really excited about neural networks, which have become deep learning: my office mate, a young guy who was about to go off to grad school, would come in every morning saying, "I hate my project." Finally, after two weeks: what's your project? What's the problem? It turns out he had to circle little fuzzy spots on images from a telescope. They were looking for the interesting things in a sky survey, and he had to circle them and write down their coordinates all summer. Anyone want to volunteer to do that? No? Yeah, he was very unhappy. So we took the first two weeks of data that he created doing his work by hand, and we trained an artificial neural network to do his summer project and finished it in about eight hours of computing.
(crowd laughs) And so he was like yeah, this is amazing. I'm so happy. And we wrote a paper. I was the first author of course, because I was the senior guy at age 24. And he was second author. His first paper ever. He was very, very excited. So we have to fast forward about 20 years. His name popped up on the Internet. And so it caught my attention. He had just won the Nobel Prize in physics. (laughter) So that's where artificial intelligence will get you. (laughter) So thanks Naveen. Fast forwarding, I also developed some time series forecasting capabilities that allowed me to create a hedge fund that I ran for 12 years. After that, I got into health care, which really is the center of my passion: figuring out how to get all the data from all those siloed sources, put it into the cloud in a secure way, and analyze it so you can actually understand those cases that John was just talking about. How do you know that that person had had a splenectomy and that they needed to get that Pneumovax? You need to be able to search all the data, so we used AI, natural language processing, machine learning, to do that, and then two years ago, I was lucky enough to join Intel and, in the intervening time, people like Naveen actually thawed the AI winter and we're really in a spring of amazing opportunities with AI, not just in health care but everywhere, but of course, the health care applications are incredibly life-saving and empowering, so, excited to be here on this stage with you guys. >> I just want to cue off of your comment about the role of physics in AI and health care. So the field of microbiomics that I referred to earlier, bacteria in our gut. There's more bacteria in our gut than there are cells in our body. There's 100 times more DNA in that bacteria than there is in the human genome. And we're now discovering a couple hundred species of bacteria a year that have never been identified under a microscope, just by their DNA. 
So it turns out the person who really catapulted the study and the science of microbiomics forward was an astrophysicist who did his Ph.D. in Stephen Hawking's lab on the collision of black holes, and then subsequently, put the other team in a virtual reality, and he developed the first supercomputing center, and so how did he get an interest in microbiomics? He had the capacity to do high performance computing and the kind of advanced analytics that are required to look at 100 times the volume of the 3.2 billion base pairs of the human genome that are represented in the bacteria in our gut, and that has unleashed the whole science of microbiomics, which is going to really turn a lot of our assumptions of health and health care upside down. >> That's great, I mean, that's really transformational. So a lot of data. So I just wanted to let the audience know that we want to make this an interactive session, so I'll be asking for questions in a little bit, but I will start off with one question so that you can think about it. So I wanted to ask you, it looks like you've been thinking a lot about AI over the years. And I wanted to understand, even though AI's just really starting in health care, what are some of the new trends or the changes that you've seen in the last few years that'll impact how AI's being used going forward? >> So I'll start off. There was a paper published by a guy by the name of Tegmark at Harvard last summer that, for the first time, explained why neural networks are efficient beyond what any mathematical model would predict. And the title of the paper's fun. It's called Deep Learning Versus Cheap Learning. So there were two sort of punchlines of the paper. One is that the reason mathematics doesn't explain the efficiency of neural networks is because there's a higher order of mathematics called physics. And the physics of the underlying data structures determines how efficiently you can mine those data using machine learning tools. 
Much more so than any mathematical modeling. And so the second takeaway from that paper is that the substrate of the data that you're operating on and the natural physics of those data have inherent levels of complexity that determine whether or not a 12-layer neural net will get you where you want to go really fast, because when you do the modeling, for those math geeks in the audience, it's a factorial. So if there's 12 layers, there's 12 factorial permutations of different ways you could sequence the learning through those data. When you have 140 layers of a neural net, it's a much, much, much bigger number of permutations, and so you end up being hardware-bound. And so, what Max Tegmark basically said is you can determine whether to do deep learning or cheap learning based upon the underlying physics of the data substrates you're operating on, and have a good insight into how to optimize your hardware and software approach to that problem. >> So another way to put that is that neural networks represent the world in the way the world is sort of built. >> Exactly. >> It's kind of hierarchical. It's funny because, sort of in retrospect, it's like oh yeah, that kind of makes sense. But when you're thinking about it mathematically, we're like well, anything... A neural net can represent any mathematical function, therefore, it's fully general. And that's the way we used to look at it, right? So now we're saying, well actually, decomposing the world into different types of features that are layered upon each other is actually a much more efficient, compact representation of the world, right? I think this is actually, precisely the point of kind of what you're getting at. What's really exciting now is that what we were doing before was sort of building these bespoke solutions for different kinds of data. NLP, natural language processing. 
There's a whole field, 25-plus years of people devoted to figuring out features, figuring out what structures make sense in this particular context. Those didn't carry over at all to computer vision. Didn't carry over at all to time series analysis. Now, with neural networks, we've seen it at Nervana, and now as part of Intel, solving customers' problems. We apply a very similar set of techniques across all these different types of data domains and solve them. All data in the real world seems to be hierarchical. You can decompose it into this hierarchy. And it works really well. Our brains are actually general structures. As a neuroscientist, you can look at different parts of your brain and there are differences. Something that takes in visual information, versus auditory information, is slightly different, but they're much more similar than they are different. So there is something invariant, something very common between all of these different modalities, and we're starting to learn that. And this is extremely exciting to me, trying to understand the biological machine that is a computer, right? We're figuring it out, right? >> One of the really fun things that Ray Kurzweil likes to talk about is, and it falls in the genre of biomimicry, how we actually replicate biologic evolution in our technical solutions, so if you look at, and we're beginning to understand more and more, how real neural nets work in our cerebral cortex. And it's sort of a pyramid structure, so that the first pass of a broad base of analytics gets constrained to the next pass, gets constrained to the next pass, which is how information is processed in the brain. 
So we're discovering increasingly that what we've been evolving towards, in terms of architectures of neural nets, is approximating the architecture of the human cortex, and the more we understand the human cortex, the more insight we get into how to optimize neural nets, so when you think about it, with millions of years of evolution of how the cortex is structured, it shouldn't be a surprise that the optimization protocols, if you will, in our genetic code are profoundly efficient in how they operate. So there's a real role for looking at biologic evolutionary solutions, vis-a-vis technical solutions, and there's a friend of mine who worked with George Church at Harvard and actually published a book on biomimicry, and they wrote the book completely in DNA, so if all of you have your home DNA decoder, you can actually read the book on your DNA reader, just kidding. >> There's actually a start up I just saw in the-- >> Read-Write DNA, yeah. >> Actually it's a... He writes something. What was it? (response from crowd member) Yeah, they're basically encoding information in DNA as a storage medium. (laughter) The company, right? >> Yeah, that same friend of mine who coauthored that biomimicry book in DNA also did the estimate of the density of information storage. So a cubic centimeter of DNA can store an exabyte of data. I mean that's mind blowing. >> Naveen: Highly done soon. >> Yeah, that's amazing. Also you hit upon a really important point there, that one of the things that's changed is... Well, there are two major things that have changed in my perception from let's say five to 10 years ago, when we were using machine learning. You could use data to train models and make predictions to understand complex phenomena. But they had limited utility, and the challenge was that if I'm trying to build on these things, I had to do a lot of work up front. It was called feature engineering. 
I had to do a lot of work to figure out what are the key attributes of that data? What are the 10 or 20 or 100 pieces of information that I should pull out of the data to feed to the model, so that the model can turn it into a predictive machine. And so, what's really exciting about the new generation of machine learning technology, and particularly deep learning, is that it can actually learn those features from example data without you having to do any preprogramming. That's why Naveen is saying you can take the same sort of overall approach and apply it to a bunch of different problems. Because you're not having to fine tune those features. So at the end of the day, the two things that have changed to really enable this evolution are access to more data, and I'd be curious to hear from you where you're seeing data come from, what are the strategies around that. So access to data, and I'm talking millions of examples. So 10,000 examples most times isn't going to cut it. But millions of examples will do it. And then, the other piece is the computing capability to actually take millions of examples and optimize this algorithm in a single lifetime. I mean, back in '91, when I started, we literally would have thousands of examples and it would take overnight to run the thing. So now in the world of millions, and you're putting together all of these combinations, the computing has changed a lot. I know you've made some revolutionary advances in that. But I'm curious about the data. Where are you seeing interesting sources of data for analytics? >> So I do some work in the genomics space, and there are more viable permutations of the human genome than there are people who have ever walked the face of the earth. And the polygenic determination of phenotypic expression, what our genome does to us in our physical experience of health and disease, is determined by many, many genes and the interaction of many, many genes and how they are up and down regulated. 
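Bob's contrast above, hand-engineered features versus features learned from examples, can be shown with a minimal numpy sketch (purely illustrative; none of this is Intel or Nervana code). A linear model on raw inputs fails on an XOR-like rule, succeeds once an expert adds the right feature by hand, and a small two-layer network learns an equivalent feature on its own:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task where the predictive "feature" is the product x1*x2: an XOR-like
# rule that no linear model on the raw inputs can capture.
X = rng.uniform(-1, 1, (500, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_linear(F, steps=2000, lr=1.0):
    """Logistic regression by full-batch gradient descent on feature matrix F."""
    w = np.zeros(F.shape[1])
    b = 0.0
    for _ in range(steps):
        g = (sigmoid(F @ w + b) - y) / len(y)
        w -= lr * F.T @ g
        b -= lr * g.sum()
    return ((sigmoid(F @ w + b) > 0.5) == (y > 0.5)).mean()

# Old workflow: an expert hand-engineers the x1*x2 feature, then fits a model.
acc_raw = train_linear(X)                                     # raw inputs only
acc_hand = train_linear(np.column_stack([X, X[:, 0] * X[:, 1]]))

# New workflow: a small two-layer net learns an equivalent feature from examples.
W1 = rng.normal(0, 1, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, 16); b2 = 0.0
for _ in range(4000):
    h = np.tanh(X @ W1 + b1)                 # hidden layer: the learned features
    p = sigmoid(h @ W2 + b2)
    g = (p - y) / len(y)                     # cross-entropy gradient at the output
    gh = np.outer(g, W2) * (1.0 - h ** 2)    # backprop through tanh
    W2 -= 1.0 * h.T @ g; b2 -= 1.0 * g.sum()
    W1 -= 1.0 * X.T @ gh; b1 -= 1.0 * gh.sum(0)
acc_net = ((p > 0.5) == (y > 0.5)).mean()

print(acc_raw, acc_hand, acc_net)
```

The point is the last model: nothing about the x1*x2 interaction was preprogrammed; the hidden layer recovered it from labeled examples alone, which is why the same recipe transfers across data domains.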
And the complexity of disambiguating which 27 genes are affecting your diabetes, and how they are up and down regulated by different interventions, is going to be different than his. It's going to be different than his. And we already know that there are four or five distinct genetic subtypes of type II diabetes. So physicians still think there's one disease called type II diabetes. There's actually at least four or five genetic variants that have been identified. And so, when you start thinking about disambiguating, particularly when we still don't know what 95 percent of DNA does, what actually is the underlying cause, it will require this massive capability of developing these feature vectors, sometimes intuiting it, if you will, from the data itself. And other times, taking what's known knowledge to develop some of those feature vectors, and being able to really understand the interaction of the genome and the microbiome and the phenotypic data. So the complexity is high, and because the variation complexity is high, you do need these massive numbers. Now I'm going to make a very personal pitch here. So forgive me, but if any of you have any role in policy at all, let me tell you what's happening right now. The Genetic Information Nondiscrimination Act, so-called GINA, written by a friend of mine, passed a number of years ago, and says that no one can be discriminated against for health insurance based upon their genomic information. That's cool. That should allow all of you to feel comfortable donating your DNA to science, right? Wrong. You are 100% unprotected from discrimination for life insurance, long term care and disability. And it's being practiced legally today, and there's legislation in the House, in markup right now, to completely undermine the existing GINA legislation and say that whenever there's another applicable statute like HIPAA, GINA is irrelevant, and none of the fines and penalties are applicable at all. 
So we need a ton of data to be able to operate on. We will not be getting a ton of data to operate on until we have the kind of protection we need to tell people: you can trust us. You can give us your data, you will not be subject to discrimination. And that is not the case today. And it's being further undermined. So I want to make a plea to any of you that have any policy influence to go after that, because we need this data to help the understanding of human health and disease, and we're not going to get it when people look behind the curtain and see that discrimination is occurring today based upon genetic information. >> Well, I don't like the idea of being discriminated against based on my DNA. Especially given how little we actually know. There's so much complexity in how these things unfold in our own bodies that I think anything that's being done is probably childishly immature and oversimplifying. So it's pretty rough. >> I guess the translation here is that we're all unique. It's not just a Disney movie. (laughter) We really are. And I think one of the strengths of these new techniques, kind of going back to the original point, is that they go across different data types. It will actually allow us to learn more about the uniqueness of the individual. It's not going to be just from one data source. We're collecting data from many different modalities. We're collecting behavioral data from wearables. We're collecting things from scans, from blood tests, from genome, from many different sources. The ability to integrate those into a unified picture, that's the important thing that we're getting toward now. That's what I think is going to be super exciting here. Think about it, right. I can tell you to visualize a coin, right? You can visualize a coin. Not only do you visualize it. You also know what it feels like. You know how heavy it is. You have a mental model of that from many different perspectives. 
And if I take away one of those senses, you can still identify the coin, right? If I tell you to put your hand in your pocket and pick out a coin, you probably can do that with 100% reliability. And that's because we have this generalized capability to build a model of something in the world. And that's what we need to do for individuals: actually take all these different data sources and come up with a model for an individual, and you can actually then say what drug works best on this, what treatment works best on this. It's going to get better with time. It's not going to be perfect, because this is what a doctor does, right? A doctor who's very experienced, you're a practicing physician, right? Back me up here. That's what you're doing. You basically have some categories. You're taking information from the patient when you talk with them, and you're building a mental model. And you apply what you know can work on that patient, right? >> I don't have clinic hours anymore, but I do take care of many friends and family. (laughter) >> You used to, you used to. >> I practiced for many years before I became a full-time geek. >> I thought you were a recovering geek. >> I am. (laughter) I do more policy now. >> He's off the wagon. >> I just want to take a moment and see if there's anyone from the audience who would like to ask, oh. Go ahead. >> We've got a mic here, hang on one second. >> I have tons and tons of questions. (crosstalk) Yes, so first of all, the microbiome and the genome are really complex. You already hit on that. Yet most of the studies we do are small scale and we have difficulty repeating them from study to study. How are we going to reconcile all that, and what are some of the technical hurdles to get to the vision that you want? >> So primarily, it's been the cost of sequencing. Up until a year ago, it was $1,000, true cost. Now it's $100, true cost. And so that barrier is going to enable fairly pervasive testing. 
It's not a real competitive market because there's one sequencer that is way ahead of everybody else. So the price is not $100 yet. The cost is below $100. So as soon as there's competition to drive the cost down, and hopefully, as soon as we all have the protection we need against discrimination, as I mentioned earlier, then we will have large enough sample sizes. And so, it is our expectation that we will be able to pool data from local sources. I chair the e-health work group at the Global Alliance for Genomics and Health, which is working on this very issue. And rather than pooling all the data into a single, common repository, the strategy, and we're developing our five-year plan in a month in London, but the goal is to have a federation of essentially credentialed data enclaves. That's a formal method. HHS already does that, so you can get credentialed to search all the data that Medicare has on people that's been deidentified according to HIPAA. So we want to provide the same kind of service, with appropriate consent, at an international scale. And there's a lot of nations that are talking very much about data nationality, so that you can't export data. So this approach of a federated model to get at data from all the countries is important. The other thing is blockchain technology is going to be very profoundly useful in this context. So David Haussler of UC Santa Cruz is right now working on a protocol using an open blockchain, public ledger, where you can put out... So for any typical cancer, you may have a half dozen what are called somatic variants. Cancer is a genetic disease, so what has mutated to cause it to behave like a cancer? And if we look at those biologically active somatic variants, publish them on a blockchain that's public, there's not enough data there to reidentify the patient. 
But if I'm a physician treating a woman with breast cancer, rather than say what's the protocol for treating a 50-year-old woman with this cell type of cancer, I can say show me all the people in the world who have had this cancer at the age of 50, with these exact six somatic variants. Find the 200 people worldwide with that. Ask them for consent through a secondary mechanism to donate everything about their medical record, pool that information for the cohort of 200 that exactly resembles the one sitting in front of me, and find out, of the 200 ways they were treated, what got the best results. And so, that's the kind of future where a distributed, federated architecture will allow us to query and obtain a very, very relevant cohort, so we can basically be treating patients like mine, sitting right in front of me. Same thing applies for establishing research cohorts. There's some very exciting stuff at the convergence of big data analytics, machine learning, and blockchain. >> And this is an area that I'm really excited about and I think we're excited about generally at Intel. We actually have something called the Collaborative Cancer Cloud, which is this kind of federated model. We have three different academic research centers. Each of them has a very sizable and valuable collection of genomic data with phenotypic annotations. So you know, pancreatic cancer, colon cancer, et cetera, and we've actually built a secure computing architecture that can allow a person who's given the right permissions by those organizations to ask a specific question of specific data without ever sharing the data. So the idea is my data's really important to me. It's valuable. I want us to be able to do a study that gets the numbers from the 20 pancreatic cancer patients in my cohort up to the 80 that we have in the whole group. But I can't do that if I'm going to just spill my data all over the world. And there are HIPAA and compliance reasons for that. 
There are business reasons for that. So what we've built at Intel is this platform that allows you to do different kinds of queries on this genetic data, and reach out to these different sources without sharing it. And then, the work that I'm really involved in right now and that I'm extremely excited about... This also touches on something that both of you said: it's not sufficient to just get the genome sequences. You also have to have the phenotypic data. You have to know what cancer they've had. You have to know that they've been treated with this drug and they've survived for three months, or that they had this side effect. That clinical data also needs to be put together. It's owned by other organizations, right? Other hospitals. So the broader generalization of the Collaborative Cancer Cloud is something we call the data exchange. And it's a misnomer in the sense that we're not actually exchanging data. We're doing analytics on aggregated data sets without sharing it. But it really opens up a world where we can have huge populations and big enough amounts of data to actually train these models and draw the thread in. Of course, that really then hits home for the techniques that Nervana is bringing to the table, and of course-- >> Stanford's one of your academic medical centers? >> Not for that Collaborative Cancer Cloud. >> The reason I mentioned Stanford is because the reason I'm wearing this Fitbit is because I'm a research subject at Mike Snyder's, the chair of genetics at Stanford, iPOP, integrative personal omics profiling. So I was fully sequenced five years ago and I get four full microbiomes. My gut, my mouth, my nose, my ears. Every three months, and I've done that for four years now. And about a pint of blood. And so, to your question of the density of data, a lot of the problem with applying these techniques to health care data is that it's basically a sparse matrix, and there's a lot of discontinuities in what you can find and operate on. 
So what Mike is doing with the iPOP study is much the same as you described. Creating a highly dense longitudinal set of data that will help us mitigate the sparse matrix problem. (low volume response from audience member) Pardon me. >> What's that? (low volume response) (laughter) >> Right, okay. >> John: Lost the stool sample. That's got to be a new one I've heard now. >> Okay, well, thank you so much. That was a great question. So I'm going to repeat this and ask if there's another question. You want to go ahead? >> Hi, thanks. So I'm a journalist and I report a lot on these neural networks: a system that's better at reading mammograms than your human radiologists, or a system that's better at predicting which patients in the ICU will get sepsis. These sort of fascinating academic studies that I don't really see being translated very quickly into actual hospitals or clinical practice. Seems like a lot of the problems are regulatory, or liability, or human factors, but how do you get past that and really make this stuff practical? >> I think there's a few things that we can do there, and I think the proof points of the technology are really important to start with in this specific space. In other places, sometimes, you can start with other things. But here, there's a real confidence problem when it comes to health care, and for good reason. We have doctors trained for many, many years. School and then residencies and other kinds of training. Because we are really, really conservative with health care. So we need to make sure that the technology's well beyond just the paper, right? These papers are proof points. They get people interested. They even fuel entire grant cycles sometimes. And that's what we need to happen. It's just an inherent problem, it's going to take a while to get those things to a point where it's like, well, I really do trust what this is saying. And I really think it's okay to now start integrating that into our standard of care. 
I think that's where you're seeing it. It's frustrating for all of us, believe me. I mean, like I said, personally one of the biggest impacts I want to have, when I go to my grave, is that we used machine learning to improve health care. We really do feel that way. But it's just not something we can do very quickly, and as a business person, I don't actually look at those use cases right away because I know the cycle is just going to be longer. >> So to your point, the FDA, for about four years now, has understood that the process that has been given to them by their board of directors, otherwise known as Congress, is broken. And so they've been very actively seeking new models of regulation, and what's really forcing their hand is regulation of devices and software because, in many cases, there are black box aspects of that, and there's a black box aspect to machine learning. Intel and others are making inroads into providing some sort of traceability and transparency into what happens in that black box, rather than say, overall we get better results but once in a while we kill somebody. Right? So there is progress being made on that front. And there's a concept that I like to use. Everyone knows Ray Kurzweil's book The Singularity Is Near? Well, I like to think that diadarity is near. And diadarity is where you have human transparency into what goes on in the black box, and so maybe Bob, you want to speak a little bit about... You mentioned, in a prior discussion, that there's some work going on at Intel there. >> Yeah, absolutely. So we're working with a number of groups to really build tools that allow us... In fact Naveen probably can talk in even more detail than I can, but there are tools that allow us to actually interrogate machine learning and deep learning systems to understand not only how they respond to a wide variety of situations, but also where there are biases. 
I mean, one of the things that's shocking is that if you look at the clinical studies that our drug safety rules are based on, 50-year-old white guys are the peak of that distribution, which I don't see any problem with, but some of you out there might not like that if you're taking a drug. So yeah, we want to understand what are the biases in the data, right? And so, there's some new technologies. There's actually some very interesting data-generative technologies. And this is something I'm also curious what Naveen has to say about: that you can generate from small sets of observed data much broader sets of varied data that help probe and fill in your training for some of these systems that are very data dependent. So that takes us to a place where we're going to start to see deep learning systems generating data to train other deep learning systems. And they start to sort of go back and forth, and you start to have some very nice ways to, at least, expose the weaknesses of these underlying technologies. >> And that feeds back to your question about regulatory oversight of this. And there's the fascinating, but little known, origin of why very few women are in clinical studies. Thalidomide causes birth defects. So rather than say pregnant women can't be enrolled in drug trials, they said any woman who is at risk of getting pregnant cannot be enrolled. So there was actually a scientifically meritorious argument back in the day, when they really didn't know what was going to happen post-thalidomide. So it turns out that the adverse, unintended consequence of that decision was we don't have data on women, and we know in certain drugs, like Xanax, that the metabolism is so much slower that the typical dose of Xanax for women should be less than half of that for men. And a lot of women have had very serious adverse effects by virtue of the fact that they weren't studied. So the point I want to illustrate with that is that regulatory cycles... 
So people have known for a long time that that was a bad way of doing regulations. It should be changed. It's only recently getting changed in any meaningful way. So regulatory cycles and legislative cycles are incredibly slow. The rate of growth in technology is exponential. And so there's an impedance mismatch between the cycle time for regulation and the cycle time for innovation. And what we need to do... I'm working with the FDA. I've done four workshops with them on this very issue. They recognize that they need to completely revitalize their process. They're very interested in doing it. They're not resisting it. People think, oh, they're bad, the FDA, they're resisting. Trust me, there's nobody on the planet who wants to revise these review processes more than the FDA itself. And so they're looking at models, and what I recommended is global crowdsourcing, and the FDA could shift from a regulatory role to one of doing two things: assuring the people who do their reviews are competent, and assuring that their conflicts of interest are managed, because if you don't have a conflict of interest in this very interconnected space, you probably don't know enough to be a reviewer. So there has to be a way to manage the conflicts of interest, and I think those are some of the key points that the FDA is wrestling with, because there's type one and type two errors. If you underregulate, you end up with another thalidomide and people born without fingers. If you overregulate, you prevent life-saving drugs from coming to market. So striking that balance across all these different technologies is extraordinarily difficult. If it were easy, the FDA would've done it four years ago. It's very complicated. >> Jumping on that question, so all three of you are in some ways entrepreneurs, right? Within your organization or started companies. 
And I think it would be good to talk a little bit about the business opportunity here, where there's a huge ecosystem in health care, different segments, biotech, pharma, insurance payers, etc. Where do you see the ripe opportunity or industry, ready to really take this on and to make AI the competitive advantage? >> Well, the last question also included: why aren't you using the results of the sepsis detection? We do. There were six or seven published ways of doing it. We did our own data, looked at it, we found a way that was superior to all the published methods, and we apply that today, so we are actually using that technology to change clinical outcomes. As far as where the opportunities are... So it's interesting. Because if you look at what's going to be here in three years, we're not going to be using those big data analytics models for sepsis that we are deploying today, because we're just going to be getting a tiny aliquot of blood, looking for the DNA or RNA of any potential infection, and we won't have to infer that there's a bacterial infection from all these other ancillary, secondary phenomena. We'll see if the DNA's in the blood. So things are changing so fast that the opportunities that people need to look for are what are generalizable and sustainable kinds of wins that are going to lead to a revenue cycle that justifies venture capital investing. So there's a lot of interesting opportunities in the space. But I think some of the biggest opportunities relate to what Bob has talked about in bringing many different disparate data sources together and really looking for things that are not comprehensible in the human brain or in traditional analytic models. >> I think we also got to look a little bit beyond direct care. We're talking about policy and how we set up standards, these kinds of things. That's one area. That's going to drive innovation forward. I completely agree with that. Direct care is one piece. 
How do we scale out many of the knowledge kinds of things that are embedded into one person's head and get them out to the world, democratize that. Then there's also development. The underlying technologies of medicine, right? Pharmaceuticals. The traditional way that pharmaceuticals are developed is actually kind of funny, right? A lot of it was started just by chance. Penicillin, a very famous story, right? It's not that different today, unfortunately. It's conceptually very similar. Now we've got more science behind it. We talk about domains and interactions, these kinds of things, but fundamentally, the problem is what we in computer science call NP-hard; it's too difficult to model. You can't solve it analytically. And this is true for all these kinds of natural sorts of problems, by the way. And so there's a whole field around this, molecular dynamics and modeling these sorts of things, that is actually being driven forward by these AI techniques. Because it turns out, our brain doesn't do magic. It doesn't actually solve these problems. It approximates them very well. And experience allows you to approximate them better and better. Actually, it goes a little bit to what you were saying before. It's like simulations and forming your own networks and training off each other. There are these emergent dynamics. You can simulate steps of physics. And you come up with a system that's much too complicated to ever solve. Three pool balls on a table is one such system. It seems pretty simple. You know how to model that, but it actually turns out you can't predict where a ball's going to be once you inject some energy into that table. So something that simple is already too complex. So neural network techniques actually allow us to start making those tractable, these NP-hard problems. And things like molecular dynamics and actually understanding how different medications and genetics will interact with each other is something we're seeing today.
And so I think there's a huge opportunity there. We've actually worked with customers in this space. And I'm seeing it. Roche is acquiring a few different companies in this space. They really want to drive it forward, using big data to drive drug development. It's kind of counterintuitive. I never would've thought it had I not seen it myself. >> And there's a big related challenge. Because in personalized medicine, there's smaller and smaller cohorts of people who will benefit from a drug that still takes two billion dollars on average to develop. That is unsustainable. So there's an economic imperative of overcoming the cost and the cycle time for drug development. >> I want to take a go at this question a little bit differently, thinking about not so much where are the industry segments that can benefit from AI, but what are the kinds of applications that I think are most impactful. So if this is what a skilled surgeon needs to know at a particular time to care properly for a patient, this is where most, this area here, is where most surgeons are. They are close to the maximum knowledge and ability to assimilate as they can be. So it's possible to build complex AI that can pick up on that one little thing and move them up to here. But it's not a gigantic accelerator, amplifier of their capability. But think about other actors in health care. I mentioned a couple of them earlier. Who do you think the least trained actor in health care is? >> John: Patients. >> Yes, the patients. The patients are really very poorly trained, including me. I'm abysmal at figuring out who to call and where to go. >> Naveen: You know as much as the doctor, right? (laughing) >> Yeah, that's right. >> My doctor friends always hate that. Know your diagnosis, right? >> Yeah, Dr. Google knows.
So the opportunities that I see that are really, really exciting are when you take an AI agent, like sometimes I like to call it a contextually intelligent agent, or a CIA, and apply it to a problem where a patient has a complex future ahead of them that they need help navigating. And you use the AI to help them work through it. Post-operative. You've got PT. You've got drugs. You've got to be looking for side effects. An agent can actually help you navigate. It's like your own personal GPS for health care. So it's giving you the information that you need about you for your care. That's my definition of Precision Medicine. And it can include genomics, of course. But it's much bigger. It's that broader picture, and I think that a sort of agent way of thinking about things and filling in the gaps where there's less training and more opportunity, is very exciting. >> Great start up idea right there by the way. >> Oh yes, right. We'll meet you all out back for the next start up. >> I had a conversation with the head of the American Association of Medical Specialties just a couple of days ago. And what she was saying, and I'm aware of this phenomenon, but all of the medical specialists are saying, you're killing us with these stupid board recertification trivia tests that you're giving us. So if you're a cardiologist, you have to remember something that happens in one in 10 million people, right? And they're saying that's irrelevant anymore, because we've got advanced decision support coming. We have these kinds of analytics coming. Precisely what you're saying. So it's human augmentation of decision support that is coming at blazing speed towards health care. So in that context, it's much more important that you have a basic foundation, you know how to think, you know how to learn, and you know where to look. So we're going to be human-augmented learning systems much more so than in the past. And so the whole recertification process is being revised right now.
(inaudible audience member speaking) Speak up, yeah. (person speaking) >> What makes it fathomable is that you can-- (audience member interjects inaudibly) >> Sure. She was saying that our brain is really complex and large and even our brains don't know how our brains work, so... are there ways to-- >> What hope do we have kind of thing? (laughter) >> It's a metaphysical question. >> It's turtles all the way down, exactly. It's a great quote. I mean basically, you can decompose every system. Every complicated system can be decomposed into simpler, emergent properties. You lose something perhaps with each of those, but you get enough to actually understand most of the behavior. And that's really how we understand the world. And that's what we've learned in the last few years, what neural network techniques can allow us to do. And that's why our brain can understand our brain. (laughing) >> Yeah, I'd recommend reading Chris Farley's last book, because he addresses that issue in there very elegantly. >> Yeah, we're seeing some really interesting technologies emerging right now where neural network systems are actually connecting other neural network systems in networks. You can see some very compelling behavior, because one of the ways I like to distinguish AI versus traditional analytics is: we used to have question-answering systems. I used to query a database and create a report to find out how many widgets I sold. Then I started using regression or machine learning to classify complex situations, from this is one of these and that's one of those. And then as we've moved more recently, we've got these AI-like capabilities, like being able to recognize that there's a kitty in the photograph. But if you think about it, if I were to show you a photograph that happened to have a cat in it, and I said, what's the answer, you'd look at me like, what are you talking about? I have to know the question.
So where we're cresting with these connected sets of neural systems, and with AI in general, is that the systems are starting to be able to, from the context, understand what the question is. Why would I be asking about this picture? I'm a marketing guy, and I'm curious about what Legos are in the thing or what kind of cat it is. So it's being able to ask a question, and then take these question-answering systems, and actually apply them. It's this ability to understand context and ask questions that we're starting to see emerge from these more complex hierarchical neural systems. >> There's a person dying to ask a question. >> Sorry. You have hit on several different topics that all coalesce together. You mentioned personalized models. You mentioned AI agents that could help you as you're going through a transitionary period. You mentioned data sources, especially across long time periods. Who today has access to enough data to make meaningful progress on that, not just when you're dealing with an issue, but day-to-day improvement of your life and your health? >> Go ahead, great question. >> That was a great question. And I don't think we have a good answer to it. (laughter) I'm sure John does. Well, I think every large healthcare organization and various healthcare consortiums are working very hard to achieve that goal. The problem remains in creating semantic interoperability. So I spent a lot of my career working on semantic interoperability. And the problem is that if you don't have well-defined, or self-defined data, and if you don't have well-defined and documented metadata, and you start operating on it, it's real easy to reach false conclusions, and I can give you a classic example. It's well known, with hundreds of studies looking at when you give an antibiotic before surgery and how effective it is in preventing a post-op infection. Simple question, right?
So most of the literature done prospectively was done in institutions where they had small sample sizes. So if you pool that, you get a little bit more noise, but you get a more confirming answer. What was done at a very large, not my own, but a very large institution... I won't name them for obvious reasons, but they pooled lots of data from lots of different hospitals, where the data definitions and the metadata were different. Two examples. When did they indicate the antibiotic was given? Was it when it was ordered, dispensed from the pharmacy, delivered to the floor, brought to the bedside, put in the IV, or when the IV starts flowing? Different hospitals used a different metric of when it started. When did surgery occur? When they were wheeled into the OR, when they were prepped and draped, when the first incision occurred? All different. And they concluded, quite dramatically, that it didn't matter when you gave the pre-op antibiotic and whether or not you get a post-op infection. And everybody who was intimate with the prior studies just completely ignored and discounted that study. It was wrong. And it was wrong because of the lack of commonality and the normalization of data definitions and metadata definitions. So because of that, this problem is much more challenging than you would think. If it were so easy as to put all these data together and operate on it, normalize and operate on it, we would've done that a long time ago. Semantic interoperability remains a big problem and we have a lot of heavy lifting ahead of us. I'm working with the Global Alliance for Genomics and Health, for example. There's like 30 different major ontologies for how you represent genetic information. And different institutions are using different ones in different ways, in different versions, over different periods of time. That's a mess. >> Are all those issues applicable when you're talking about a personalized data set versus a population?
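The antibiotic-timing story above can be made concrete with a toy simulation. Every number here is invented for illustration (a 45-minute offset between "ordered" and "infused", a synthetic risk score): pooling data recorded under two different metadata definitions dilutes a real timing signal, which is exactly the failure mode being described.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# True minutes between antibiotic infusion and incision, plus a toy "risk"
# score that genuinely depends on that gap. All numbers are invented for
# illustration -- this is not clinical data.
true_gap = rng.uniform(0, 60, n)
risk = true_gap + rng.normal(0, 5, n)

# Hospital A logs the *order* time (~45 minutes before infusion); hospital B
# logs the infusion itself. Same event, two metadata definitions.
hospital = rng.integers(0, 2, n)
recorded_gap = true_gap + np.where(hospital == 0, 45.0, 0.0)

within = np.corrcoef(true_gap, risk)[0, 1]      # one consistent definition
pooled = np.corrcoef(recorded_gap, risk)[0, 1]  # naively pooled definitions

print(round(within, 2), round(pooled, 2))  # pooling dilutes a real signal
```

The hospital-dependent offset adds variance that is uncorrelated with outcome, so the pooled correlation shrinks even though the underlying effect is unchanged; with enough definitional spread it can vanish entirely.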
>> Well, so N-of-1 studies and single-subject research is an emerging field of statistics. So there's some really interesting new models, like stepped-wedge analytics, for doing that on small sample sizes, recruiting people asynchronously. There's single-subject research statistics. You compare yourself with yourself at a different point in time, in a different context. So there are emerging statistics to do that, and as long as you use the same sensor, you won't have a problem. But people are changing their remote sensors and you're getting different data. It's measured in different ways with different sensors at different normalization and different calibration. So yes. It even persists in the N-of-1 environment. >> Yeah, you have to get started with a large N that you can apply to the N of 1. I'm actually going to attack your question from a different perspective. So who has the data? The millions of examples to train a deep learning system from scratch. It's a very limited set right now. Technologies such as the Collaborative Cancer Cloud and The Data Exchange are definitely impacting that and creating larger and larger sets of critical mass. And again, notwithstanding the very challenging semantic interoperability questions. But there's another opportunity. Kay asked about what's changed recently. One of the things that's changed in deep learning is that we now have modules that have been trained on massive data sets that are actually very smart at certain kinds of problems. So, for instance, you can go online and find deep learning systems that actually can recognize, better than humans, whether there's a cat, dog, motorcycle, house, in a photograph.
They're actually very general to images, not just finding cats, dogs, trees. So what happens is you can do something called transfer learning, where you take a small or modest data set and actually reoptimize it for your specific problem very, very quickly. And so we're starting to see a place where you can... On one end of the spectrum, we're getting access to the computing capabilities and the data to build these incredibly expressive deep learning systems. And over here on the right, we're able to start using those deep learning systems to solve custom versions of problems. Just last weekend, or two weekends ago, in 20 minutes, I was able to take one of those general systems and create one that could recognize all different kinds of flowers. Very subtle distinctions that I would never be able to know on my own. But I happened to be able to get the data set and literally, it took 20 minutes and I have this vision system that I could now use for a specific problem. I think that's incredibly profound, and I think we're going to see this spectrum of wherever you are in your ability to get data and to define problems and to put hardware in place, to see really neat customizations and a proliferation of applications of this kind of technology. >> So one other trend I think, I'm very hopeful about it... So this is a hard problem clearly, right? I mean, getting data together, formatting it from many different sources, it's one of these things that's probably never going to happen perfectly. But one trend that is extremely hopeful to me is the fact that the cost of gathering data has precipitously dropped. Building that thing is almost free these days. I can write software and put it on 100 million cell phones in an instant. You couldn't do that five years ago even, right? And so, the amount of information we can gain from a cell phone today has gone up. We have more sensors. We're bringing online more sensors.
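The transfer-learning recipe described above (keep the expressive, generally-trained layers frozen; refit a small head on a modest dataset) can be sketched without any pretrained model at hand. Here random ReLU features stand in for a pretrained network's frozen lower layers; the XOR-style task, dataset sizes, and ridge-regression head are all illustrative assumptions, not anything from the panel:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for transfer learning: a frozen, expressive feature layer
# (random ReLU features, standing in for the lower layers of a pretrained
# vision network) plus a small linear head refit to a new task.
W = rng.normal(size=(2, 256))
b = rng.normal(size=256)

def features(X):
    """The frozen 'pretrained' feature extractor -- never updated."""
    return np.maximum(X @ W + b, 0.0)

# The new task: a modest dataset with XOR-style labels, not linearly separable.
X = rng.uniform(-1, 1, size=(400, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float)
t = 2.0 * y - 1.0  # +/-1 targets for a least-squares head

def fit_head(F):
    """Refit only the linear head: closed-form ridge regression."""
    return np.linalg.solve(F.T @ F + 1e-3 * np.eye(F.shape[1]), F.T @ t)

F = features(X)
transfer_acc = ((F @ fit_head(F) > 0) == (y == 1)).mean()

raw = np.hstack([X, np.ones((len(X), 1))])  # baseline: linear model on raw inputs
linear_acc = ((raw @ fit_head(raw) > 0) == (y == 1)).mean()

print(linear_acc, transfer_acc)  # the reused features rescue the small dataset
```

Only the head is optimized, which is why, with a genuinely pretrained network, this step can take minutes rather than the days needed to train the feature layers from scratch.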
People have Apple Watches and they're sending blood data back to the phone, so once we can actually start gathering more data and do it cheaper and cheaper, it actually doesn't matter where the data is. I can write my own app. I can gather that data and I can start driving the correct inferences, or useful inferences, back to you. So that is a positive trend I think here, and personally, I think that's how we're going to solve it, is by gathering from that many different sources cheaply. >> Hi, my name is Pete. I've very much enjoyed the conversation so far, but I was hoping perhaps to bring a little bit more focus into Precision Medicine and ask two questions. Number one, how have you applied the AI technologies, as they're emerging so rapidly, to natural language processing? I'm particularly interested in, if you look at things like Amazon Echo or Siri, or the other voice recognition systems that are based on AI, they've just become incredibly accurate, and I'm interested in specifics about how I might use technology like that in medicine. So where would I find a medical nomenclature and perhaps some reference to a back end that works that way? And the second thing is, what specifically is Intel doing, or making available? You mentioned some open source stuff on cats and dogs and stuff, but I'm the doc, so I'm looking at the medical side of that. What are you guys providing that would allow us who are kind of geeks on the software side, as well as being docs, to experiment a little bit more thoroughly with AI technology? Google has a free AI toolkit. Several other people have come out with free AI toolkits in order to accelerate that. There's special hardware now, with graphics and different processors hitting amazing speeds. And so I was wondering, where do I go in Intel to find some of those tools and perhaps learn a bit about the fantastic work that you guys are already doing at Kaiser?
>> Let me take that first part and then we'll be able to talk about the MD part. So in terms of technology, this is what's extremely exciting now about what Intel is focusing on. We're providing those pieces. So you can actually assemble and build the application. How you build that application specific for MDs and the use cases is up to you or the one who's building out the application. But we're going to power that technology from multiple perspectives. So Intel is already the main force behind the data center, right? Cloud computing, all this is already Intel. We're making that extremely amenable to AI and setting the standard for AI in the future, so we can do that from a number of different mechanisms. For somebody who wants to develop an application quickly, we have hosted solutions. Intel Nervana is kind of the brand for these kinds of things. Hosted solutions will get you going very quickly. Once you get to a certain level of scale, where costs start making more sense, things can be bought on premise. We're supplying that. We're also supplying software that makes that transition essentially free. Then taking those solutions that you develop in the cloud, or develop in the data center, and actually deploying them on device. You want to write something on your smartphone or PC or whatever. We're actually providing those hooks as well, so we want to make it very easy for developers to take these pieces and actually build solutions out of them quickly, so you probably don't even care what hardware it's running on. You're like, here's my data set, this is what I want to do. Train it, make it work. Go fast. Make my developers efficient. That's all you care about, right? And that's what we're doing. We're taking it from that point of how do we best do that? We're going to provide those technologies. In the next couple of years, there's going to be a lot of new stuff coming from Intel. >> Do you want to talk about AI Academy as well?
>> Yeah, that's a great segue there. In addition to this, we have an entire set of tutorials and other online resources and things we're going to be bringing into the academic world for people to get going quickly. So that's not just enabling them on our tools, but also just general concepts. What is a neural network? How does it work? How does it train? All of these things are available now, and we've made a nice, digestible class format that you can actually go and play with. >> Let me give a couple of quick answers in addition to the great answers already. So you're asking why can't we use medical terminology and do what Alexa does? Well, you may not be aware of this, but Andrew Ng, who was the AI guy at Google, they have a medical chat bot in China today. I don't speak Chinese. I haven't been able to use it yet. There are two similar initiatives in this country that I know of. There's probably a dozen more in stealth mode. But Lumiata and Health Cap are doing chat bots for health care today, using medical terminology. You have the compound problem of semantic normalization within a language, compounded across languages. I've done a lot of work with an international organization called SNOMED, which translates medical terminology. So you're aware of that. We can talk offline if you want, because I'm pretty deep into the semantic space.
Thanks for coming. (applause) (light electronic music)

Published Date : Mar 12 2017

Dr. Naveen Rao | SXSW 2017


 

(bright music) >> Narrator: Live from Austin, Texas. It's theCUBE, covering South by Southwest 2017. Brought to you by Intel. Now here's John Furrier. >> We're here live at South by Southwest in Austin, Texas. SiliconANGLE, theCUBE, our broadcast, we go out and extract the signal from the noise. I'm John Furrier, I'm here with Naveen Rao, the vice president and general manager of the Artificial Intelligence Solutions Group at Intel. Welcome to theCUBE. >> Thank you, yeah. >> So we're here, big crowd here at Intel, Intel AI lounge. Okay, so that's your wheelhouse. You're the general manager of AI solutions. >> Naveen: That's right. >> What is AI? (laughs) I mean-- >> AI has been redefined through time a few times. Today AI means generally applied machine learning. Basically ways to find useful structure in data to do something with. It's a tool, really, more than anything else. >> So obviously AI is a mental model, people can understand kind of what's going on with software. Machine learning and IoT, in the industry, it's a hot area, but this really points to a future world where you're seeing software tackling new problems at scale. So cloud computing, what you guys are doing with the chips and software has now created a scale dynamic. Similar to Moore's, but Moore's Law is done for devices. You're starting to see software impact society. So what are some of those game changing impacts that you see and that you're looking at at Intel? >> There are many different thought labors that many of us will characterize as drudgery. For instance, if I'm an insurance company, and I want to assess the risk of 10 million pages of text, I can't do that very easily. I have to have a team of analysts run through, write summaries. These are the kind of problems we can start to attack. So the way I always look at it is, what a bulldozer was to physical labor, AI is to data.
To thought labor, we can really get through much more of it and use more data to make our decisions better. >> So what are the big game changing things that are going on that people can relate to? Obviously, autonomous vehicles is one that we can all look at and say, "Wow, that's mind blowing." Smart cities is one that you say, "Oh my god, I'm a resident of a community. "Do they have to re-change the roads? "Who writes the software, is there a budget for that?" Smart home, you see Alexa with Amazon, you see Google with their home product. Voice bots, voice interfaces. So the user interface is certainly changing. How is that impacting some of the things that you guys are working on? >> Well, to the user interface changing, I think that has an entire dynamic on how people use tools. The easier something is, the more people use it, the more pervasive it becomes, and we start discovering these emergent dynamics. Like an iPod, for instance. Storing music in digital form on small devices was around before the iPod. But when it made it easy to use, that sort of gave rise to the smartphone. So I think we're going to start seeing some really interesting dynamics like that.
>> It does, but actually there's one thing missing from what you just described, which is that our ability to scale data storage and data collection has outpaced our ability to compute on it. Computing on it is typically some sort of quadratic function, something that grows faster than the amount of data itself. And our compute has really not caught up with that, and a lot of that has been more about focus. Computers were really built to automate streams of tasks, and this sort of idea of going highly parallel and distributed is something somewhat new. It's been around a lot in academic circles, but the real use case to drive it home and build technologies around it is relatively new. And so we're right now in the midst of transforming computer architecture into something that becomes a data inference machine, not just a way to automate compute tasks, but to actually do data inference and find useful inferences in data. >> And so machine learning is the hottest trend right now that kind of powers AI, but also there's some talk in the leader circles around learning machines. Data, learning from engaged data, or however you want to call it, also brings out another question. How do you see that evolving, because do we need to have algorithms to police the algorithms? Who teaches the algorithms? So you bring in this human aspect of it. So how does the machine become a learning machine? Who teaches the machine, is it... (laughs) I mean, it's crazy. >> Let me answer that a little bit with a question. Do you have kids? >> Yes, four. >> Does anyone police you on raising your kids? >> (laughs) Kind of, a little bit, but not much. They complain a lot. >> I would argue that it's not so dissimilar. As a parent, your job is to expose them to the right kind of biases, or unbiased data, as much as possible, like experiences, they're exactly that. I think this idea of shepherding data is extremely important. And we've seen it in solutions that Google has brought out.
There are these little unexpected biases, and a lot of those come from just what we have in the data. And AI is no different than a regular intelligence in that way; it's presented with certain data, it learns from that data, and its biases are formed that way. There's nothing inherent about the algorithm itself that causes that bias other than the data. >> So you're saying to me that exposing more data is actually probably a good thing? >> It is. Exposing different kinds of data, diverse data. To give you an example from the biological world, children who have never seen people of different races tend to notice it as something new and unique, and they'll tease it out. It's like, oh, that's something different. Whereas children who are raised with people of many diverse face types are perfectly okay seeing new diverse face types. So it's the same kind of thing in AI, right? It's going to hone in on the trends that are common, and things that are outliers, we're going to call as such. So having good, balanced datasets, the way we collect that data, the way we sift through it and actually present it to an AI, is extremely important.
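A hedged sketch of the balanced-dataset point: all the numbers below (group sizes, distributions, the linear-probability model) are invented for illustration. A model fit to data where one group is rare barely learns to recognize that group at all, and rebalancing the same underlying data largely fixes it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical skewed training set: 950 examples of group A, only 50 of
# group B. Distributions and sizes are invented for illustration.
x_a = rng.normal(0.0, 1.0, 950)
x_b = rng.normal(2.0, 1.0, 50)
x = np.concatenate([x_a, x_b])
y = np.concatenate([np.zeros(950), np.ones(50)])

def recall_on_group_b(x_train, y_train, x_test_b):
    """Least-squares linear probability model, thresholded at 0.5."""
    A = np.column_stack([x_train, np.ones_like(x_train)])
    w, c = np.linalg.lstsq(A, y_train, rcond=None)[0]
    return float(((w * x_test_b + c) > 0.5).mean())

x_test_b = rng.normal(2.0, 1.0, 500)  # fresh group-B examples

skewed_recall = recall_on_group_b(x, y, x_test_b)

# "Exposing different kinds of data": rebalance by upsampling the rare group.
idx = rng.integers(0, 50, 950)
x_bal = np.concatenate([x_a, x_b[idx]])
y_bal = np.concatenate([np.zeros(950), np.ones(950)])
balanced_recall = recall_on_group_b(x_bal, y_bal, x_test_b)

print(skewed_recall, balanced_recall)  # the skewed model barely sees group B
```

The bias is not in the algorithm, which is identical in both fits; it is entirely in what the training data exposed the model to.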
Is it just more chips, is it software? Can you explain, take a minute to explain what Intel's doing specifically? >> Intel is uniquely positioned in this space, 'cause it's a great example of a full end to end problem. We have in-car compute, we have software, we have interfaces, we have actuators. That's maybe not Intel's sweet spot. Then we have connectivity, and then we have cloud. Intel is in every one of those things, and so we're extremely well positioned to drive this field forward. Now you ask what are we doing in terms of hardware and software, yes, it's all of it. This is a big focus area for Intel now. We see autonomous vehicles as being one of the major ways that people interact with the world, like locality between cars and interaction through social networks and these kinds of things. This is a big focus area, we are working on the in-car compute actively, we're going to lead that, 5G is a huge focus for Intel, as you might've seen at Mobile World Congress and other places. And then the data center. And so we own the data center today, and we're going to continue to do that with new technologies and actually enable these solutions, not just from a pure hardware primitives perspective, but from the software-hardware interaction in full stack. >> So for those people who think of Intel as a chip company, obviously you guys abstract away complexities and put it into silicon, I obviously get that. Google Next this week, one thing I was really impressed by was the TensorFlow machine learning algorithms in open source, you guys are optimizing the Xeon processor to offload, not offload, but kind of take on... Is this kind of the paradigm that Intel looks at, that you guys will optimize the highest performance in the chip where possible, and then let the software be more functional? Is that a guiding principle, is that a one off? >> I would say that Intel is not just a chip company. We make chips, but we're a platform solutions company. 
So we sell primitives to various levels, and so, in certain cases, yes, we do optimize for software that's out there because that drives adoption of our solutions, of course. But in new areas, like the car for instance, we are driving the whole stack, it's not just the chip, it's the entire package end to end. And so with TensorFlow, definitely. Google is a very strong partner of ours, and we continue to team up on activities like that. >> We are talking with Naveen Rao, vice president and general manager of Intel's AI solutions. Breaking it down for us. This end to end thing is really interesting to me. So I want to just double click on that a little bit. It requires a community to do that, right? So it's not just Intel, right? Intel's always had a great rising tide floats all boats kind of concept over the life of the company, but now, more than ever, it's an API world, you see integration points between companies. This becomes an interesting part. Can you talk to that point about how you guys are enabling partners to work with, and if people want to work with Intel, how do they work, from a developer to whoever? How do you guys view this community aspect? I mean, sure you'd agree with that, right? >> Yeah, absolutely. Working with Intel can take on many different forms. We're very active in the open source community. The Intel Nervana AI solutions are completely open source. We're very happy to enable people in the open source, help them develop their solutions on our hardware, but also, the open source is there to form that community and actually give us feedback on what to build. The next piece is kind of one click down, if you're actually trying to build an end to end solution, like you're saying, you got a camera. We're not building cameras. But these interfaces are pretty well defined. Generally what we'll do is, we like to select some partners that we think are high value add. 
And we work with them very closely, and we build stuff that our customers can rely on. Intel stands for quality. We're not going to put Intel branding on something, unless it sort of conforms to some really high standard. And so that's I think a big power here. It doesn't mean we're not going to enable the people that aren't our channel partners or whatever, they're going to have to be enabled through more of a standard set of interfaces, software or hardware. >> Naveen, I'll ask you, in the final couple minutes we have left, to kind of zoom out and look at the coolness of the industry right now. So you're exposed, your background, we got your PhD, and then you topic wise now heading up the AI solutions. You probably see a lot of stuff. Go down the what's cool to you list, share with the audience some of the cool things that you can point to that we should pay attention to or even things that are cool that we should be aware that we might not be aware of. What are some of the coolest things that are out there that you could share? >> To share new things, we'll get to that in a second. Things I think are one of my favorites, AlphaGo, I know this is like, maybe it's hackneyed. But as an engineering student in CS in the mid-90s, studying artificial intelligence back then or what we called artificial intelligence, Go was just off the table. That was less than 20 years ago. In that time, it looked like such an insurmountable problem, the brain is doing something so special that we're just not going to figure it out in my lifetime, to actually doing it is incredible. So to me, that represents a lot. So that's a big one. Interesting things that you may not be aware of are other use cases of AI, like we see it in farming. This is something we take for granted. We go to the grocery store, we pick up our food and we're happy, but the reality is, that's a whole economy in and of itself, and scaling it as our population scales is an extremely difficult thing to do. 
And we're actually interacting with companies that are doing this at multiple levels. One is at the farming level itself, automating things, using AI to determine the state of different crops and actually taking action in the field automatically. That's huge, this is back-breaking work. Humans don't necessarily-- 
So we're actually getting to a point where there's a line of sight. We're not there yet, I can see it in the next 10 years. >> So the fog is lifting. All right, final question, just more of a personal note. Obviously, you have a neuroscience background, you mentioned that Go is cool. But the humanization factor's coming in. And we mentioned that ethics came up, but we don't have time to talk about the ethics role, but as societal changes are happening, with these new impacts of technologies, there's real impact. Whether it's solving diseases and farming, or finding missing children, there's some serious stuff that's really being done. But the human aspects of converging with algorithms and software and scale. Your thoughts on that, how do you see that and how would you, a lot of people are trying to really put this in a framework to try to advance more either sociology thinking, how do I bring sociology into computer science in a way that's relevant. What are some of your thoughts here? Can you share any color commentary? >> I think it's a very difficult thing to comment on, especially because there are these emergent dynamics. But I think what we'll see is, just as social networks have interfered in some ways and actually helped our interaction with each other, we're going to start seeing that more and more. We can have AIs that are filtering interactions for us. A positive of that is that we can actually understand more about what's going on around in our world, and we're more tightly interconnected. You can sort of think of it as a higher bandwidth communication between all of us. When we were in hunter-gatherer societies, we could only talk to so many people in a day. Now we can actually do more, and so we can gather more information. Bad things are maybe that things become more impersonal, or people have to start doing weird things to stand out in other people's view. There's all these weird interactions-- 
(laughs) >> A little bit like Twitter. You can say ridiculous things sometimes to get noticed. We're going to continue to see that, we're already starting to see that at this point. And so I think that's really where the social dynamic happened. It's just how it impacts our day to day communication. >> Talking to Naveen Rao, great conversation here inside the Intel AI lounge. These are the kind of conversations that are going to be on more and more kitchen tables across the world, I'm John Furrier with theCUBE. Be right back with more after this short break. >> Thanks, John. (bright music)

Published Date : Mar 10 2017


Lisa Spelman, Intel - Google Next 2017 - #GoogleNext17 - #theCUBE


 

(bright music) >> Narrator: Live from Silicon Valley. It's theCUBE, covering Google Cloud Next 17. >> Okay, welcome back, everyone. We're live in Palo Alto for theCUBE special two day coverage here in Palo Alto. We have reporters, we have analysts on the ground in San Francisco, analyzing what's going on with Google Next, we have all the great action. Of course, we also have reporters at Open Compute Summit, which is also happening in San Jose, and Intel's at both places, and we have an Intel senior executive on the line here, on the phone, Lisa Spelman, vice president and general manager of the Xeon product line, with product management responsibility as well as marketing across the data center. Lisa, welcome to theCUBE, and thanks for calling in and dissecting Google Next, as well as teasing out maybe a little bit of OCP around the Xeon processor, thanks for calling. >> Lisa: Well, thank you for having me, and it's hard to be in many places at once, so it's a busy week and we're all over, so that's that. You know, we'll do this on the phone, and next time we'll do it in person. >> I'd love to. Well, more big news is obviously Intel has a big presence at Google Next, and tomorrow there's going to be some activity with some of the big name executives at Google. Talking about your relationship with Google, aka Alphabet, what are some of the key things that you guys are doing with Google that people should know about, because this is a very turbulent time in the ecosystem of the tech business. You saw Mobile World Congress last week, we've seen the evolution of 5G, we have network transformation going on. Data centers are moving to a hybrid cloud, in some cases, cloud native's exploding. So all new kind of computing environment is taking shape. What is Intel doing here at Google Next that's a proof point to the trajectory of the business? 
>> Lisa: Yeah, you know, I'd like to think it's not too much of a surprise that we're there, arm in arm with Google, given all of the work that we've done together over the last several years in that tight engineering and technical partnership that we have. One of the big things that we've been working with Google on is, as they move from delivering cloud services for their own usage and for their own applications that they provide out to others, but now as they transition into being a cloud service provider for enterprises and other IT shops as well, so they've recently launched their Google Cloud platform, just in the last week or so. Did a nice announcement about the partnership that we have together, and how the Google Cloud platform is now available and running and open for business on our latest next generation Intel Xeon product, and that's codenamed Skylake, but that's something that we've been working on with them since the inception of the design of the product, so it's really nice to have it out there and in the market, and available for customers, and we very much value partnerships, like the one we have with Google, where we have that deep technical engagement to really get to the heart of the workload that they need to provide, and then can design product and solution around that. So you don't just look at it as a one off project or a one time investment, it's an ongoing continuation and evolution of new product, new features, new capabilities to continue to improve their total cost of ownership and their customer experience. >> Well, Lisa, this is your baby, the Xeon, codename Skylake, which I love that name. Intel always has great codenames, by the way, we love that, but it's real technology. 
Can you share some specific features of what's different around these new workloads because, you know, we've been teasing out over the past day and we're going to be talking tomorrow as well about these new use cases, because you're looking at a plethora of use cases, from IoT edge all the way down into cloud native applications. What specific things is Xeon doing that's next generation that you could highlight, that points to this new cloud operating system, the cloud service providers, whether it's managed services to full blown down and dirty cloud? >> Lisa: So it is my baby, I appreciate you saying that, and it's so exciting to see it out there and starting to get used and picked up and be unleashing it on the world. With this next generation of Xeon, it's always about the processor, but what we've done has gone so much beyond that, so we have a ton of what we call platform level innovation that is coming in, we really see this as one of our biggest kind of step function improvements in the last 10 years that we've offered. Some of the features that we've already talked about are things like AVX-512 instructions, which I know just sounds fun and rolls off the tongue, but really it's very specific workload acceleration for things like high performance computing workloads. And high performance computing is something that we see more and more getting used and accessed in cloud style infrastructure. So it's this perfect marrying of that workload specifically deriving benefit from the new platforms, and seeing really strong performance improvements. It also speaks to the way with Intel and Xeon families, 'cause remember, with Xeon, we have Xeon Phi, you've got standard Xeon, you've got Xeon D. You can use these instructions across the families and have workloads that can move to the most optimized hardware for whatever you're trying to drive. 
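The AVX-512 point is about data-level parallelism: a 512-bit vector register holds sixteen 32-bit floats, so one instruction can apply the same operation across a 16-wide lane of data at once. The pure-Python sketch below is only an analogy of that execution model, not real SIMD code.

```python
# Rough sketch of the idea behind AVX-512: a 512-bit register holds sixteen
# 32-bit floats, so the hardware applies one operation to a 16-wide "lane"
# of data per instruction. This loop mimics that chunked execution model.

LANES = 16  # 512 bits / 32 bits per float

def vector_add(a, b):
    """Walk the inputs one 16-wide chunk at a time, like a vector unit would."""
    out = []
    for i in range(0, len(a), LANES):
        # In hardware, this whole chunk would be a single vector instruction.
        out.extend(x + y for x, y in zip(a[i:i + LANES], b[i:i + LANES]))
    return out

print(vector_add([1.0] * 32, [2.0] * 32) == [3.0] * 32)
```

The result is identical to a scalar loop; the win on real silicon is that each 16-wide chunk retires as one instruction instead of sixteen.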
Some of the other things that we've talked about and announced is we'll have our next generation of Intel Resource Director technology, which really helps you manage and provide quality of service within your application, which is very important to cloud service providers, giving them control over hardware and software assets so that they can deliver the best customer experience to their customers based on the service level agreement they've signed up for. And then the other one is Intel Omni-Path architecture, so again, fairly high performance computing focused product, Omni-Path is a fabric, and we're going to offer that in an integrated fashion with Skylake so that you can get even higher level of performance and capability. So we're looking forward to a lot more that we have to come, the whole of the product line will continue to roll out in the middle of this year, but we're excited to be able to offer an early version to the cloud service providers, get them started, get it out in the market and then do that full scale enterprise validation over the next several months. >> So I got to ask you the question, because this is something that's coming up, we're seeing a transition, also the digital transformation's been talked about for a while. Network transformation, IoTs all around the corner, we've got autonomous vehicles, smart cities, on and on. But I got to ask you though, the cloud service providers seem to be coming out of this show as a key storyline in Google Next as the multi cloud architectures become very clear. So it's become clear, not just this show but it's been building up to this, it's pretty clear that it's going to be a multi cloud world. As well as you're starting to see the providers talk about their SaaS offerings, Google talking about G Suite, Microsoft talks about Office 365, Oracle has their apps, IBM's got Watson, so you have this SaaSification. So this now creates a whole another category of what cloud is. 
If you include SaaS, you're really talking about Salesforce, Adobe, you know, on and on the list, everyone is potentially going to become a SaaS provider whether they're unique cloud or partnering with some other cloud. What does that mean for a cloud service provider, what do they need for applications support requirements to be successful? >> So when we look at the cloud service provider market inside of Intel, we are talking about infrastructure as a service, platform as a service and software as a service. So cutting across the three major categories, and up until now, infrastructure as a service has gotten a lot of the airtime or focus, but SaaS is actually the bigger business, and that's why you see, I think, people moving towards it, especially as enterprise IT becomes more comfortable with using SaaS applications. You know, maybe first they started with offloading their expense report tool, but over time, they've moved into more sophisticated offerings that free up resources for them to do their most critical or business critical applications that they require to stay in more of a private cloud. I think that evolution to a multi cloud, a hybrid cloud, has happened across the entire industry, whether you are an enterprise or whether you are a cloud service provider. And then the move to SaaS is logical, because people are demanding just more and more services. One of the things through all our years of partnering with the biggest to the smallest cloud service providers and working so closely on those technical requirements that we've continued to find is that total cost of ownership really is king, it's that performance per dollar, TCO, that they can provide and derive from their infrastructure, and we focused a lot of our engineering and our investment in our silicon design around providing that. 
We have multi generations that we've provided even just in the last five years to continue to drive those step function improvements and really optimize our hardware and the code that runs on top of it to make sure that it does continue to deliver on those demanding workloads. The other thing that we see the providers focusing on is what's their differentiation. So you'll see cloud service providers that will look through the various silicon features that we offer and choose, they'll pick and choose based on whatever their key workload is or whatever their key market is, and really kind of hone in and optimize for those silicon features so that they can have a differentiated offering into the market about what capabilities and services they'll provide. So it's an area where we continue to really focus our efforts, understand the workload, drive the TCO down, and then focus in on the design point of what's going to give that differentiation and acceleration. >> It's interesting, the definition's also where I would agree with you, the cloud service provider is a huge market when you even look at the SaaS. 'Cause whether you're talking about Uber or Netflix, for instance, examples people know about in real life, you can't ignore these new diverse use cases coming out. For instance, I was just talking with Stu Miniman, one of our analysts here, Wikibon, and Riot Games could be considered a cloud, right, I mean, 'cause it's a SaaS platform, it's gaming. You're starting to see these new apps coming out of the woodwork. There seems to be a requirement for being agile as a cloud provider. How do you enable that, what specifically can you share, if I'm a cloud service provider, to be ready to support anything that's coming down the pike? 
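The "performance per dollar, TCO" framing Lisa keeps returning to can be sketched with a toy model. Every number below is invented for illustration; real TCO models also account for power, facilities, software licensing, and operations staff.

```python
# Toy "performance per TCO dollar" comparison of two hypothetical server
# generations. All figures are made up for illustration only.

def perf_per_tco_dollar(throughput, capex, annual_opex, years=4):
    """Work delivered per dollar of total cost over the deployment life."""
    total_cost = capex + annual_opex * years
    return throughput / total_cost

old_gen = perf_per_tco_dollar(throughput=100, capex=5_000, annual_opex=1_000)
new_gen = perf_per_tco_dollar(throughput=150, capex=6_000, annual_opex=900)
print(new_gen > old_gen)  # the newer part wins on perf per TCO dollar here
```

The point of the metric is that a pricier part can still win: here the new generation costs more up front but delivers more work per total dollar spent.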
Lisa: You know, we do do a lot of workload and market analysis inside of Intel and the data center group, and you've seen over the past five years how much we've expanded and broadened our product portfolio. So again, it will still be built upon that foundation of Xeon and what we have there, but we've gone to offer a lot of varieties. So again, I mentioned Xeon Phi. Xeon Phi at 72 cores, a bootable Xeon with specific workload acceleration targeted at high performance computing and other analytics workloads. And then you have things at the other end. You've got Xeon D, which is really focused at more frontend web services and storage and network workloads, or Atom, which is even lower power and more focused on cold and warm storage workloads, and again, that network function. So you could then say we're not just sticking with one product line and saying this is the answer for everything, we're saying here's the core of what we offer, and the features people need, and finding options, whether they range from low power to high power high performance, and kind of mixed across that whole kind of workload spectrum, and then we've broadened around the CPU into a lot of other silicon innovation. So I don't know if you guys have had a chance to talk about some of the work that we're doing with FPGAs, with our FPGA group and driving and delivering cloud and network acceleration through FPGAs. We've also introduced new products in the last year like Silicon Photonics, so dealing with network traffic crossing through-- 
Lisa: Exactly, so it requires a level of sophistication and understanding what you need the workload to accelerate, but once you have it, it is a very impressive and powerful performance gain for you, so the cloud service providers are a perfect market for that, because they have very sophisticated IT and very technically astute engineering teams that are able to really, again, go back to the workload, understand what they need and figure out the right software solution to pair with it. So that's been a big focus of our targeting. And then, like I said, we've added all these different things, different new products to the platform that start to, over time, just work better and better together, so when you have things like Intel SSD there together with Intel CPUs and Intel Ethernet and Intel FPGA and Intel Silicon Photonics, you can start to see how the whole package, when it's designed together under one house, can offer a tremendous amount of workload acceleration. >> I got to ask you a question, Lisa, 'cause this comes up, while you're talking, I'm just in my mind visualizing a new kind of virtual computer server, the cloud is one big server, so it's a design challenge. And what was teased out at Mobile World Congress that was very clear was this new end to end architecture, you know, re-imagined, but if you have these processors that have unique capabilities, that have use case specific capabilities, in a way, you guys are now providing a portfolio of solutions so that it almost can be customized for a variety of cloud service providers. Am I getting that right, is that how you guys see this happening where you guys can just say, "Hey, just mix and match what you want and you're good." 
Lisa: Well, and we try to provide a little bit more guidance than as you wish, I mean, of course, people have their options to choose, so like, with the cloud service providers, that's what we have, really tight engineering engagement, so that we can, you know, again, understand what they need, what their design point is, what they're honing in on. You might work with one cloud service provider that is very facilities limited, and you might work with another one that is space limited, another one that's power limited, and another one where performance is king, so we can cut some SKUs to help meet each of those needs. Another good example is in the artificial intelligence space where we did another acquisition last year, a company called Nervana that's working on optimized silicon for a neural network. And so now we have put together this AI portfolio, so instead of saying, "Oh, here's one answer "for artificial intelligence," it's, "Here's a multitude of answers where you've got Xeon," so if you have underutilized capacity and are starting down your artificial intelligence journey, just use your Xeon capacity with an optimized framework and you'll get great results and you can start your journey. If you are monetizing and running your business based on what AI can do for you and you are leading the pack out there, you've got the best data scientists and algorithm writers and peak running experts in the world, then you're going to want to use something like the silicon that we acquired from the Nervana team, and that codename is Lake Crest, speaking of some lakes there. And you'll want to use something like Xeon with Lake Crest to get that ultimate workload acceleration. So we have the whole portfolio that goes from Xeon to Xeon Phi to Xeon with FPGAs or Xeon with Lake Crest. Depending on what you're doing, and again, what your design point is, we have a solution for you. 
And of course, when we say solution, we don't just mean hardware, we mean the optimized software frameworks and the libraries and all of that, that actually give you something that can perform. >> On the competitive side, we've seen the processor landscape heat up on the server and the cloud space. Obviously, whether it's from a competitor or homegrown foundry, whatever fabs are out there, I mean, so Intel's always had a great partnership with cloud service providers. Vis-a-vis the competition and context to that, what are you guys doing specifically and how you'd approach the marketplace in light of competition? >> Lisa: So we do operate in a highly competitive market, and we always take all competitors seriously. So far we've seen the press heat up, which is different than seeing all of the deployments, so what we look for is to continue to offer the highest performance and lowest total cost of ownership for all our customers, and in this case, the cloud service providers, of course. And what do we do is we kind of stick with our game plan of putting the best silicon in the world into the market on a regular beat rate and cadence, and so there's always news, there's always an interesting story, but when you look at having had eight new products and new generations in market since the last major competitive x86 product, that's kind of what we do, just keep delivering so that our customers know that they can bet on us to always be there and not have these massive gaps. And then I also talked to you about portfolio expansion, we don't bet on just one horse, we give our customers the choice to optimize for their workloads, so you can go up to 72 cores with Xeon Phi if that's important, you can go as low as two cores with Atom, if that's what works for you. Just an example of how we try to kind of address all of our customer segments with the right product at the right time. 
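The portfolio idea Lisa describes reads like a dispatch table: pick the part that fits the workload's design point. The mapping below just paraphrases the interview and is not an official Intel selection guide; the workload names are made up for illustration.

```python
# Illustrative sketch of "a portfolio, not one answer": choose a platform
# based on the workload profile. The keys below are hypothetical labels
# paraphrasing the interview, not an official selection guide.

def pick_platform(workload):
    options = {
        "general purpose":        "Xeon",
        "hpc / analytics":        "Xeon Phi",
        "inline acceleration":    "Xeon + FPGA",
        "deep learning training": "Xeon + Lake Crest",
    }
    # Default to the general-purpose part, mirroring the "start your AI
    # journey on the Xeon capacity you already have" advice.
    return options.get(workload, "Xeon")

print(pick_platform("deep learning training"))
```

The design choice being described is exactly this shape: one foundation (Xeon) plus specialized options, selected per design point rather than a single universal part.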
>> And IoT certainly brings a challenge too, when you hear about network edge, that's a huge, huge growth area, I mean, you can't deny that that's going to be amazing, you look at the cars are data centers these days, right? >> Lisa: A data center on wheels. >> Data center on wheels. >> Lisa: That's one of the fun things about my role, even in the last year, is that growing partnership, even inside of Intel with our IoT team, and just really going through all of the products that we have in development, and how many of them can be reused and driven towards IoT solution. The other thing is, if you look into the data center space, I genuinely believe we have the world's best ecosystem, you can't find an ISV that we haven't worked with to optimize their solution to run best on Intel architecture and get that workload acceleration. And now we have the chance to put that same playbook into play in the IoT space, so it's a growing, somewhat nascent but growing market with a ton of opportunity and a ton of standards to still be built, and a lot of full solution kits to be put together. And that's kind of what Intel does, you know, we don't just throw something out to the market and say, "Good luck," we actually put the ecosystem together around it so that it performs. But I think that's kind of what you see with, I don't know if you guys saw our Intel GO announcement, but it's really like the software development kit and the whole product offering for what you need for truly delivering automated vehicles. 
>> Well, Lisa, I got to say, so you guys have a great formula, why fix what's not broken, stay with Moore's law, keep that cadence going, but what's interesting is you are listening and adapting to the architectural shifts, which is smart, so congratulations and I think, as the cloud service provider world changes, and certainly in the data center, it's going to be a turbulent time, but a lot of opportunity, and so good to have that reliability and, if you can make the software go faster then they can write more software faster, so-- >> Lisa: Yup, and that's what we've seen every time we deliver a step function improvement in performance, we see a step function improvement in demand, and so the world is still hungry for more and more compute, and we see this across all of our customer bases. And every time you make that compute more affordable, they come up with new, innovative, different ways to do things, to get things done and new services to offer, and that fundamentally is what drives us, is that desire to continue to be the backbone of that industry innovation. >> If you could sum up in a bumper sticker what that step function is, what is that new step function? >> Lisa: Oh, when we say step functions of improvements, I mean, we're always looking at targeting over 20% performance improvement per generation, and then on top of that, we've added a bunch of other capabilities beyond it. So it might show up as, say, a security feature as well, so you're getting the massive performance improvement gen to gen, and then you're also getting new capabilities like security features added on top. So you'll see more and more of those types of announcements from us as well where we kind of highlight not just the performance but that and what else comes with it, so that you can continue to address, you know, again, the growing needs that are out there, so all we're trying to do is stay a step ahead. 
>> All right, Lisa Spelman, VP and GM of the Xeon product family as well as marketing and data center. Thank you for spending the time and sharing your insights on Google Next, and giving us a peek at the portfolio of the Xeon next generation, really appreciate it, and again, keep on bringing that power, Moore's law, more flexibility. Thank you so much for sharing. We're going to have more live coverage here in Palo Alto after this short break. (bright music)

Published Date : Mar 9 2017
