Humanized Organ Study: An Absolute Requirement for Precision Medicine
>> Hello everybody. I am Toshihiko Nishimura from Stanford. Today, super-aging societies, globalization, global transportation, and emerging infections are major points of concern. In addition, this year we have the COVID-19 pandemic. As you can see here, while new COVID-19 patients worldwide are still increasing, the case count per day in the United States is beginning to decrease. This pandemic has pushed our daily life toward digital transformation: even today, this symposium is being conducted online, and doctor and nurse care is now shifting to telemedicine. Likewise, the drug development process is in need of major change, a paradigm shift. Drug and vaccine development, especially for COVID-19, should be safe, effective, and faster. In the Anesthesia department, which is the biggest department in the School of Medicine, we have the Stanford Laboratory for Drug, Device Development and Regulatory Science, so-called DDDRS. The chairman is Ron Pearl, and the laboratory's leaders are myself and Steven Shafer. In drug development, we have three major pains: one, exceedingly long duration, almost 20 years; two, a huge budget; and three, a very low success rate. As a general overview, drug development runs through discovery, preclinical, and clinical stages, as you see here. What are the DDDRS programs in each stage? Omics programs; single-cell programs; big data, machine learning, deep learning, and AI programs; mathematics and statistics programs; humanized animal programs; SNS programs; engineering programs. And we have an annual symposium. In today's talk, I would like to explain the limitations of mouse science and the significance of humanized mouse science. Out of our separate programs, I will focus on the humanized mouse program. I believe this program is a potent game changer for drug development. When we think of animal experiments, many people immediately think of the mouse. We have more than 30 kinds of inbred wild-type strains, such as C57BL/6, KK-Ay, BALB/c, and so on. Using QA/QC-defined wild-type mice, 18 strains, each given only one intervention, and using mouse genomics and computational genetics analysis, we succeeded in picking out one single causal gene in a week. >> We have another category, gene-manipulated mice: transgenic, knockout, and knock-in mouse groups. So far, more than 40,000 kinds have been registered as of today. The critical requirement from the FDA and PMDA is based on two kinds of animal models showing safety and efficacy, a combination of two animals such as mouse and swine, or mouse and non-human primate, and so on. The mouse is very popular. Why? Because mice are small enough, easy to handle, and cost effective, and we have a big database. However, this comes with a low success rate. Why? >> Our speculation on this issue is that the low success rate comes from a gap between the preclinical proof of concept (POC) and the clinical POC, which is further divided into Phase 1, Phase 2, and Phase 3. The FDA's answer to our question and speculation appeared in Nature Biotechnology: across 7,372 new submissions, roughly 68 percent of candidates failed at Phase 2, and in total about 90 percent failed across the clinical stages. What we can surmise from this study, which the FDA confirmed, is that there is a big discrepancy between preclinical POC and clinical POC. In other words, animal data may be unrepresentative of humans. This Nature Biotechnology report impacted our work significantly. >> What is the solution for this discrepancy? FDA standards require preclinical data from two species.
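As a rough illustration of how those attrition numbers compound, here is a small back-of-the-envelope sketch. The per-phase rates below are approximate figures in the spirit of the Nature Biotechnology analysis cited above, not exact values from it, and they vary by indication and era:

```python
# Illustrative per-phase probabilities that a candidate advances;
# treat these as ballpark figures, not the study's exact numbers.
phase_success = {
    "Phase 1": 0.65,   # safety
    "Phase 2": 0.32,   # efficacy signal; the big fall-off (~68% fail here)
    "Phase 3": 0.60,   # confirmatory trials
    "Approval": 0.83,  # regulatory review
}

overall = 1.0
for phase, p in phase_success.items():
    overall *= p
    print(f"{phase}: {p:.0%} advance -> cumulative {overall:.1%}")
# Cumulative ~10%: roughly 90 percent of candidates entering the
# clinic never reach approval, matching the figure quoted in the talk.
```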
One species is usually mice, but with the reported 90 percent failure despite preclinical data, there is a huge discrepancy between preclinical POC and clinical POC. Our interpretation is that data from mice are sometimes not representative: mice and humans are different, especially in the immune system and the liver. Mouse livers are missing enzymes that the human liver has. This is one huge issue to overcome. To overcome this problem, we started the humanized mouse program. What kinds of humanized animals? We created, first, humanized immune mice; the other is humanized liver mice. What is the definition of a humanized mouse? It should carry human genes, or human cells, or human tissues, or human organs. Let me share one preclinical-stage example of a humanized mouse: the polio receptor (PVR) mouse. This program was led by my mentor. Poliovirus vaccine testing traditionally required non-human primates. Through 13 years of collaboration with the FDA and the WHO polio eradication program, the FDA as well as the WHO finally approved replacing the non-human primate test with the transgenic PVR mouse. This follows the 3Rs principle introduced by Russell and Burch. >> To move this humanized mouse program forward, we needed two other foundations: donor human cell science, as well as recipient mouse science. >> Humanized mice producing human hormones such as GM-CSF and G-CSF, or human cytokines, are required in the long run to maintain human cells in their bodies. Here, based on the NOG mouse, more than 100 kinds of such next-generation strains have already been created. Based on these 100 kinds of NOG mice, we succeeded in creating humanized immune mice: red blood cells, white blood cells, and platelets are beautifully reconstituted in NOG mice. We also succeeded in creating humanized liver mice using hepatocytes from different human donors. We have Asian-liver humanized mice, African-American-liver humanized mice, and Caucasian-liver humanized mice. These are healthy humanized immune and liver models. On the other hand, we also created diseased-liver humanized mice; one example is a congenital liver disease patient model. Among the other models, we have infectious disease models, cancer models, GVHD models, and so on. At which stage or phase can humanized mice be applied? Our objective is any stage, any phase. To make the case, let me show a compound with a huge discrepancy: fialuridine (FIAU), a nucleoside analog and a potent anti-hepatitis B candidate. In the preclinical stage, it did not show any toxicity in mice, rats, dogs, or non-human primates. On the other hand, going into the clinical stage, in Phase 2, out of 15 subjects, five people died and the other ten showed very severe conditions. The reason is that the conventional mouse model did not predict this severe outcome. In contrast, the humanized liver mouse model demonstrated the toxicity within a few days, in blood chemistry data and pathophysiology data. Phase 2 and Phase 3 require a huge number of human subjects. For example, in COVID-19 vaccine development by Pfizer, AstraZeneca, and Moderna today, the sample sizes are in the tens of thousands. Vaccine development for COVID-19 also includes Sinovac and CanSino in China, Novavax and Johnson & Johnson in the US, AstraZeneca in the United Kingdom, and AnGes with Osaka University in Japan, which is already in Phase 2. In the discovery, preclinical, and regulatory stages, industry moves fast.
However, the clinical stage is a tedious road, because those phases require a huge number of human subjects, 9,000 to 30,000 or even more. My conclusion: a humanized mouse model can shorten the duration of drug development, and a humanized mouse model can increase the success rate of drug development. Thank you to Ron Pearl and Steven Shafer at Stanford and his team, and all other colleagues. Thank you for listening.
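For a sense of where clinical sample sizes like 9,000 to 30,000 come from, here is a hedged back-of-the-envelope power calculation for a placebo-controlled vaccine trial, using a standard two-proportion z-test formula. The attack rate and efficacy are assumed, illustrative inputs, not figures from any trial named above:

```python
from statistics import NormalDist

def per_arm_size(p_placebo, efficacy, alpha=0.05, power=0.80):
    """Approximate subjects per arm to detect a vaccine effect.

    A sketch only: real COVID-19 trials used case-driven,
    sequential designs rather than this fixed-sample formula.
    """
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    p1 = p_placebo                    # infection rate, placebo arm
    p2 = p_placebo * (1 - efficacy)   # infection rate, vaccine arm
    pbar = (p1 + p2) / 2
    num = (za * (2 * pbar * (1 - pbar)) ** 0.5
           + zb * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# Assume a 0.5% attack rate over follow-up and 60% true efficacy:
print(round(per_arm_size(0.005, 0.60)))  # ~6,082 per arm, ~12,000 total
```

Rare events drive the size: halve the attack rate and the required enrollment roughly doubles, which is how trials climb into the tens of thousands.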
SUMMARY :
The case count per day in the United States is beginning to decrease. Two animal models are required, such as mouse and swine or mouse and non-human primate. There is a big discrepancy between preclinical POC and clinical POC. What is the definition of a humanized mouse? On the other hand, we also created diseased-liver humanized mice. The other ten subjects showed very severe conditions. Those phases require a huge number of human subjects, 9,000 to 30,000.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ron Pearl | PERSON | 0.99+ |
FDA | ORGANIZATION | 0.99+ |
Pfizer | ORGANIZATION | 0.99+ |
Toshihiko Nishimura | PERSON | 0.99+ |
90% | QUANTITY | 0.99+ |
Two kinds | QUANTITY | 0.99+ |
9,000 | QUANTITY | 0.99+ |
two animals | QUANTITY | 0.99+ |
20 years | QUANTITY | 0.99+ |
One species | QUANTITY | 0.99+ |
two species | QUANTITY | 0.99+ |
100 kinds | QUANTITY | 0.99+ |
United Kingdom | LOCATION | 0.99+ |
7,372 new submissions | QUANTITY | 0.99+ |
13 years | QUANTITY | 0.99+ |
Steven Shafer | PERSON | 0.99+ |
90 per cent | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
today | DATE | 0.98+ |
30,000 | QUANTITY | 0.98+ |
COVID-19 | OTHER | 0.98+ |
more than 30 kinds | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
Two | QUANTITY | 0.98+ |
more than 100 kinds | QUANTITY | 0.98+ |
68 percent | QUANTITY | 0.98+ |
Stanford | ORGANIZATION | 0.98+ |
pandemic | EVENT | 0.98+ |
one single gene | QUANTITY | 0.98+ |
Single | QUANTITY | 0.97+ |
40,000 kind | QUANTITY | 0.97+ |
China | LOCATION | 0.97+ |
AstraZeneca Moderna | ORGANIZATION | 0.97+ |
hepatitis B | OTHER | 0.97+ |
five of people | QUANTITY | 0.96+ |
18 | QUANTITY | 0.96+ |
United States | LOCATION | 0.96+ |
one intervention | QUANTITY | 0.95+ |
COVID-19 pandemic | EVENT | 0.95+ |
polio virus | OTHER | 0.95+ |
two other bonds | QUANTITY | 0.95+ |
one example | QUANTITY | 0.94+ |
one | QUANTITY | 0.93+ |
Polio virus | OTHER | 0.93+ |
10 | QUANTITY | 0.93+ |
Japan | LOCATION | 0.91+ |
Osaka | LOCATION | 0.88+ |
three | QUANTITY | 0.87+ |
GCSF | OTHER | 0.86+ |
phase two | QUANTITY | 0.86+ |
each stages | QUANTITY | 0.85+ |
a week | QUANTITY | 0.78+ |
one huge issue | QUANTITY | 0.75+ |
Precision Medicine | ORGANIZATION | 0.72+ |
DDDRS | ORGANIZATION | 0.72+ |
American African | OTHER | 0.71+ |
GM CSF | OTHER | 0.71+ |
PVR | OTHER | 0.68+ |
FDA PMDA | ORGANIZATION | 0.66+ |
Phase two | QUANTITY | 0.61+ |
phase two | QUANTITY | 0.61+ |
Stanford | LOCATION | 0.59+ |
phase one | QUANTITY | 0.57+ |
GVH | ORGANIZATION | 0.57+ |
Carol Carpenter, Google Cloud & Ayin Vala, Precision Medicine | Google Cloud Next 2018
>> Live from San Francisco, it's the Cube, covering Google Cloud Next 2018. Brought to you by Google Cloud and its ecosystem partners. >> Hello and welcome back to The Cube coverage here live in San Francisco for Google Cloud's conference Next 2018, #GoogleNext18. I'm John Furrier with Jeff Frick, my cohost all week. Third day of three days of wall to wall live coverage. Our next guest, Carol Carpenter, Vice President of Product Marketing for Google Cloud. And Ayin Vala, Chief Data Scientist at the Foundation for Precision Medicine. Welcome to The Cube, thanks for joining us. >> Thank you for having us. >> So congratulations, VP of Product Marketing. Great job getting all these announcements out, all these different products. Open source, BigQuery machine learning, Istio 1.0, I mean, all this, tons of products, congratulations. >> Thank you, thank you. It was a tremendous amount of work. Great team. >> So you guys are starting to show real progress in customer traction, customer scale. Google's always had great technology. Consumption side of it, you guys have made progress. Diane Greene mentioned on stage, on day one, she mentioned health care. She mentioned how you guys are organizing around these verticals. Health care is one of the big areas. Precision Medicine, AI usage, tell us about your story. >> Yes, so we are a very small non-profit. We are at the intersection of data science and medical science, and we work on driving and developing projects that have non-profit impact and social impact in personalized medicine. >> So I think it's amazing. I always think with medicine, right, wherever you are, you look back five years and think, oh my god, that was completely barbaric, right. They used to bleed people out and here, today, we still help cancer patients by basically poisoning them until they almost die and hopefully it kills the cancer first. You guys are looking at medicine in a very different way and the future of medicine is so different than what it is today. And talk about, what is Precision Medicine? Just the descriptor, it's a very different approach to kind of some of the treatments that we still use today in 2018. It's crazy. >> Yes, so Precision Medicine has the meaning of personalized medicine, meaning that we home in on a smaller population of people, trying to see what the driving factors are, individually customized to those populations, and find out the different variables that are important for that population of people for detection of the disease, you know, cancer, Alzheimer's, those things. >> Okay, talk about the news. Okay, go ahead. >> Oh, oh, I was just going to say. And to be able to do what he's doing requires a lot of computational power to be able to actually get that precise. >> Right. Talk about the relationship and the news you guys have here. Some interesting stuff. Non-profits, they need compute power, they need, just like an enterprise. You guys are bringing some change. What's the relationship between you guys? How are you working together? >> So one of our key messages here at this event is really around making computing available for everyone. Making data and analytics and machine learning available for everyone. This whole idea of human-centered AI. And what we've realized is, you know, data is the new natural resource. >> Yeah. >> In the world these days.
And companies that know how to take advantage and actually mine insights from the data to solve problems, like what they're solving at Precision Medicine, that is really where the new breakthroughs are going to come. So we announced a program here at the event. It's called Data Solutions for Change. It's from Google Cloud and it's a program in addition to our other non-profit programs. So we actually have other programs like Google Earth for non-profits, G Suite for non-profits. This one is very much focused on harnessing data and helping non-profits extract insights from it. >> And is it a funding program, is it technology transfer? Can you talk about, just a little detail on how it actually works. >> It's actually a combination of three things. One is funding: it's credits for up to $5,000 a month for up to six months. As well as customer support. One thing we've all talked about is the technology is amazing, but you often also need to be able to apply some business logic around it, and data scientists are somewhat of a challenge to hire these days. >> Yeah. >> So we're also providing free customer support, as well as online learning. >> Talk about the impact of the Cloud technology for the non-profit, because, you know, I'm seeing so much activity, certainly in Washington D.C. and around the world, where, you know, since the Jobs Act, funding has changed. You've got great things happening. You can have mission-based funding. And also, legacy brands are changing, and open source changes things. So faster time to value. (laughs) >> Right. >> And without all the, you know, expertise, it's an issue. How is Cloud helping you be better at what you do? Can you give some examples? >> Yes, so we had two different problems early on, as a small non-profit. First of all, we needed to scale up computationally. We had in-house servers. We needed a HIPAA-compliant way to put our data up. So that's one of the reasons we were able to even use Google Cloud in the beginning. And now, we are able to run our models on entire data sets. Before that, we were only using a small population. And in Precision Medicine, that's very important 'cause you want the entire population. That makes your models much more accurate. The second thing was, we wanted to collaborate with people with clinical research backgrounds. And we needed to provide a platform for them to be able to use, have the data on there, visualize, do computations, anything they want to do. And being on the Cloud really helped us to collaborate much more smoothly, and you know, we only need their Gmail to give them access and things. >> Yeah. >> And we could do it very, very quickly. Whereas before, it would take us months to transfer data. >> Yeah, it's a huge savings. Talk about the machine learning. AutoML's hot at the show, obviously, hot trend. You start to see AI ops coming in and disrupt more of the enterprise side, but as data scientists, as you look at some of these machine learnings, I mean, you must get pretty excited. What are you thinking? What's your vision and how are you going to use it? Like BigQuery's got ML built in now. This is like not new; Google's been using it for awhile. Are you tapping some of that? And what's your team doing with ML? >> Absolutely. We use BigQuery ML. We were able to use it a few months in advance. It's great 'cause our data scientists like to work in BigQuery. You know, you query the data right there. You can actually do the machine learning on there too.
And you don't have to send it to a different part of the platform for that. And it gives you sort of a proof of concept right away. For doing deep learning and those things, we still use Cloud ML, but early on, you want to see if there is potential in the data, and you're able to do that very quickly with BigQuery ML right there. We also use AutoML Vision. We had access to MRI images for about a thousand patients and we wanted to see if we can detect Alzheimer's based on those. And we used AutoML for that. Actually works well. >> Some of the relationships with doctors, they're not always seen as the most tech savvy, though now they are getting more so. As you do all this high-end, geeky stuff, you've got to push it out to an interface. Google's really user-centric philosophy with user interfaces is something it has always been kind of known for. Is that in Sheets, is that G Suite? How will you extend out the analysis and the interactions? How do you integrate into the edge workflow? You know? (laughs) >> So one thing I really appreciated about Google Cloud was that, it seems to me, it's built from the ground up for everyone to use. The ease of access was very important to us, like I said. We have data scientists and statisticians and computer scientists on board, but we needed a method and a platform that everybody can use. And through this program, they actually... You guys provide what's called Qwiklabs, which is, you know, a guided walkthrough of how to spin up a virtual machine and things like that. That, you know, a couple of years ago you had to run too many command lines to get that. Now it's just a push of a button. So that just... Makes it much easier to work with people with background and domain knowledge, and it takes away that 80% of the work that's just data engineering work that they don't want to do. >> That's awesome stuff. Well congratulations. Carol, a question to you is: How does someone get involved in the Data Solutions for Change? An application? Online? Referral? I mean, how do these work? >> All of the above. (John laughs) We do have an online application and we welcome all non-profits to apply if they have a clear, objective data problem that they want to solve. We would love to be able to help them. >> Does scope matter, big size, is it more mission? What's the mission criteria? Is there a certain bar to reach, so to speak, or-- >> Yeah, I mean we're most focused on... there really is no size requirement, in terms of the size of the non-profit or the breadth. It's much more around, do you have a problem that data and analytics can actually address. >> Yeah. >> So really working on problems that matter. And in addition, we actually announced this week that we are partnering with the United Nations on a contest. It's called Sustainable... It's for Visualize 2030. >> Yeah. >> So there are 17 sustainable development goals. >> Right, right. >> And so, that's aimed at college students and storytelling to actually address one of these 17 areas. >> We'd love to follow up after the show, talk about some of the projects, since you have a lot of things going on. >> Yeah. >> Use of technology for good really is important right now, that people see that. People want to work for mission-driven organizations. >> Absolutely. >> This becomes a clear criteria. Thanks for coming on. Appreciate it. Thanks for coming on today. Cube coverage here at Google Cloud Next 18. I'm John Furrier with Jeff Frick. Stay with us. More coverage after this short break. (upbeat music)
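For readers curious what the BigQuery ML workflow described above looks like, here is a minimal sketch using the google-cloud-bigquery Python client. The dataset, table, and column names are hypothetical stand-ins, not the Foundation's actual schema, and the model choice is illustrative:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Train a model where the data already lives; no export step needed.
client.query("""
CREATE OR REPLACE MODEL `clinic.alzheimers_risk`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['diagnosis']) AS
SELECT age, apoe4_copies, hippocampal_volume, diagnosis
FROM `clinic.patients`
""").result()

# Quick proof of concept: evaluate inside the warehouse too.
for row in client.query(
        "SELECT * FROM ML.EVALUATE(MODEL `clinic.alzheimers_risk`)"):
    print(dict(row.items()))
```

The point made in the interview is visible here: collaborators who can write SQL in BigQuery can train and sanity-check a model without moving data to a separate ML platform.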
SUMMARY :
Brought to you by Google Cloud Welcome to The Cube, thanks for joining us. So congratulations, VP of Product Marketing. It was a tremendous amount of work. So you guys are starting to show real progress And we work on driving and developing and you look back five years for that population of people for detection of the disease, Okay, talk about the news. And to be able to do what he's doing and the news you guys have here. And what we've realized is, you know, And companies that know how to take advantage Can you talk about, just a little detail You often also need to be able to apply So we're also providing free customer support, And also, legacy brands are changing And without all the, you know, expertise So that's one of the reasons we And we could do it very, very quickly. and disrupt more of the enterprise side And you don't have to send it to different Some of the relationships with doctors, and take away that 80% of the work, Carol, a question to you is All of the above. It's much more around, do you have a problem And in addition, we actually announced this week and storytelling to actually address one of these 17 areas. since you have a lot of things going on. Use of technology for good really is important right now, Thanks for coming on today.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Frick | PERSON | 0.99+ |
Carol Carpenter | PERSON | 0.99+ |
Diane Green | PERSON | 0.99+ |
80% | QUANTITY | 0.99+ |
Ayin Vala | PERSON | 0.99+ |
United Nations | ORGANIZATION | 0.99+ |
Carol | PERSON | 0.99+ |
Google | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Washington D.C. | LOCATION | 0.99+ |
Precision Medicine | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
five years | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
One | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
Jobs Act | TITLE | 0.99+ |
BigQuery | TITLE | 0.99+ |
G Suite | TITLE | 0.99+ |
2018 | DATE | 0.99+ |
17 areas | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Third day | QUANTITY | 0.98+ |
this week | DATE | 0.98+ |
AutoML | TITLE | 0.98+ |
Cloud ML | TITLE | 0.98+ |
up to six months | QUANTITY | 0.98+ |
First | QUANTITY | 0.97+ |
Gmail | TITLE | 0.97+ |
BigQuery ML | TITLE | 0.97+ |
second things | QUANTITY | 0.97+ |
17 sustainable development goals | QUANTITY | 0.96+ |
about a thousand patients | QUANTITY | 0.95+ |
three things | QUANTITY | 0.95+ |
Google Cloud | ORGANIZATION | 0.94+ |
two different problems | QUANTITY | 0.94+ |
Google Earth | TITLE | 0.93+ |
AutoML Vision | TITLE | 0.93+ |
The Cube | ORGANIZATION | 0.93+ |
ML | TITLE | 0.93+ |
Alzheimer | OTHER | 0.91+ |
up to $5,000 a month | QUANTITY | 0.91+ |
day one | QUANTITY | 0.87+ |
couple of years ago | DATE | 0.87+ |
Istio | PERSON | 0.87+ |
first | QUANTITY | 0.85+ |
Vice President | PERSON | 0.85+ |
Google Cloud | TITLE | 0.85+ |
BigQuery ML. | TITLE | 0.85+ |
Next 2018 | DATE | 0.84+ |
one thing | QUANTITY | 0.83+ |
Qwiklab | TITLE | 0.79+ |
2030 | TITLE | 0.78+ |
Cloud | TITLE | 0.76+ |
#GoogleNext18 | EVENT | 0.73+ |
HIPAA | TITLE | 0.72+ |
Data Science Foundation | ORGANIZATION | 0.72+ |
Next 18 | TITLE | 0.7+ |
Cube | ORGANIZATION | 0.67+ |
tons | QUANTITY | 0.64+ |
Next | DATE | 0.63+ |
Furrier | PERSON | 0.59+ |
messages | QUANTITY | 0.58+ |
AI for Good Panel - Precision Medicine - SXSW 2017 - #IntelAI - #theCUBE
>> Welcome to the Intel AI Lounge. Today, we're very excited to share with you the Precision Medicine panel discussion. I'll be moderating the session. My name is Kay Erin. I'm the general manager of Health and Life Sciences at Intel. And I'm excited to share with you these three panelists that we have here. First is John Mattison. He is a chief medical information officer and he is part of Kaiser Permanente. We're very excited to have you here. Thank you, John. >> Thank you. >> We also have Naveen Rao. He is the VP and general manager for the Artificial Intelligence Solutions at Intel. He's also the former CEO of Nervana, which was acquired by Intel. And we also have Bob Rogers, who's the chief data scientist at our AI solutions group. So, why don't we get started with our questions. I'm going to ask each of the panelists to talk, introduce themselves, as well as talk about how they got started with AI. So why don't we start with John? >> Sure, so can you hear me okay in the back? Can you hear? Okay, cool. So, I am a recovering evolutionary biologist and a recovering physician and a recovering geek. And I implemented the health record system for the first and largest region of Kaiser Permanente. And it's pretty obvious that most of the useful data in a health record lies in free text. So I started up a natural language processing team to be able to mine free text about a dozen years ago. So we can do things with that that you can't otherwise get out of health information. I'll give you an example. I read an article online from the New England Journal of Medicine about four years ago that said over half of all people who have had their spleen taken out were not properly vaccinated for a common form of pneumonia, and when your spleen's missing, you must have that vaccine or you die a very sudden death with sepsis. In fact, our medical director in Northern California's father died of that exact same scenario. So, when I read the article, I went to my structured data analytics team and to my natural language processing team and said please show me everybody who has had their spleen taken out and hasn't been appropriately vaccinated, and we ran through about 20 million records in about three hours with the NLP team, and it took about three weeks with the structured data analytics team. That sounds counterintuitive but it actually happened that way. And it's not a competition for time only. It's a competition for quality and sensitivity and specificity. So we were able to identify all of our members who had their spleen taken out, who should've had a pneumococcal vaccine. We vaccinated them and there are a number of people alive today who otherwise would've died absent that capability. So people don't really commonly associate natural language processing with machine learning, but in fact, natural language processing relies heavily on machine learning and is the first really highly successful example of machine learning. So we've done dozens of similar projects, mining free text data in millions of records very efficiently, very effectively. But it really helped advance the quality of care and reduce the cost of care. It's a natural step forward to go into the world of personalized medicine with the arrival of a 100-dollar genome, which is actually what it costs today to do a full genome sequence. Microbiomics, that is the ecosystem of bacteria that are in every organ of the body actually. And we know now that there is a profound influence of what's in our gut on how we metabolize drugs and what diseases we get.
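As a deliberately tiny, rule-based sketch of the kind of query John describes, consider the snippet below. The notes are invented, and production clinical NLP additionally handles negation, synonyms like "spleen removed", section context, and misspellings:

```python
import re

# Invented free-text notes; the real search ran over ~20 million records.
notes = {
    "pt-001": "s/p splenectomy 2009 after trauma. Pneumovax administered 2010.",
    "pt-002": "History of splenectomy for ITP. No vaccination record found.",
    "pt-003": "Cholecystectomy 2015. Spleen intact.",
}

SPLENECTOMY = re.compile(r"\bsplenectomy\b", re.I)
VACCINATED = re.compile(r"\b(pneumovax|pneumococcal vaccin\w*)\b", re.I)

# Flag members whose notes mention a splenectomy but no pneumococcal vaccine.
flagged = [pid for pid, text in notes.items()
           if SPLENECTOMY.search(text) and not VACCINATED.search(text)]
print(flagged)  # ['pt-002'] -> outreach list for vaccination
```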
You can tell in a five year old, whether or not they were born by a vaginal delivery or a C-section delivery by virtue of the bacteria in the gut five years later. So if you look at the complexity of the data that exists in the genome, in the microbiome, in the health record with free text, and you look at all the other sources of data, like the streaming data from my wearable monitor (I'm part of a research study on Precision Medicine out of Stanford), there is a vast amount of disparate data, not to mention all the imaging, that really can collectively produce much more useful information to advance our understanding of science, and to advance our understanding of every individual. And then we can do the mash up of a much broader range of science in health care with a much deeper sense of data from an individual, and to do that with structured questions and structured data is very yesterday. The only way we're going to be able to disambiguate those data and be able to operate on those data in concert and generate real useful answers from the broad array of data types and the massive quantity of data, is to let loose machine learning on all of those data substrates. So my team is moving down that pathway and we're very excited about the future prospects for doing that. >> Yeah, great. I think that's actually some of the things I'm very excited about in the future with some of the technologies we're developing. My background, I started actually being fascinated with computation in biological forms when I was nine. Reading and watching sci-fi, I was kind of a big dork, which I pretty much still am. I haven't really changed a whole lot. Just basically seeing that machines really aren't all that different from biological entities, right? We are biological machines, and kind of understanding how a computer works and how we engineer those things, and trying to pull together concepts that learn from biology into that, has always been a fascination of mine. As an undergrad, I was in the EE, CS world. Even then, I did some research projects around that. I worked in the industry for about 10 years designing chips, microprocessors, various kinds of ASICs, and then actually went back to school, quit my job, got a Ph.D. in neuroscience, computational neuroscience, to specifically understand what's the state of the art. What do we really understand about the brain? And are there concepts that we can take and bring back? Inspiration's always been: we watch birds fly around, we want to figure out how to make something that flies, so we extract those principles, and then build a plane. Don't necessarily want to build a bird. And so Nervana really was the combination of all those experiences, bringing it together, trying to push computation in a new direction. Now, as part of Intel, we can really add a lot of fuel to that fire. I'm super excited to be part of Intel in that the technologies that we were developing can really proliferate and be applied to health care, can be applied to Internet, can be applied to every facet of our lives. And some of the examples that John mentioned are extremely exciting right now, and these are things we can do today. And the generality of these solutions is just really going to hit every part of health care. I mean, from a personal viewpoint, my whole family are MDs. I'm sort of the black sheep of the family. I don't have an MD. And it's always been kind of funny to me that knowledge is concentrated in a few individuals.
Like you have a rare tumor or something like that, you need the guy who knows how to read this MRI. Why? Why is it like that? Can't we encapsulate that knowledge into a computer or into an algorithm, and democratize it? And the reason we couldn't do it is we just didn't know how. And now we're really getting to a point where we know how to do that. And so I want that capability to go to everybody. It'll bring the cost of healthcare down. It'll make all of us healthier. That affects everything about our society. So that's really what's exciting about it to me. >> That's great. So, as you heard, I'm Bob Rogers. I'm chief data scientist for analytics and artificial intelligence solutions at Intel. My mission is to put powerful analytics in the hands of every decision maker, and when I think about Precision Medicine, decision makers are not just doctors and surgeons and nurses, but they're also case managers and care coordinators and probably most of all, patients. So the mission is really to put powerful analytics and AI capabilities in the hands of everyone in health care. It's a very complex world and we need tools to help us navigate it. So my background, I started with a Ph.D. in physics and I was computer modeling stuff falling into supermassive black holes. And there's a lot of applications for that in the real world. No, I'm kidding. (laughter)
How do you know that that person had had a splenectomy and that they needed to get that Pneumovax? You need to be able to search all the data, so we used AI, natural language processing, machine learning, to do that, and then two years ago, I was lucky enough to join Intel and, in the intervening time, people like Naveen actually thawed the AI winter and we're really in a spring of amazing opportunities with AI, not just in health care but everywhere, but of course, the health care applications are incredibly life saving and empowering, so, excited to be here on this stage with you guys. >> I just want to cue off of your comment about the role of physics in AI and health care. So the field of microbiomics that I referred to earlier, bacteria in our gut. There's more bacteria in our gut than there are cells in our body. There's 100 times more DNA in that bacteria than there is in the human genome. And we're now discovering a couple hundred species of bacteria a year that have never been identified under a microscope, just by their DNA. So it turns out the person who really catapulted the study and the science of microbiomics forward was an astrophysicist who did his Ph.D. in Stephen Hawking's lab on the collision of black holes, who then subsequently worked on virtual reality and developed the first supercomputing center. And so how did he get an interest in microbiomics? He has the capacity to do high performance computing and the kind of advanced analytics that are required to look at 100 times the volume of the 3.2 billion base pairs of the human genome that are represented in the bacteria in our gut, and that has unleashed the whole science of microbiomics, which is going to really turn a lot of our assumptions of health and health care upside down. >> That's great, I mean, that's really transformational. So a lot of data. So I just wanted to let the audience know that we want to make this an interactive session, so I'll be asking for questions in a little bit, but I will start off with one question so that you can think about it. So I wanted to ask you, it looks like you've been thinking a lot about AI over the years. And I wanted to understand, even though AI's just really starting in health care, what are some of the new trends or the changes that you've seen in the last few years that'll impact how AI's being used going forward? >> So I'll start off. There was a paper published by a guy by the name of Tegmark at Harvard last summer that, for the first time, explained why neural networks are efficient beyond any mathematical model we predict. And the title of the paper's fun. It's called Deep Learning Versus Cheap Learning. So there were two sort of punchlines of the paper. One is that the reason that mathematics doesn't explain the efficiency of neural networks is because there's a higher order of mathematics called physics. And the physics of the underlying data structures determines how efficiently you can mine those data using machine learning tools, much more so than any mathematical modeling. And so the second takeaway from that paper is that the substrate of the data that you're operating on and the natural physics of those data have inherent levels of complexity that determine whether or not a 12-layer neural net will get you where you want to go really fast, because when you do the modeling, for those math geeks in the audience, it's a factorial.
So if there's 12 layers, there's 12 factorial permutations of different ways you could sequence the learning through those data. When you have 140 layers of a neural net, it's a much, much, much bigger number of permutations and so you end up being hardware-bound. And so, what Max Tegmark basically said is you can determine whether to do deep learning or cheap learning based upon the underlying physics of the data substrates you're operating on, and have a good insight into how to optimize your hardware and software approach to that problem. >> So another way to put that is that neural networks represent the world in the way the world is sort of built. >> Exactly. >> It's kind of hierarchical. It's funny because, sort of in retrospect, like oh yeah, that kind of makes sense. But when you're thinking about it mathematically, we're like well, a neural net can represent any mathematical function; therefore, it's fully general. And that's the way we used to look at it, right? So now we're saying, well, actually decomposing the world into different types of features that are layered upon each other is actually a much more efficient, compact representation of the world, right? I think this is actually, precisely the point of kind of what you're getting at. What's really exciting now is that what we were doing before was sort of building these bespoke solutions for different kinds of data. NLP, natural language processing. There's a whole field, 25 plus years of people devoted to figuring out features, figuring out what structures make sense in this particular context. Those didn't carry over at all to computer vision. Didn't carry over at all to time series analysis. Now, with neural networks, we've seen it at Nervana, and now part of Intel, solving customers' problems. We apply a very similar set of techniques across all these different types of data domains and solve them. All data in the real world seems to be hierarchical. You can decompose it into this hierarchy. And it works really well. Our brains are actually general structures. As a neuroscientist, you can look at different parts of your brain and there are differences. Something that takes in visual information, versus auditory information, is slightly different, but they're much more similar than they are different. So there is something invariant, something very common between all of these different modalities, and we're starting to learn that. And this is extremely exciting to me, trying to understand the biological machine that is a computer, right? We're figuring it out, right? >> One of the really fun things that Ray Chrisfall likes to talk about, and it falls in the genre of biomimicry, is how we actually replicate biologic evolution in our technical solutions. So if you look at, and we're beginning to understand more and more, how real neural nets work in our cerebral cortex: it's sort of a pyramid structure, so that the first pass of a broad base of analytics gets constrained to the next pass, gets constrained to the next pass, which is how information is processed in the brain.
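The factorial comparison made here is easy to check directly, and it is the crux of the hardware-bound point: 12! fits comfortably in a machine word, while 140! is a 242-digit number:

```python
import math

print(math.factorial(12))             # 479001600
print(len(str(math.factorial(140))))  # 242 digits, astronomically larger
```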
So we're discovering increasingly that what we've been evolving towards, in terms of architectures of neural nets, is approximating the architecture of the human cortex, and the more we understand the human cortex, the more insight we get into how to optimize neural nets. So when you think about it, with millions of years of evolution of how the cortex is structured, it shouldn't be a surprise that the optimization protocols, if you will, in our genetic code are profoundly efficient in how they operate. So there's a real role for looking at biologic evolutionary solutions, vis a vis technical solutions, and there's a friend of mine who worked with George Church at Harvard and actually published a book on biomimicry, and they wrote the book completely in DNA, so if all of you have your home DNA decoder, you can actually read the book on your DNA reader, just kidding. >> There's actually a start up I just saw in the-- >> Read-Write DNA, yeah. >> Actually it's a... He writes something. What was it? (response from crowd member) Yeah, they're basically encoding information in DNA as a storage medium. (laughter) The company, right? >> Yeah, that same friend of mine who coauthored that biomimicry book in DNA also did the estimate of the density of information storage. So a cubic centimeter of DNA can store an exabyte of data. I mean that's mind blowing. >> Naveen: It'll be done soon. >> Yeah that's amazing. Also you hit upon a really important point there, that one of the things that's changed is... Well, there are two major things that have changed in my perception from let's say five to 10 years ago, when we were using machine learning. You could use data to train models and make predictions to understand complex phenomena. But they had limited utility, and the challenge was that if I'm trying to build on these things, I had to do a lot of work up front. It was called feature engineering. I had to do a lot of work to figure out, what are the key attributes of that data? What are the 10 or 20 or 100 pieces of information that I should pull out of the data to feed to the model, so that the model can turn it into a predictive machine? And so, what's really exciting about the new generation of machine learning technology, and particularly deep learning, is that it can actually learn those features from example data without you having to do any preprogramming. That's why Naveen is saying you can take the same sort of overall approach and apply it to a bunch of different problems. Because you're not having to fine tune those features. So at the end of the day, the two things that have changed to really enable this evolution are access to more data, and I'd be curious to hear from you where you're seeing data come from, what are the strategies around that. So access to data, and I'm talking millions of examples. So 10,000 examples most times isn't going to cut it. But millions of examples will do it. And then, the other piece is the computing capability to actually take millions of examples and optimize this algorithm in a single lifetime. I mean, back in '91, when I started, we literally would have thousands of examples and it would take overnight to run the thing. So now in the world of millions, and you're putting together all of these combinations, the computing has changed a lot. I know you've made some revolutionary advances in that. But I'm curious about the data. Where are you seeing interesting sources of data for analytics?
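Bob's feature-engineering contrast can be demonstrated end to end even on a toy dataset. Here is a minimal scikit-learn sketch, with two invented hand-crafted features versus a small network that learns its own features from raw pixels (scores are indicative and will vary slightly by run):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # 8x8 digit images, flattened
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def engineer(X):
    """Hand-crafted features: overall ink and top-half ink."""
    imgs = X.reshape(-1, 8, 8)
    return np.c_[imgs.mean(axis=(1, 2)), imgs[:, :4, :].mean(axis=(1, 2))]

lr = LogisticRegression(max_iter=1000).fit(engineer(Xtr), ytr)
print("engineered features:", lr.score(engineer(Xte), yte))   # mediocre

mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                    random_state=0).fit(Xtr, ytr)
print("learned features:   ", mlp.score(Xte, yte))            # much better
```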
>> So I do some work in the genomics space, and there are more viable permutations of the human genome than there are people who have ever walked the face of the earth. And the polygenic determination of phenotypic expression, what our genome does to us in our physical experience in health and disease, is determined by many, many genes and the interaction of many, many genes and how they are up- and down-regulated. And the complexity of disambiguating which 27 genes are affecting your diabetes and how they are up- and down-regulated by different interventions is going to be different than his. It's going to be different than his. And we already know that there are four or five distinct genetic subtypes of type II diabetes. So physicians still think there's one disease called type II diabetes. There's actually at least four or five genetic variants that have been identified. And so, when you start thinking about disambiguating, particularly when we don't know what 95 percent of DNA does still, what actually is the underlying cause, it will require this massive capability of developing these feature vectors, sometimes intuiting it, if you will, from the data itself. And other times, taking what's known knowledge to develop some of those feature vectors, and be able to really understand the interaction of the genome and the microbiome and the phenotypic data. So the complexity is high, and because the variation complexity is high, you do need these massive numbers. Now I'm going to make a very personal pitch here. So forgive me, but if any of you have any role in policy at all, let me tell you what's happening right now. The Genetic Information Nondiscrimination Act, so-called GINA, written by a friend of mine, passed a number of years ago, says that no one can be discriminated against for health insurance based upon their genomic information. That's cool. That should allow all of you to feel comfortable donating your DNA to science, right? Wrong. You are 100% unprotected from discrimination for life insurance, long term care and disability. And it's being practiced legally today, and there's legislation in the House, in markup right now, to completely undermine the existing GINA legislation and say that whenever there's another applicable statute like HIPAA, GINA is irrelevant and none of the fines and penalties are applicable at all. So we need a ton of data to be able to operate on. We will not be getting a ton of data to operate on until we have the kind of protection we need to tell people: you can trust us, you can give us your data, you will not be subject to discrimination. And that is not the case today. And it's being further undermined. So I want to make a plea to any of you that have any policy influence to go after that, because we need this data to help the understanding of human health and disease, and we're not going to get it when people look behind the curtain and see that discrimination is occurring today based upon genetic information. >> Well, I don't like the idea of being discriminated against based on my DNA. Especially given how little we actually know. There's so much complexity in how these things unfold in our own bodies, that I think anything that's being done is probably childishly immature and oversimplifying. So it's pretty rough. >> I guess the translation here is that we're all unique. It's not just a Disney movie. (laughter) We really are.
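A toy sketch of the polygenic point John raises: a risk score that simply sums per-variant effect sizes weighted by allele dosage. The variant IDs and effect sizes below are made up for illustration; real models cover dozens of interacting loci and their regulatory state:

```python
# Hypothetical per-variant effects (log odds ratios); not real loci.
effect_sizes = {"rs0001": 0.12, "rs0002": -0.08, "rs0003": 0.21}

def polygenic_score(genotype):
    """genotype maps variant -> risk-allele dosage (0, 1, or 2)."""
    return sum(beta * genotype.get(snp, 0)
               for snp, beta in effect_sizes.items())

patient_a = {"rs0001": 2, "rs0002": 0, "rs0003": 1}
patient_b = {"rs0001": 0, "rs0002": 2, "rs0003": 0}
print(polygenic_score(patient_a))  # 0.45 -> relatively higher risk
print(polygenic_score(patient_b))  # -0.16
```

An additive score like this is exactly what breaks down when genes interact, which is why the panel keeps returning to machine learning over these data.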
And I think one of the strengths that I'm seeing, kind of going back to the original point, of these new techniques, is it's going across different data types. It will actually allow us to learn more about the uniqueness of the individual. It's not going to be just from one data source. We're collecting data from many different modalities. We're collecting behavioral data from wearables. We're collecting things from scans, from blood tests, from the genome, from many different sources. The ability to integrate those into a unified picture, that's the important thing that we're getting toward now. That's what I think is going to be super exciting here. Think about it, right. I can tell you to visualize a coin, right? You can visualize a coin. Not only do you visualize it. You also know what it feels like. You know how heavy it is. You have a mental model of that from many different perspectives. And if I take away one of those senses, you can still identify the coin, right? If I tell you to put your hand in your pocket, and pick out a coin, you probably can do that with 100% reliability. And that's because we have this generalized capability to build a model of something in the world. And that's what we need to do for individuals: actually take all these different data sources and come up with a model for an individual, and you can actually then say, what drug works best on this person? What treatment works best? It's going to get better with time. It's not going to be perfect, because this is what a doctor does, right? A doctor who's very experienced, you're a practicing physician, right? Back me up here. That's what you're doing. You basically have some categories. You're taking information from the patient when you talk with them, and you're building a mental model. And you apply what you know can work on that patient, right? >> I don't have clinic hours anymore, but I do take care of many friends and family. (laughter) >> You used to, you used to. >> I practiced for many years before I became a full-time geek. >> I thought you were a recovering geek. >> I am. (laughter) I do more policy now. >> He's off the wagon. >> I just want to take a moment and see if there's anyone from the audience who would like to ask, oh. Go ahead. >> We've got a mic here, hang on one second. >> I have tons and tons of questions. (crosstalk) Yes, so first of all, the microbiome and the genome are really complex. You already hit on that. Yet most of the studies we do are small scale and we have difficulty repeating them from study to study. How are we going to reconcile all that, and what are some of the technical hurdles to get to the vision that you want? >> So primarily, it's been the cost of sequencing. Up until a year ago, it was $1000, true cost. Now it's $100, true cost. And so that barrier is going to enable fairly pervasive testing. It's not a real competitive market because there's one sequencer that is way ahead of everybody else. So the price is not $100 yet. The cost is below $100. So as soon as there's competition to drive the cost down, and hopefully, as soon as we all have the protection we need against discrimination, as I mentioned earlier, then we will have large enough sample sizes. And so, it is our expectation that we will be able to pool data from local sources. I chair the e-health work group at the Global Alliance for Genomics and Health, which is working on this very issue.
And rather than pooling all the data into a single, common repository, the strategy (and we're developing our five-year plan in a month in London) is to have a federation of essentially credentialed data enclaves. That's a formal method. HHS already does that: you can get credentialed to search all the data that Medicare has on people, deidentified according to HIPAA. So we want to provide the same kind of service, with appropriate consent, at an international scale. And there are a lot of nations that are talking very much about data nationality, so that you can't export data. So this approach of a federated model to get at data from all the countries is important. The other thing is that blockchain technology is going to be very profoundly useful in this context. So David Haussler of UC Santa Cruz is right now working on a protocol using an open blockchain, a public ledger, where you can publish. So for any typical cancer, you may have a half dozen of what are called somatic variants. Cancer is a genetic disease, so what has mutated to cause it to behave like a cancer? And if we look at those biologically active somatic variants and publish them on a blockchain that's public, there's not enough data there to reidentify the patient. But if I'm a physician treating a woman with breast cancer, rather than say what's the protocol for treating a 50-year-old woman with this cell type of cancer, I can say show me all the people in the world who have had this cancer at the age of 50, with these exact six somatic variants. Find the 200 people worldwide with that. Ask them for consent through a secondary mechanism to donate everything about their medical record, pool that information for the cohort of 200 that exactly resembles the one sitting in front of me, and find out, of the 200 ways they were treated, what got the best results. And so, that's the kind of future where a distributed, federated architecture will allow us to query and obtain a very, very relevant cohort, so we can basically be treating patients like mine, sitting right in front of me. Same thing applies for establishing research cohorts. There's some very exciting stuff at the convergence of big data analytics, machine learning, and blockchain. >> And this is an area that I'm really excited about and I think we're excited about generally at Intel. We actually have something called the Collaborative Cancer Cloud, which is this kind of federated model. We have three different academic research centers. Each of them has a very sizable and valuable collection of genomic data with phenotypic annotations. So you know, pancreatic cancer, colon cancer, et cetera, and we've actually built a secure computing architecture that can allow a person who's given the right permissions by those organizations to ask a specific question of specific data without ever sharing the data. So the idea is: my data's really important to me. It's valuable. I want us to be able to do a study that gets the numbers from the 20 pancreatic cancer patients in my cohort up to the 80 that we have in the whole group. But I can't do that if I'm going to just spill my data all over the world. And there are HIPAA and compliance reasons for that. There are business reasons for that. So what we've built at Intel is this platform that allows you to do different kinds of queries on this genetic data, and reach out to these different sources without sharing it. And then there's the work that I'm really involved in right now, and that I'm extremely excited about.
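A compact sketch of the two mechanisms just described: a public ledger of somatic-variant fingerprints for cohort discovery, and a federated query where each site returns only aggregates. Everything here (the ledger, site names, variants, outcomes) is a toy stand-in for what Haussler's protocol and the Collaborative Cancer Cloud actually implement:

```python
import hashlib

# --- Toy public ledger of somatic-variant fingerprints ----------------
# Each entry: an order-independent hash of a tumor's biologically active
# somatic variants, plus an opaque handle for the secondary consent step.
ledger = []

def fingerprint(variants):
    return hashlib.sha256("|".join(sorted(variants)).encode()).hexdigest()

def publish(variants, handle):
    ledger.append({"fp": fingerprint(variants), "handle": handle})

def find_cohort(variants):
    fp = fingerprint(variants)
    return [e["handle"] for e in ledger if e["fp"] == fp]

publish({"TP53:R175H", "PIK3CA:H1047R"}, "site-03/case-0191")
publish({"TP53:R175H", "PIK3CA:H1047R"}, "site-42/case-0007")
publish({"KRAS:G12D"}, "site-07/case-0055")

# --- Federated outcome query: only aggregates leave each enclave ------
SITE_RECORDS = {  # in reality these never leave each site
    "site-03": [{"case": "case-0191", "rx": "drug A", "ok": True}],
    "site-42": [{"case": "case-0007", "rx": "drug B", "ok": False}],
}

def local_counts(site, cases):
    counts = {}
    for r in SITE_RECORDS.get(site, []):
        if r["case"] in cases:
            good, n = counts.get(r["rx"], (0, 0))
            counts[r["rx"]] = (good + int(r["ok"]), n + 1)
    return counts

def federated_counts(handles):
    merged = {}
    for h in handles:
        site, case = h.split("/")
        for rx, (good, n) in local_counts(site, {case}).items():
            a, b = merged.get(rx, (0, 0))
            merged[rx] = (a + good, b + n)
    return merged  # treatment -> (responders, treated)

cohort = find_cohort({"PIK3CA:H1047R", "TP53:R175H"})
print(federated_counts(cohort))  # {'drug A': (1, 1), 'drug B': (0, 1)}
```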
This also touches on something that both of you said: it's not sufficient to just get the genome sequences. You also have to have the phenotypic data. You have to know what cancer they've had. You have to know that they've been treated with this drug and they've survived for three months, or that they had this side effect. That clinical data also needs to be put together. It's owned by other organizations, right? Other hospitals. So the broader generalization of the Collaborative Cancer Cloud is something we call the data exchange. And it's a misnomer in the sense that we're not actually exchanging data. We're doing analytics on aggregated data sets without sharing it. But it really opens up a world where we can have huge populations and big enough amounts of data to actually train these models and draw the threads in. Of course, that really then hits home for the techniques that Nervana is bringing to the table, and of course-- >> Stanford's one of your academic medical centers? >> Not for that Collaborative Cancer Cloud. >> The reason I mentioned Stanford is because the reason I'm wearing this FitBit is that I'm a research subject in Mike Snyder's (he's the chair of genetics at Stanford) IPOP study, the integrative personal omics profile. So I was fully sequenced five years ago and I give four full microbiomes: my gut, my mouth, my nose, my ears. Every three months, and I've done that for four years now. And about a pint of blood. And so, to your question of the density of data: a lot of the problem with applying these techniques to health care data is that it's basically a sparse matrix, and there's a lot of discontinuities in what you can find and operate on. So what Mike is doing with the IPOP study is much the same as you described. Creating a highly dense longitudinal set of data that will help us mitigate the sparse matrix problem. (low volume response from audience member) Pardon me. >> What's that? (low volume response) (laughter) >> Right, okay. >> John: Lost the stool sample. That's got to be a new one I've heard now. >> Okay, well, thank you so much. That was a great question. So I'm going to repeat this and ask if there's another question. You want to go ahead? >> Hi, thanks. So I'm a journalist and I report a lot on these neural networks: a system that's better at reading mammograms than your human radiologists, or a system that's better at predicting which patients in the ICU will get sepsis. These sort of fascinating academic studies that I don't really see being translated very quickly into actual hospitals or clinical practice. Seems like a lot of the problems are regulatory, or liability, or human factors, but how do you get past that and really make this stuff practical? >> I think there's a few things that we can do there, and I think the proof points of the technology are really important to start with in this specific space. In other places, sometimes, you can start with other things. But here, there's a real confidence problem when it comes to health care, and for good reason. We have doctors trained for many, many years: school, and then residencies and other kinds of training. Because we are really, really conservative with health care. So we need to make sure that the technology's well beyond just the paper, right? These papers are proof points. They get people interested. They even fuel entire grant cycles sometimes. And that's what we need to happen. It's just an inherent problem; it's going to take a while.
To get those things to a point where it's like, well, I really do trust what this is saying, and I really think it's okay to now start integrating that into our standard of care. I think that's where you're seeing it. It's frustrating for all of us, believe me. I mean, like I said, I think personally one of the biggest things I want to have an impact on, like when I go to my grave, is that we used machine learning to improve health care. We really do feel that way. But it's just not something we can do very quickly, and as a business person, I don't actually look at those use cases right away because I know the cycle is just going to be longer. >> So to your point, the FDA, for about four years now, has understood that the process that has been given to them by their board of directors, otherwise known as Congress, is broken. And so they've been very actively seeking new models of regulation, and what's really forcing their hand is regulation of devices and software, because, in many cases, there are black box aspects of that, and there's a black box aspect to machine learning. Intel and others are making inroads into providing some sort of traceability and transparency into what happens in that black box, rather than saying, overall we get better results but once in a while we kill somebody. Right? So there is progress being made on that front. And there's a concept that I like to use. Everyone knows Ray Kurzweil's book The Singularity Is Near? Well, I like to think that the diadarity is near. And the diadarity is where you have human transparency into what goes on in the black box, and so maybe Bob, you want to speak a little bit about... You mentioned, in a prior discussion, that there's some work going on at Intel there. >> Yeah, absolutely. So we're working with a number of groups to really build tools that allow us... In fact Naveen probably can talk in even more detail than I can, but there are tools that allow us to actually interrogate machine learning and deep learning systems to understand not only how they respond to a wide variety of situations but also, where are there biases? I mean, one of the things that's shocking is that if you look at the clinical studies that our drug safety rules are based on, 50-year-old white guys are the peak of that distribution, which I don't see any problem with, but some of you out there might not like that if you're taking a drug. So yeah, we want to understand what the biases in the data are, right? And so, there are some new technologies. There are actually some very interesting data-generative technologies, and this is something I'm also curious what Naveen has to say about, where you can generate, from small sets of observed data, much broader sets of varied data that help probe and fill in your training for some of these systems that are very data dependent. So that takes us to a place where we're going to start to see deep learning systems generating data to train other deep learning systems. And they start to sort of go back and forth, and you start to have some very nice ways to, at least, expose the weaknesses of these underlying technologies. >> And that feeds back to your question about regulatory oversight of this. And there's the fascinating, but little known, origin of why very few women are in clinical studies. Thalidomide causes birth defects. So rather than say pregnant women can't be enrolled in drug trials, they said any woman who is at risk of getting pregnant cannot be enrolled.
So there was actually a scientifically meritorious argument back in the day, when they really didn't know what was going to happen post-thalidomide. So it turns out that the adverse, unintended consequence of that decision was that we don't have data on women, and we know in certain drugs, like Xanax, that the metabolism is so much slower that the typical dosing of Xanax for women should be less than half of that for men. And a lot of women have had very serious adverse effects by virtue of the fact that they weren't studied. So the point I want to illustrate with that is that regulatory cycles... People have known for a long time that that was a bad way of doing regulation. It should be changed. It's only recently getting changed in any meaningful way. So regulatory cycles and legislative cycles are incredibly slow. The rate of growth in technology is exponential. And so there's an impedance mismatch between the cycle time for regulation and the cycle time for innovation. And what we need to do... I'm working with the FDA. I've done four workshops with them on this very issue. They recognize that they need to completely revitalize their process. They're very interested in doing it. They're not resisting it. People think, oh, they're bad, the FDA, they're resisting. Trust me, there's nobody on the planet who wants to revise these review processes more than the FDA itself. And so they're looking at models, and what I recommended is global crowdsourcing, where the FDA could shift from a regulatory role to one of doing two things: assuring that the people who do their reviews are competent, and assuring that their conflicts of interest are managed, because if you don't have a conflict of interest in this very interconnected space, you probably don't know enough to be a reviewer. So there has to be a way to manage the conflict of interest, and I think those are some of the key points that the FDA is wrestling with, because there are type one and type two errors. If you underregulate, you end up with another thalidomide and people born without fingers. If you overregulate, you prevent life-saving drugs from coming to market. So striking that balance across all these different technologies is extraordinarily difficult. If it were easy, the FDA would've done it four years ago. It's very complicated. >> Jumping on that question, all three of you are in some ways entrepreneurs, right? Within your organization or having started companies. And I think it would be good to talk a little bit about the business opportunity here, where there's a huge ecosystem in health care: different segments, biotech, pharma, insurance payers, et cetera. Where do you see the ripe opportunity, or the industry ready to really take this on and make AI the competitive advantage? >> Well, the last question also included, why aren't you using the result of the sepsis detection? We do. There were six or seven published ways of doing it. We took our own data, looked at it, we found a way that was superior to all the published methods, and we apply that today, so we are actually using that technology to change clinical outcomes. As far as where the opportunities are... So it's interesting.
Because if you look at what's going to be here in three years, we're not going to be using those big data analytics models for sepsis that we are deploying today, because we're just going to be getting a tiny aliquot of blood, looking for the DNA or RNA of any potential infection, and we won't have to infer that there's a bacterial infection from all these other ancillary, secondary phenomena. We'll see if the DNA's in the blood. So things are changing so fast that the opportunities people need to look for are the generalizable and sustainable kinds of wins that are going to lead to a revenue cycle that justifies, in a venture capital world, investing. So there are a lot of interesting opportunities in the space. But I think some of the biggest opportunities relate to what Bob has talked about: bringing many different disparate data sources together and really looking for things that are not comprehensible in the human brain or in traditional analytic models. >> I think we also have to look a little bit beyond direct care. We're talking about policy and how we set up standards, these kinds of things. That's one area. That's going to drive innovation forward. I completely agree with that. Direct care is one piece. How do we scale out many of the knowledge kinds of things that are embedded in one person's head and get them out to the world, democratize that? Then there's also development of the underlying technologies of medicine, right? Pharmaceuticals. The traditional way that pharmaceuticals are developed is actually kind of funny, right? A lot of it was started just by chance. Penicillin, a very famous story, right? It's not that different today, unfortunately. It's conceptually very similar. Now we've got more science behind it. We talk about domains and interactions, these kinds of things, but fundamentally, the problem is what we in computer science call NP-hard: it's too difficult to model. You can't solve it analytically. And this is true for all these kinds of natural sorts of problems, by the way. And so there's a whole field around this, molecular dynamics and modeling these sorts of things, that is actually being driven forward by these AI techniques. Because it turns out, our brain doesn't do magic. It actually doesn't solve these problems. It approximates them very well. And experience allows you to approximate them better and better. Actually, it goes a little bit to what you were saying before, like simulations and forming your own networks and training off each other. There are these emergent dynamics. You can simulate steps of physics and you come up with a system that's much too complicated to ever solve. Three pool balls on a table is one such system. It seems pretty simple. You know how to model that, but it actually turns out you can't predict where a ball's going to be once you inject some energy into that table. So something that simple is already too complex. So neural network techniques actually allow us to start making those tractable, these NP-hard problems. And things like molecular dynamics, and actually understanding how different medications and genetics will interact with each other, is something we're seeing today. And so I think there's a huge opportunity there. We've actually worked with customers in this space. And I'm seeing it. Like Roche is acquiring a few different companies in this space. They really want to drive it forward, using big data to drive drug development. It's kind of counterintuitive.
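As a toy illustration of that point about approximating analytically unsolvable systems: fit a small network to observed state transitions of a chaotic system instead of solving it. The logistic map below is a stand-in for the pool table, and the tiny model is only a sketch of the technique, nothing like a real molecular-dynamics surrogate:

```python
# Learn to approximate a chaotic system from data rather than solve it.
import numpy as np
from sklearn.neural_network import MLPRegressor

def logistic_map(x, r=3.9):            # chaotic for r near 4
    return r * x * (1.0 - x)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 5000)        # observed states
y = logistic_map(x)                    # observed next states

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(x.reshape(-1, 1), y)         # the surrogate: data in, dynamics out

test = np.array([[0.2], [0.5], [0.8]])
print(model.predict(test))             # close to the true next states
print(logistic_map(test.ravel()))
```

The same shape of approach, at vastly larger scale, is what makes learned surrogates attractive for molecular dynamics and drug-interaction modeling.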
I never would've thought it had I not seen it myself. >> And there's a big related challenge. Because in personalized medicine, there are smaller and smaller cohorts of people who will benefit from a drug that still takes two billion dollars on average to develop. That is unsustainable. So there's an economic imperative of overcoming the cost and the cycle time for drug development. >> I want to take a go at this question a little bit differently, thinking about not so much where the industry segments are that can benefit from AI, but what the kinds of applications are that I think are most impactful. So if this is what a skilled surgeon needs to know at a particular time to care properly for a patient, this area here is where most surgeons are. They are close to the maximum knowledge and ability to assimilate as they can be. So it's possible to build complex AI that can pick up on that one little thing and move them up to here. But it's not a gigantic accelerator, an amplifier of their capability. But think about other actors in health care. I mentioned a couple of them earlier. Who do you think the least trained actor in health care is? >> John: Patients. >> Yes, the patients. The patients are really very poorly trained, including me. I'm abysmal at figuring out who to call and where to go. >> Naveen: You know as much as the doctor, right? (laughing) >> Yeah, that's right. >> My doctor friends always hate that. Googling your diagnosis, right? >> Yeah, Dr. Google knows. So the opportunities that I see that are really, really exciting are when you take an AI agent, sometimes I like to call it a contextually intelligent agent, or a CIA, and apply it to a problem where a patient has a complex future ahead of them that they need help navigating, and you use the AI to help them work through it. Post-operative: you've got PT, you've got drugs, you've got to be looking for side effects. An agent can actually help you navigate. It's like your own personal GPS for health care. So it's giving you the information that you need, about you, for your care. That's my definition of Precision Medicine. And it can include genomics, of course. But it's much bigger. It's that broader picture, and I think that that sort of agent way of thinking about things, and filling in the gaps where there's less training and more opportunity, is very exciting. >> Great start-up idea right there, by the way. >> Oh yes, right. We'll meet you all out back for the next start-up. >> I had a conversation with the head of the American Association of Medical Specialties just a couple of days ago. And what she was saying, and I'm aware of this phenomenon, is that all of the medical specialists are saying, you're killing us with these stupid board recertification trivia tests that you're giving us. So if you're a cardiologist, you have to remember something that happens in one in 10 million people, right? And they're saying that's irrelevant now, because we've got advanced decision support coming. We have these kinds of analytics coming. Precisely what you're saying. So it's human augmentation of decision support that is coming at blazing speed towards health care. So in that context, it's much more important that you have a basic foundation, you know how to think, you know how to learn, and you know where to look. So we're going to be human-augmented learning systems much more so than in the past. And so the whole recertification process is being revised right now. (inaudible audience member speaking) Speak up, yeah.
(person speaking) >> What makes it fathomable is that you can-- (audience member interjects inaudibly) >> Sure. She was saying that our brain is really complex and large, and even our brains don't know how our brains work, so... are there ways to-- >> What hope do we have, kind of thing? (laughter) >> It's a metaphysical question. >> It circles all the way down, exactly. It's a great quote. I mean basically, you can decompose every system. Every complicated system can be decomposed into simpler, emergent properties. You lose something perhaps with each of those, but you get enough to actually understand most of the behavior. And that's really how we understand the world. And that's what we've learned in the last few years that neural network techniques can allow us to do. And that's why our brain can understand our brain. (laughing) >> Yeah, I'd recommend reading Ray Kurzweil's last book, because he addresses that issue in there very elegantly. >> Yeah, we're seeing some really interesting technologies emerging right now where neural network systems are actually connecting other neural network systems in networks. You can see some very compelling behavior, because one of the ways I like to distinguish AI versus traditional analytics is that we used to have question-answering systems. I used to query a database and create a report to find out how many widgets I sold. Then I started using regression or machine learning to classify complex situations: this is one of these and that's one of those. And then as we've moved more recently, we've got these AI-like capabilities, like being able to recognize that there's a kitty in the photograph. But if you think about it, if I were to show you a photograph that happened to have a cat in it, and I said, what's the answer, you'd look at me like, what are you talking about? I have to know the question. So where we're cresting with these connected sets of neural systems, and with AI in general, is that the systems are starting to be able to understand, from the context, what the question is. Why would I be asking about this picture? I'm a marketing guy, and I'm curious about what Legos are in the thing, or what kind of cat it is. So it's being able to ask a question, and then take these question-answering systems and actually apply them. It's this ability to understand context and ask questions that we're starting to see emerge from these more complex hierarchical neural systems. >> There's a person dying to ask a question. >> Sorry. You have hit on several different topics that all coalesce together. You mentioned personalized models. You mentioned AI agents that could help you as you're going through a transition period. You mentioned data sources, especially across long time periods. Who today has access to enough data to make meaningful progress on that, not just when you're dealing with an issue, but day-to-day improvement of your life and your health? >> Go ahead, great question. >> That was a great question. And I don't think we have a good answer to it. (laughter) I'm sure John does. Well, I think every large healthcare organization and various healthcare consortiums are working very hard to achieve that goal. The problem remains in creating semantic interoperability. So I spent a lot of my career working on semantic interoperability.
And the problem is that if you don't have well-defined, or self-defined, data, and if you don't have well-defined and documented metadata, and you start operating on it, it's real easy to reach false conclusions, and I can give you a classic example. It's well known, with hundreds of studies looking at it, when you should give an antibiotic before surgery and how effective it is in preventing a post-op infection. Simple question, right? So most of the literature, done prospectively, was done in institutions where they had small sample sizes. So if you pool that, you get a little bit more noise, but you get a more confirming answer. What was done at a very large institution, not my own... I won't name them for obvious reasons, but they pooled lots of data from lots of different hospitals, where the data definitions and the metadata were different. Two examples. When did they indicate the antibiotic was given? Was it when it was ordered, dispensed from the pharmacy, delivered to the floor, brought to the bedside, put in the IV, or when the IV starts flowing? Different hospitals used a different metric of when it started. When did surgery occur? When they were wheeled into the OR, when they were prepped and draped, when the first incision occurred? All different. And they concluded, quite dramatically, that it didn't matter when you gave the pre-op antibiotic and whether or not you get a post-op infection. And everybody who was intimate with the prior studies just completely ignored and discounted that study. It was wrong. And it was wrong because of the lack of commonality and normalization of data definitions and metadata definitions. So because of that, this problem is much more challenging than you would think. If it were as easy as putting all these data together, normalizing and operating on them, we would've done that a long time ago. Semantic interoperability remains a big problem, and we have a lot of heavy lifting ahead of us. I'm working with the Global Alliance for Genomics and Health, for example. There are like 30 different major ontologies for how you represent genetic information. And different institutions are using different ones, in different ways, in different versions, over different periods of time. That's a mess. >> Are all those issues applicable when you're talking about a personalized data set versus a population? >> Well, N of 1 studies and single-subject research is an emerging field of statistics. So there are some really interesting new models, like step-wedge analytics, for doing that on small sample sizes, recruiting people asynchronously. There are single-subject research statistics where you compare yourself with yourself at a different point in time, in a different context. So there are emerging statistics to do that, and as long as you use the same sensor, you won't have a problem. But people are changing their remote sensors and you're getting different data. It's measured in different ways, with different sensors, at different normalization and different calibration. So yes, it even persists in the N of 1 environment. >> Yeah, you have to get started with a large N that you can apply to the N of 1. I'm actually going to attack your question from a different perspective. So who has the data? The millions of examples to train a deep learning system from scratch? It's a very limited set right now. Technologies such as the Collaborative Cancer Cloud and the Data Exchange are definitely impacting that and creating larger and larger sets of critical mass.
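The antibiotic example is easy to reproduce in miniature: if each hospital timestamps "antibiotic given" at a different workflow event, naive pooling mixes definitions, while normalizing against documented metadata restores comparability. The event names and offsets below are invented for illustration:

```python
# Three hospitals log "antibiotic given" at different workflow events.
# Minutes between each logged event and the drug actually flowing:
EVENT_OFFSET_MIN = {
    "ordered": 90,       # hospital A logs the order time
    "dispensed": 45,     # hospital B logs pharmacy dispensing
    "iv_started": 0,     # hospital C logs the IV actually starting
}

def normalize(record):
    """Map a site-specific timestamp to the canonical 'IV started' time."""
    return record["minutes_before_incision"] - EVENT_OFFSET_MIN[record["event"]]

records = [
    {"site": "A", "event": "ordered", "minutes_before_incision": 120},
    {"site": "B", "event": "dispensed", "minutes_before_incision": 75},
    {"site": "C", "event": "iv_started", "minutes_before_incision": 30},
]

raw = [r["minutes_before_incision"] for r in records]   # 120, 75, 30: looks wildly variable
canonical = [normalize(r) for r in records]             # 30, 30, 30: identical practice
print(raw, canonical)
```

Pooled on the raw column, timing appears to have no consistent effect; pooled on the normalized column, all three sites turn out to be doing exactly the same thing.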
And again, that's notwithstanding the very challenging semantic interoperability questions. But there's another opportunity. Kay asked about what's changed recently. One of the things that's changed in deep learning is that we now have modules that have been trained on massive data sets that are actually very smart at certain kinds of problems. So, for instance, you can go online and find deep learning systems that can recognize, better than humans, whether there's a cat, dog, motorcycle, or house in a photograph. >> From Intel, open source. >> Yes, from Intel, open source. So here's what happens next. Most of that deep learning system is very expressive. That combinatorial mixture of features that Naveen was talking about: when you have all these layers, there are a lot of features there. They're actually very general to images, not just finding cats, dogs, trees. So what happens is you can do something called transfer learning, where you take a small or modest data set and actually reoptimize the system for your specific problem very, very quickly. And so we're starting to see a place where, on one end of the spectrum, we're getting access to the computing capabilities and the data to build these incredibly expressive deep learning systems, and over here on the right, we're able to start using those deep learning systems to solve custom versions of problems. Just last weekend or two weekends ago, in 20 minutes, I was able to take one of those general systems and create one that could recognize all different kinds of flowers: very subtle distinctions that I would never be able to know on my own. But I happened to be able to get the data set, and literally, it took 20 minutes and I have this vision system that I could now use for a specific problem. I think that's incredibly profound, and I think we're going to see this spectrum of, wherever you are in your ability to get data and to define problems and to put hardware in place, really neat customizations and a proliferation of applications of this kind of technology. >> So one other trend I'm very hopeful about... So this is a hard problem clearly, right? I mean, getting data together, formatting it from many different sources, it's one of these things that's probably never going to happen perfectly. But one trend that is extremely hopeful to me is the fact that the cost of gathering data has precipitously dropped. Building that thing is almost free these days. I can write software and put it on 100 million cell phones in an instant. You couldn't do that five years ago, even, right? And so, the amount of information we can gain from a cell phone today has gone up. We have more sensors. We're bringing online more sensors. People have Apple Watches and they're sending blood data back to the phone, so once we can actually start gathering more data, and do it cheaper and cheaper, it actually doesn't matter where the data is. I can write my own app. I can gather that data and I can start driving the correct inferences, or useful inferences, back to you. So that is a positive trend here, and personally, I think that's how we're going to solve it: by gathering from that many different sources cheaply. >> Hi, my name is Pete. I've very much enjoyed the conversation so far, but I was hoping perhaps to bring a little bit more focus into Precision Medicine and ask two questions. Number one, how have you applied these AI technologies, which are emerging so rapidly, to natural language processing?
I'm particularly interested in, if you look at things like Amazon Echo or Siri, or the other voice recognition systems that are based on AI, they've just become incredibly accurate, and I'm interested in specifics about how I might use technology like that in medicine. So where would I find a medical nomenclature, and perhaps some reference to a back end that works that way? And the second thing is, what specifically is Intel doing, or making available? You mentioned some open source stuff on cats and dogs and so on, but I'm the doc, so I'm looking at the medical side of that. What are you guys providing that would allow those of us who are kind of geeks on the software side, as well as being docs, to experiment a little bit more thoroughly with AI technology? Google has a free AI toolkit. Several other people have come out with free AI toolkits in order to accelerate that. There's special hardware now, with graphics and different processors hitting amazing speeds. And so I was wondering, where do I go in Intel to find some of those tools, and perhaps learn a bit about the fantastic work that you guys are already doing at Kaiser? >> Let me take that first part, and then we'll be able to talk about the MD part. So in terms of technology, this is what's extremely exciting now about what Intel is focusing on. We're providing those pieces so you can actually assemble and build the application. How you build that application specifically for MDs and the use cases is up to you, or to whoever is building the application. But we're going to power that technology from multiple perspectives. So Intel is already the main force behind the data center, right? Cloud computing, all of this, is already Intel. We're making that extremely amenable to AI and setting the standard for AI in the future, and we can do that through a number of different mechanisms. For somebody who wants to develop an application quickly, we have hosted solutions. Intel Nervana is kind of the brand for these kinds of things. Hosted solutions will get you going very quickly. Once you get to a certain level of scale, where costs start making more sense, things can be bought on premises. We're supplying that. We're also supplying software that makes that transition essentially free. Then, taking those solutions that you develop in the cloud, or develop in the data center, you can actually deploy them on device. You want to write something on your smartphone or PC or whatever; we're actually providing those hooks as well. So we want to make it very easy for developers to take these pieces and actually build solutions out of them quickly, so you probably don't even care what hardware it's running on. You're like, here's my data set, this is what I want to do. Train it, make it work. Go fast. Make my developers efficient. That's all you care about, right? And that's what we're doing. We're taking it from that point of, how do we best do that? We're going to provide those technologies. In the next couple of years, there's going to be a lot of new stuff coming from Intel. >> Do you want to talk about the AI Academy as well? >> Yeah, that's a great segue there. In addition to this, we have an entire set of tutorials and other online resources and things we're going to be bringing into the academic world for people to get going quickly. So that's not just enabling them on our tools, but also just general concepts. What is a neural network? How does it work? How does it train?
All of these things are available now, and we've made a nice, digestible class format that you can actually go and play with. >> Let me give a couple of quick answers in addition to the great answers already. So you're asking, why can't we use medical terminology and do what Alexa does? Well, you may not be aware of this, but Andrew Ng, who was the AI guy at Google and was later recruited by Baidu: they have a medical chatbot in China today. I don't speak Chinese. I haven't been able to use it yet. There are two similar initiatives in this country that I know of. There are probably a dozen more in stealth mode. But Lumiata and HealthTap are doing chatbots for health care today, using medical terminology. You have the compound problem of semantic normalization within a language, compounded across languages. I've done a lot of work with an international organization called SNOMED, which standardizes medical terminology. So you're aware of that. We can talk offline if you want, because I'm pretty deep into the semantic space. >> Go google Intel Nervana and you'll see all the websites there. It's intel.com/ai or nervanasys.com. >> Okay, great. Well, this has been fantastic. I want to, first of all, thank all the people here for coming and asking great questions. I also want to thank our fantastic panelists today. (applause) >> Thanks, everyone. >> Thank you. >> And lastly, I just want to share one bit of information. We will have more discussions on AI next Tuesday at 9:30 AM. Diane Bryant, who is the general manager of our Data Center Group, will be here to do a keynote. So I hope you all get to join that. Thanks for coming. (applause) (light electronic music)
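For readers who want to try the transfer-learning recipe described in the panel, a minimal sketch follows, assuming a generic pretrained vision model; the class count and data loading are placeholders rather than anything the panelists used:

```python
# Take a network pretrained on a large image corpus, freeze its general
# feature layers, and retrain only the final classifier on a small
# custom set (e.g., flower species).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)   # features learned on ImageNet
for param in model.parameters():
    param.requires_grad = False            # keep the general features

num_flower_classes = 5                     # placeholder
model.fc = nn.Linear(model.fc.in_features, num_flower_classes)

# Only the new head is optimized, which is why this takes minutes,
# not weeks, on a modest data set.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training loop sketch (data loader construction omitted):
# for images, labels in flower_loader:
#     optimizer.zero_grad()
#     loss = loss_fn(model(images), labels)
#     loss.backward()
#     optimizer.step()
```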
Kamile Taouk, UNSW & Sabrina Yan, Children's Cancer Institute | DockerCon 2020
>>From around the globe, it's theCUBE, with digital coverage of DockerCon Live 2020, brought to you by Docker and its ecosystem partners. >>Welcome to the special CUBE coverage of DockerCon 2020. It's a virtual digital event co-produced by Docker and theCUBE. Thanks for joining us. We have a great segment here. Precision cancer medicine really is evolving, where personalization of the data is really going to be important to personalize those treatments based upon unique characteristics of the tumors. This is something that's been a really hot topic, talking point, and focus area in the industry. And technology is here to help, with two great guests who are using technology, Docker, Docker containers, and a variety of other things to help the process go further along. And we've got here Sabrina Yan, who's a bioinformatics research assistant, and Kamile Taouk, who's a student and intern. You guys have done some compelling work. Thanks for joining this DockerCon virtual event. Thanks for coming on. >>Thanks for having me. >>So first, tell us about yourself and what you guys are doing at the Children's Cancer Institute. That's where you're located. What's going on there? Tell us what you guys are doing there. >>Sure. So at the Children's Cancer Institute, as it sounds, we do a lot of research when it comes specifically to children's cancer. Children are unique in the sense that a lot of the typical treatments we use for adults may or may not work, or will have adverse side effects. So what we do is all kinds of research, but the lab I'm in, which we call a dry lab, does research in silico, using computers to develop pipelines in order to improve outcomes for children. >>And what are some of the things you have to deal with on the tech side, but also the workflow: the patients' survival rates, capacity, those constraints that you guys are dealing with? And what are some of the things going on there that you have to deal with as you're trying to improve the outcomes? What specific outcomes are you trying to work through? >>Well, with all the work that's been done in the past decade, we've made a substantial impact on the survivability of several high-risk cancers in pediatrics, and we've got a certain program, which Sabrina will talk about in more depth, called the Zero Childhood Cancer Program, and essentially that aims to reduce childhood cancer deaths to, uh, zero. So that, in other words, the survivability is 100% and hopefully no lives will be lost. But that's >>And what are you guys doing specifically? What's your job? What's your focus? >>Yes, so I'm part of our lab's computational biology team. Uh, we run a processing pipeline on whole-genome and RNA sequencing data. Given the sequencing information for the kids, we sequence the healthy cells and we sequence their tumor cells. We analyze them together, and what we do is find the mutations that are causing the cancer, which helps us determine what treatments or what clinical trials might be most effective for the kids. And so specifically I work on that pipeline, where we run a whole bunch of bioinformatics tools (bioinformatics being basically biology plus informatics), and we use the data generated from sequencing in order to extract those cancer-driving mutations that hopefully we can target in order to treat the kids. >>You know, you hear about ad tech, and you hear about Facebook personalization, recommendation engines.
What to click on. What you guys are really doing is more personalization around treatment recommendations. These kinds of things come into it. Can you share a little bit about what goes on there, and tell us what's happening? >>Well, as you mentioned when you first brought us into this, we're looking at the profile of the tumor itself, and that allows us to specialize the medication and the treatment for that patient. Essentially, that lets us improve the efficiency and the effectiveness of the treatment, which in turn has an impact on the survivability. >>What are some of the technical things? How did you guys get involved with Docker? Where does Docker fit into all this? >>Yeah, I'm sure Kamile will have plenty to bring up on this as well. But, um, yes, it's been quite a project. The pipeline that we have, um, we had built on a specific platform, and it works great there. But as with most tools and a lot of things that you develop when you're engineers, it's pretty easy for them to become platform-specific, and then they're kind of stuck there, and you have to re-engineer the whole thing to move it anywhere else, which is such a pain. So, um, the project that Kamile and I have been working on was actually taking the individual tools we use in the pipeline and Dockerizing them individually, containerizing them with the dependencies they need, so that we can hook them up any way we want. So we can configure the pipeline, not just customize it based off of the data, like running the same pipeline on everyone, but even being able to change the pipeline to try different things for different kids, and be able to do that easily, um, and to be able to run it on different platforms. You know, the fact that we have the choice not only means that we can save money, if there's a cloud instance that will run it at lower cost, but if there's a platform that, you know, wanted to collaborate with us and they say, oh, we have this whole set of data we'd love for you to analyze, it's all there, like, a lot of, you know, >>use my tool. It's really great. >>Yeah. And so having portability is a big thing as well. And so I'm sure people can go on about, uh, some of the pain points of having to Dockerize all of the different tools. But, you know, even though there are often challenges associated with doing it, I think the payoff is massive. >>Dig into this, because this is one of the things where you've got a problem statement, you've got a real-world example. Cancer patients, life or death, there are serious things going on here. You're a tech, you get in here. What's going on? You're like, okay, this is going to be easy. Just wrangle the data, throw some compute at it, it's over, right? You know what? Take us through the reality of living it. >>Right. So as Sabrina mentioned before, first and foremost, we're looking at the scale of several hundred terabytes' worth of data for every single patient. So obviously we can start to understand just how beneficial it is to move the pipeline to the data, rather than the other way around. Um, so much time would be saved, and money as well. In terms of actually Dockerizing the programs that analyze the data, it was quite difficult. And I think Sabrina would agree with me on this point.
The primary issue was that almost all of the apps we encountered within the pipeline were very, very heavily dependent on very specific versions of some dependencies, and they were just built upon so many other different apps, and they were very heavily fine-tuned. So Dockerizing was quite difficult, because we had to preserve every single version of every single dependency in one instance just to ensure that it was working. And these apps get updated quite, um, regularly. So we had to ensure that our Docker images would survive. >>So what does it really take to Dockerize your pipeline? >>I mean, it was a whole project. Well, um, myself, Kamile, and a whole bunch of, um, students helping us over the summer, which was fantastic as well. And we basically had a whole team that was like, okay, here's another bioinformatic tool in the pipeline: you take this one, you take that one. They'd each take one individually, and then you'd spend days on it, depending on the app. Some were easier than others. Um, but particularly when it comes to bioinformatic tools, some of them are very memory-hungry, some of them are very finicky, some of them are, um, ah, less stable than others. And so you could spend one day containerizing a tool and it's done, you know, in a handful of hours; sometimes it could take a week, and that's just getting this one tool done. And the idea behind the whole team working on it was, eventually you work through this process, and then you have, um, a Dockerfile set up that allows anyone to run it on any system. And we know we have an identical setup, which we couldn't be sure of before, because I remember when I started and I was trying to get the pipeline running on my own machine, a lot of things just didn't work. Like, oh, you don't have the very specific version of R that this developer has; oh, that's not working because you don't have this specific Perl file that actually has bug fixes in it. It was just, for us, like, well,
And you had all the hassles that you do. Your get Docker rised up and things work smoothly. Got that? But tell >>me about >>the pipelines. What's what's so complicated about them? >>Honestly, the biggest complication is all of the connection. It's not a simple as, um, run a from the sea, and then you don't That would be nice, but that know how these things work if you have a network of programs with the output of this, input for another, and you have to run this program before this little this one. But some of the output become input for multiple programs, and by the time you hook the whole thing up, it looks like a gigantic web of applications. The way all the connections, so it's a massive Well, it almost looks like a massive met when you look at it. But having each of the individual tools contained and working means that we can look them all up. And even though it looks complicated, it would be far more complicated if we had that entire pipeline. You know, in a single program like having to code, that whole thing in a single group would be an absolute nightmare. Where is being able to have each of the tools as individual doctors means we just have the link, the input on that book, which is the top. But once you've done that, it means that you know each of the individual pools will run. And if an individual fails, or whatever raised in memory or other issues run into, you can rerun that one individual school re hooks the output into whatever the next program is going without having one massive you know, program will file what it fails midway through, and there's nothing you can do. >>Yeah, you unpack. It really says, Basically, you get the goodness to the work up front, and a lot of goodness come out of it. So this lets comes to the future of health. What are the key takeaways that you guys have from this process? And how does it apply to things that might be helpful to you right around the corner? Or today, like deep learning as you get more tools out there with machine learning and deep learning? Um, we hope there's gonna be some cool things coming out. What do you guys see here? And the insights? >>Well, we have a section of how the computational biologist team that is looking into doing more predictive talks working out, um, basically the risk of people developing can't the risks of kids developing cancel. And that's something you can do when you have all of this data. But that requires a lot of analysis as well. And so one of the benefits of you know being able to have these very moveable pipelines and tools makes it easier to run them on. The cloud makes it easier to shale. You're processing with about researches to the hospitals, just making collaboration easier. Mainz that data sharing becomes a possibility or is before if you have three different organizations. But the daughter in three different places. Um, how do you share that with moving the daughter really feasible. Pascal, can you analyze it in a way that practical and so I don't want one of the benefits of Docker? Is all of these advanced tools coming out? You know, if there's some amazing predicted that comes out that uses some kind of regression little deep learning, whatever. If we wanted to add that being able to dock arise a complex school into a single docker ice makes it less complicated that highlighted the pipeline in the future, if that's something we'd like to do, >>Camille, any thoughts on your end on this? >>Actually, I was Sabrina in my mind for the last point. 
I was just thinking about scalability definitely is very. It's a huge point because the part about the girls as a technology does any kind of technology that we've got to inspect into the pipeline. As of now, it be significantly easier with the use of Docker. You could just docker rise that technology and then implant that straight into the pipeline. Minimal stress. >>So productivity agility doesn't come home for you guys. Is that resonate? >>Yeah, definitely. >>And you got the collaboration. So there's business benefits, the outcomes. Are there any proof points you could share on some results that you guys are seeing some fruit from the tree, if you will, from all this Goodness. >>Well, one of the things we've been working on is actually a collaboration with those Bio Commons and Katica. They built a platform, specifically the development pipelines. We wanted to go out, and they have support for Docker containers built into the platform, which makes it very easy to push a lot of containers of the platform, look them up and be able to collaborate with them not only to try a new platform without that, but also help them look like a platform to be able to shoot action access data that's been uploaded there as well. But a lot of people we wouldn't have been able to do that if we hadn't. Guys, they're up. It just wouldn't have. Actually, it wouldn't be possible. And now that we have, we've been able to collaborate with them in terms of improving the platform. But also to be able to share and run our pipelines on other data will just pretty good, >>awesome. Well, It's great to have you on the Cube here on Docker Con 2020 from down under. Great Internet connections get great Internet down. They're keeping us remote were sheltering in place here. Stay safe and you guys final question. Could you eat? Share in your own words from a developer? From a tech standpoint, as you're in this core role, super important role, the outcomes are significant and have real impact. What has the technology? What is docker ization done for you guys and for your work environment and for the business share in your own words what it means. A lot of other developers are watching What's your opinion? >>But yeah, I mean, the really practical point is we've massively increased capacity of the pipeline. One thing that been quite fantastic years. We've got a lot of increased. The Port zero child who can program, which means going into the schedule will actually be able to open a program. Every child in Australia that, uh, has cancel will be ableto add them to the program. Where is currently we're only able to enroll kids who are low survivability, right? So about 30% the lowest 30% of the viability we're able to roll over program currently, but having a pipeline where we can just double the memory like that double the amount of battle. Uh, and the fact that we can change the instance is really to just double the capacity trip. The capacity means that now that we have the support to be able to enroll potentially every kid, Mr Leo, um, once we've upgraded the whole pipeline, it means will actually be a code with the amount of Children being enrolled, whereas on the existing pipeline, we're currently that capacity. So doing the upgrade in a really practical way means that we're actually going to be a triple the number of kids in Australia. We can add onto the program which wouldn't have been possible otherwise >>unleashing the limitations and making it totally scalable. 
Your thoughts as developers watching you're in there, Your hand in your hands, dirty. You built it. It's showing some traction. What's what's your what's your take? What's your view? >>Well, I mean first and foremost locks events. It just feels fantastic knowing that what we're doing is as a substantial and quantify who impact on the on a subset of the population and we're literally saving lives. Analyze with the work that we're doing in terms off developing with With that technology, such a breeze especially compared Teoh I've had minimal contact with what it was like without docker and from the horror stories I've heard, it's It's It's a godsend. It's It's it's really improved The quality of developing. >>Well, you guys have a great mission. And congratulations on the success. Really impact right there. You guys are doing great work and it must feel great. I'm happy for you and great to connect with you guys and continue, you know, using technology to get the outcomes, not just using technology. So Fantastic story. Thank you for sharing. Appreciate >>you having me. >>Thank you. >>Okay, I'm John for we here for Docker Con 2020 Docker con virtual docker con digital. It's a digital event This year we were all shale three in place that we're in the Palo Alto studios for Docker con 2020. I'm John furrier. Stay with us for more coverage digitally go to docker con dot com from or check out all these different sessions And of course, stay with us for this feat. Thank you very much. Yeah, yeah, yeah, yeah, yeah, yeah
Full Keynote Hour - DockerCon 2020
(water running) (upbeat music) (electric buzzing) >> Fuel up! (upbeat music) (audience clapping) (upbeat music) >> Announcer: From around the globe, it's theCUBE, with digital coverage of DockerCon Live 2020, brought to you by Docker and its ecosystem partners. >> Hello everyone, welcome to DockerCon 2020. I'm John Furrier with theCUBE. I'm in our Palo Alto studios with our quarantine crew. We have a great lineup here for DockerCon 2020. Virtual event, normally it was in person, face to face. I'll be with you throughout the day with an amazing lineup of content, over 50 different sessions, CUBE tracks, keynotes, and we've got two great co-hosts here with Docker, Jenny Burcio and Bret Fisher. We'll be with you all day today, taking you through the program, helping you navigate the sessions. I'm so excited. Jenny, this is a virtual event. We talked about this. Can you believe it? May the internet gods be with us today, and hope everyone's having-- >> Yes. >> An easy time getting in. Jenny, Bret, thank you for-- >> Hello. >> Being here. >> Hey. >> Hi everyone, so great to see everyone chatting and telling us where they're from. Welcome to the Docker community. We have a great day planned for you. >> Guys, great job getting this all together. I know how hard it is. These virtual events are hard to pull off. I'm blown away by the community at Docker. The amount of sessions that are coming in, the sponsor support, has been amazing. Just the overall excitement around the brand and the opportunities, given these tough times we're in. It's super exciting. Again, may the internet gods be with us throughout the day, but there's plenty of content. Bret's got an amazing all-day marathon group of people coming in and chatting. Jenny, this has been an amazing journey and it's a great opportunity. Tell us about the virtual event. Why DockerCon virtual? Obviously everyone's canceling their events, but this is special to you guys. Talk about DockerCon virtual this year. >> The Docker community shows up at DockerCon every year, and even though we didn't have the opportunity to do an in-person event this year, we didn't want to lose the time that we all come together at DockerCon. The conversations, the amazing content and learning opportunities. So we decided back in December to make DockerCon a virtual event. And of course when we did that, there was no quarantine. We didn't expect, you know, I certainly didn't expect to be delivering it from my living room, but we were just, I mean, we were completely blown away. There are nearly 70,000 people across the globe that have registered for DockerCon today. And when you look at DockerCons of the past, right, live events really are just the tip of the iceberg, and so we're thrilled to be able to deliver a more inclusive global event today. And we have so much planned. I think, Bret, you want to tell us some of the things that you have planned? >> Well, I'm sure I'm going to forget something 'cause there's a lot going on. But we've obviously got interviews all day today on this channel with John and the crew. Jenny has put together an amazing set of all these speakers, and then you have the captains on deck, which is essentially the YouTube live hangout where we just basically talk shop. It's all engineers, all day long. Captains and special guests. And we're going to be in chat talking to you, answering your questions. Maybe we'll dig into some stuff based on the problems you're having or the questions you have.
Maybe there'll be some random demos, but it's basically not scripted, it's an all-day-long unscripted event. So I'm sure it's going to be a lot of fun hanging out in there. >> Well guys, I want to just say it's been amazing how you structured this so everyone has a chance to ask questions, whether it's informal and laid back in the captains channel, or in the sessions, where the speakers will be there with their presentations. But Jenny, I want to get your thoughts, because we have a site out there that's structured a certain way for the folks watching. If you're on your desktop, there's a main stage hero, there are then tracks, and Bret's running the captains track. You can click on that link and jump into his session all day long. He's got an amazing lineup, leaning back, having a good time. And then each of the tracks, you can jump into those sessions. It's on a clock, it'll be available on demand. All that content is available if you're on your desktop. If you're on your mobile, it's the same thing. Look at the calendar, find the session that you want. If you're interested in it, you can watch it live and chat with the participants in real time, or watch it on demand. So there's plenty of content to navigate through. We do have it on a clock and we'll be streaming sessions as they happen. So you're in the moment, and that's a great time to chat in real time. But there's more, Jenny. Getting more out of this event: you guys try to bring together the stimulation of community. How do the participants get more out of the event besides just consuming some of the content all day today? >> Yes, so first set up your profile, put your picture next to your chat handle, and then chat. John said we have various setups today to help you get the most out of your experience in our breakout sessions. The content is prerecorded, so you get quality content, and the speakers are in chat, so you can ask questions the whole time. If you're looking for the hallway track, then definitely check out the captains on deck channel. And then we have some great interviews all day on theCUBE. So set up your profile, join the conversation, and be kind, right? This is a community event. The code of conduct is linked on every page at the top, and just have a great day. >> And Bret, you guys have an amazing lineup on the captains, so you have a great YouTube channel that you have your stream on. So the folks who are familiar with that can get that either on YouTube or on the site. The chat is integrated in, so you're set up. What do you got going on? Give us the highlights. What are you excited about throughout your day? Take us through your program on the captains. That's going to be probably pretty dynamic in the chat too. >> Yeah, so I'm sure we're going to have lots of stuff going on in chat. So no concerns there about having crickets in the chat. But we're going to be basically starting the day with two of my good Docker captain friends, Nirmal Mehta and Laura Tacho. And we're going to basically start you out, and at the end of this keynote, at the end of this hour, we're going to get you going, and then you can maybe jump out and go take some sessions. Maybe there's some stuff you want to check out in other sessions, where you want to chat and talk with the instructors, the speakers there, and then you're going to come back to us, right? Or go over, check out the interviews. So the idea is you're hopping back and forth, and throughout the day we're basically changing out every hour.
We're not just changing out the guests, basically, but we're also changing out the topics that we can cover, because different guests will have different expertise. We're going to have some special guests in from Microsoft to talk about some of the cool stuff going on there, and basically it's captains all day long. And if you've been on my YouTube live show, you've watched that, you've seen a lot of the guests we have on there. I'm lucky to just hang out with all these really awesome people around the world, so it's going to be fun. >> Awesome, and the content again has been preserved. You guys had a great session on call-for-papers sessions. Jenny, this is good stuff. What other things can people do to make it interesting? Obviously we're looking for suggestions. Feel free to chirp on Twitter about ideas that can be new. But you guys got some surprises. There are some selfies, what else? What's going on? Any secret surprises throughout the day? >> There are secret surprises throughout the day. You'll need to pay attention to the keynotes. Bret will have giveaways. I know our wonderful sponsors have giveaways planned as well in their sessions. Hopefully, right, you feel conflicted about what you're going to attend. So do know that everything is recorded and will be available on demand afterwards, so you can catch anything that you miss. Most of them will be available right after they stream the initial time. >> All right, great stuff. So they've got the Docker selfie. So the Docker selfies: the hashtag is just DockerCon, hashtag DockerCon. If you feel like you want to add something to the hashtag, no problem. Check out the sessions. You can pop in and out of the captains channel, which is kind of where the cool kids are going to be hanging out with Bret, and all their knowledge and learning. Don't miss the keynote, the keynote should be solid. We've got James Governor from RedMonk delivering a keynote. I'll be interviewing him live after his keynote. So stay with us. And again, check out the interactive calendar. All you got to do is look at the calendar and click on the session you want. You'll jump right in. Hop around, give us feedback. We're doing our best. Bret, any final thoughts on what you want to share to the community around what you got going on at the virtual event, just random thoughts? >> Yeah, so sorry we can't all be together in the same physical place. But the coolest thing about being online is that we actually get to involve everyone. So as long as you have a computer and internet, you can actually attend DockerCon if you've never been to one before. So we're trying to recreate that experience online. Like Jenny said, the code of conduct is important. So we're all in this together with the chat, so try to be nice in there. These are all real humans that have feelings, just like me. So let's try to keep it cool. And over in the captains channel we'll be taking your questions and maybe playing some music, playing some games, giving away some free stuff while you're in between sessions learning, oh yeah. >> And I got to say, props to your rig. You've got an amazing setup there, Bret. I love the show you do. It's really badass and kick-ass. So great stuff. Jenny, the sponsor and ecosystem response to this event has been phenomenal. The attendance, 67,000. We're seeing a surge of people hitting the site now. So if you're not getting in, just wait, we're going to crank through the queue, but the sponsors and the ecosystem really delivered on the content side and also the support.
You want to share a few shout-outs on the sponsors who really kind of helped make this happen? >> Yeah, so definitely make sure you check out the sponsor pages, and when you go, each page has the actual content that they will be delivering. So they are delivering great content to you, so you can learn. And a huge thank you to our platinum and gold sponsors. >> Awesome, well I got to say, I'm super impressed. I'm looking forward to the Microsoft and Amazon sessions, which are going to be good. And there are a couple of great customer sessions there. I tweeted this out last night, and let me get you guys' reaction to this, because there's been a lot of talk around the COVID crisis that we're in, but there's also a positive upshot to this: a Cambrian explosion of developers that are going to be building new apps. And I said, you know, apps aren't going to just change the world, they're going to save the world. So a lot of the theme here is the impact that developers are having right now in the current situation. If we get the goodness of Compose and all the things going on in Docker and the relationships, there's real impact happening with the developer community. And it's pretty evident in the program and some of the talks and some of the examples, how containers and microservices are certainly changing the world and helping save the world. Your thoughts? >> Like you said, a number of sessions and interviews in the program today really dive into that. And even particularly around COVID, Clemente Biondo is sharing his company's experience, from being able to continue operations in Italy when they were completely shut down at the beginning of March. We also have in theCUBE channel several interviews from the National Institutes of Health and precision cancer medicine at the end of the day. And you can really see how containerization and developers are moving industry, and really humanity, forward, because of what they're able to build and create with advances in technology. >> Yeah, and the first responders these days are developers. Bret, Compose is getting a lot of traction on Twitter. I can see some buzz already building up. There's huge traction with Compose, just the ease of use, and almost a call to arms for integrating into all the system language libraries. I mean, what's going on with Compose? What do the captains say about this? It seems to be really tracking in terms of demand and interest. >> I think we're over 700,000 Compose files on GitHub. So it's definitely beyond just the standard docker run commands. It's definitely the next tool that people use to run containers. And that's not even counting everything, I mean, that's just counting the files that are named docker-compose.yml. So I'm sure a lot of you out there have created a YAML file to manage your local containers, or even on a server with Docker Compose. And the nice thing is, Docker is doubling down on that. So we've gotten some news recently from them about what they want to do with opening the spec up, getting more companies involved, because Compose has already gathered so much interest from the community. You know, AWS has importers, there are Kubernetes importers for it. So there's more stuff coming, and we might just see something here in a few minutes.
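For readers who want to see what those counted files actually look like, here is a minimal, hypothetical docker-compose.yml of the kind Bret is describing: one service built from local source and one pulled from Docker Hub. The service names, image tag, and ports are illustrative, not from the event; the file is written out via a shell heredoc so the block is runnable end to end.

```bash
# Write a minimal compose file (hypothetical services, for illustration):
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    build: .              # expects a Dockerfile in this directory
    ports:
      - "8080:80"         # host:container
    depends_on:
      - db
  db:
    image: postgres:12    # official image pulled from Docker Hub
    environment:
      POSTGRES_PASSWORD: example
EOF

# One command then builds, networks, and starts the whole stack locally:
docker-compose up -d
```

That single file is what lets one `docker-compose up` stand in for a series of individual docker run commands, which is the ease of use being discussed here.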
>> All right, well let's get into the keynote, guys, jump into the keynote. If you miss anything, come back to the stream, check out the sessions, check out the calendar. Let's go, let's have a great time. Have some fun, thanks, and enjoy the rest of the day. We'll see you soon. (upbeat music) (upbeat music) >> Okay, what is the name of that whale? >> Molly. >> And what is the name of this whale? >> Moby. >> That's right, dad's got to go, thanks bud. >> Bye. >> Bye. Hi, I'm Scott Johnston, CEO of Docker, and welcome to DockerCon 2020. This year DockerCon is an all-virtual event with more than 60,000 members of the Docker community joining from around the world. And with the global shelter-in-place policies, we're excited to offer a unifying, inclusive virtual community event in which anyone and everyone can participate from their home. As a company, Docker has been through a lot of changes since our last DockerCon last year. The most important, starting last November, is our refocusing 100% on developers and development teams. As part of that refocusing, one of the big challenges we've been working on is how to help development teams quickly and efficiently get their app from code to cloud. And wouldn't it be cool if developers could quickly deploy to the cloud right from their local environment, with the commands and workflow they already know? We're excited to give you a sneak preview of what we've been working on. And rather than slides, we thought we'd jump right into the product. And joining me to demonstrate some of these cool new features is Lanca, one of our engineers here at Docker working on Docker Compose. Hello Lanca. >> Hello. >> We're going to show how an application development team collaborates using Docker Desktop and Docker Hub, and then deploys the app directly from the Docker command line to the cloud in just two commands. A development team would use this to quickly share functional changes of their app with the product management team, with beta testers or other development teams. Let's go ahead and take a look at our app. Now, this is a web app that randomly pulls words from the database and assembles them into sentences. You can see it's a pretty typical three-tier application, with each tier implemented in its own container. We have a front-end web service; a middle tier, which implements the logic to randomly pull the words from the database and assemble them; and a back-end database. And here you can see the database uses the Postgres official image from Docker Hub. Now let's first run the app locally using the Docker command line and the Docker engine in Docker Desktop. We'll do a docker-compose up, and you can see that it's pulling the containers from our Docker organization account, Wordsmith Inc. Now that it's up, let's go ahead and look at localhost, and we'll confirm that the application is functioning as desired. So there's one sentence; let's pull another, and you can indeed see that we are pulling random words and assembling them into sentences. Now you can also see, though, that the look and feel is a bit dated. And so Lanca is going to show us how easy it is to make changes and share them with the rest of the team. Lanca, over to you. >> Thank you. So I have the source code of our application on my machine, and I have updated it with the latest theme from DockerCon 2020. So before committing the code, I'm going to build the application locally and run it, to verify that indeed the changes are good. So I'm going to build, with Docker Compose, the image for the web service. Now that the image has been built, I'm going to deploy it locally, with docker-compose up.
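For those following along at home, Lanca's local loop condenses to a handful of commands. This is a sketch, assuming a compose file that defines a service named web and publishes a local port (both assumptions here), and whether you type docker-compose or the newer docker compose depends on your installation:

```bash
# Rebuild just the web service image after changing the source:
docker-compose build web

# Recreate the stack so the new image is picked up:
docker-compose up -d

# Verify the change locally (the port is whatever your compose file maps):
curl http://localhost:8080

# Confirm all tiers are up:
docker-compose ps
```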
We can now check the dashboard in Docker Desktop to see that indeed our containers are up and running, and we can open in the web browser the endpoint for the web service. So as we can see, we have the latest changes in our application. The application has been updated successfully. So now I'm going to push the image that I have just built to my organization's shared repository on Docker Hub. I can do this with docker-compose push web. Now that the image has been updated in the Docker Hub repository, all my teammates can access it and check the changes. >> Excellent, well, thank you Lanca. Now of course, in these times, video conferencing is the new normal, and as great as it is, video conferencing does not allow users to actually test the application. And so, to allow our app to be accessible by others outside the organization, such as beta testers, let's go ahead and deploy to the cloud. >> Sure, we can do this by employing a context. A Docker context is a mechanism that we can use to target different platforms for deploying containers. The context will hold information such as the endpoint for the platform, and also how to authenticate to it. So I'm going to list the contexts that I have set locally. As you can see, I'm currently using the default context, which is pointing to my local Docker engine. So all the commands that I have issued so far were targeting my local engine. Now, in order to deploy the application on a cloud: I have an account in the Azure cloud, where I have no resources running currently, and I have created for this account a dedicated context that will hold the information on how to connect to it. So now all I need to do is to switch to this context, with docker context use and the name of my cloud context. So all the commands that I'm going to run from now on are going to target the cloud platform. We can also check, in a simpler way, the running containers with docker ps. So as we see, no container is running in my cloud account. Now to deploy the application, all I need to do is to run a docker compose up, and this will trigger the deployment of my application. >> Thanks Lanca. Now notice that Lanca did not have to move the compose file from Docker Desktop to Azure. Notice she didn't have to make any changes to the compose file, nor did she change any of the containers that she and I were using locally in our local environments. So the same compose file and the same images run locally and upon Azure without changes. While the app is deploying to Azure, let's highlight some of the features in Docker Hub that help teams with remote-first collaboration. So first, here's our team's account, Wordsmith Inc., and you can see the updated container, sentences-web, that Lanca just pushed a couple of minutes ago. As far as collaboration, we can add members using their Docker ID or their email, and then we can organize them into different teams depending on their role in the application development process. So once they're organized into different teams, we can assign them permissions, so that teams can work in parallel without stepping on each other's changes accidentally. For example, we'll give the engineering team full read-write access, whereas the product management team will just get read-only access. So this role-based access control is just one of the many features in Docker Hub that allow teams to collaboratively and quickly develop applications.
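Before checking on the deployment, here is the condensed CLI flow from this segment. It is a sketch, assuming the beta-era Docker and Azure ACI integration: the context name is made up, and the exact context-creation flags are an assumption, since they varied across releases.

```bash
# Share the rebuilt image through the org's Docker Hub repository
# (assumes the web service declares an image: name under that org):
docker-compose push web

# Contexts are how the CLI targets platforms; 'default' is the local engine:
docker context ls

# Authenticate to Azure and create a cloud-backed context (name is illustrative):
docker login azure
docker context create aci myaci

# Every command from here on targets the cloud, not the local engine:
docker context use myaci
docker ps            # empty: nothing deployed to this account yet

# Same compose file, same images; this is the second of the 'two commands':
docker compose up
```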
Okay Lanca, how's our app doing? >> Our app has been successfully deployed to the cloud. So we can easily check either the Azure portal, to verify the containers running for it, or, simpler, we can run a docker ps again to get the list of the containers that have been deployed. In the output from docker ps, we can see an endpoint that we can use to access our application in the web browser. So we can see the application running in the cloud. It's really up to date, and now we can take this particular endpoint and share it within our organization, such that anybody can have a look at it. >> That's cool, Lanca. We showed how we can deploy an app to the cloud in minutes and in just two commands, using commands that Docker users already know. Thanks so much. In that sneak preview, you saw a team developing an app collaboratively, with a tool chain that includes Docker Desktop and Docker Hub. And simply by switching Docker context from their local environment to the cloud, they deployed that app to the cloud, to Azure, without leaving the command line, using Docker commands they already know. And in doing so, really simplifying for a development team getting their app from code to cloud. And just as important, what you did not see was a lot of complexity. You did not see cloud-specific interfaces, user management or security. You did not see us having to provision and configure compute, networking and storage resources in the cloud. And you did not see infrastructure-specific application changes to either the compose file or the Docker images. And by simplifying away that complexity, these new features help application DevOps teams quickly iterate and get their ideas, their apps, from code to cloud. And helping development teams build, share and run great applications is what Docker is all about. Docker is able to simplify for development teams getting their app from code to cloud quickly as a result of standards, products and ecosystem partners. It starts with open standards for applications and application artifacts, and active open source communities around those standards, to ensure portability and choice. Then, as you saw in the demo, the Docker experience delivered by Docker Desktop and Docker Hub simplifies a team's collaborative development of applications, and together with ecosystem partners provides every stage of an application development tool chain. For example, deploying applications to the cloud in two commands. What you saw in the demo, well, that's an extension of our strategic partnership with Microsoft, which we announced yesterday. And you can learn more about our partnership from Amanda Silver from Microsoft later today, right here at DockerCon. Another tool chain stage: the capability to scan applications for security vulnerabilities, as a result of our partnership with Snyk, which we announced last week. You can learn more about that partnership from Peter McKay, CEO of Snyk, again later today, right here at DockerCon. A third example: development teams can automate the build of container images upon a simple git push, as a result of Docker Hub integrations with GitHub and Atlassian Bitbucket. As a final example of Docker and the ecosystem helping teams quickly build applications, together with our ISV partners we offer in Docker Hub over 500 official and verified publisher images of ready-to-run Dockerized application components, such as databases, load balancers, programming languages, and much more.
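A side note for readers: the Snyk partnership mentioned here later surfaced in the Docker CLI as docker scan, so the scanning stage of that tool chain can be sketched as below. This assumes a Docker Desktop release recent enough to bundle the Snyk-powered scanner (in the newest releases it has been superseded by docker scout), and the image name is illustrative:

```bash
# Scan a built image for known vulnerabilities (Snyk-powered):
docker scan wordsmith/web:latest

# Passing the Dockerfile as well gives the scanner base-image context
# and better remediation advice:
docker scan --file Dockerfile wordsmith/web:latest
```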
Of course, none of this happens without people. And I would like to take a moment to thank four groups of people in particular. First, the Docker team, past and present. We've had a challenging 12 months, including a restructuring and then a global pandemic, and yet their support for each other, and their passion for the product, this community and our customers, has never been stronger. We thank our community: Docker wouldn't be Docker without you, whether you're one of the 50 Docker captains, the almost 400 meetup organizers, or the thousands of contributors and maintainers. Every day you show up, you give back, you teach, you support. We thank our users, more than six and a half million developers who have built more than 7 million applications and are sharing those applications through Docker Hub at a rate of more than one and a half billion pulls per week. Those apps then run on more than 44 million Docker engines. And finally, we thank our customers, the over 18,000 Docker subscribers, both individual developers and development teams, from startups to large organizations, 60% of which are outside the United States. And they span every industry vertical, from media to entertainment to manufacturing to healthcare and much more. Thank you. Now looking forward, given these unprecedented times, we would like to offer a challenge. While it would be easy to feel helpless amidst this global pandemic, the challenge is for us, as individuals and as a community, to instead see and grasp the tremendous opportunities before us to be forces for good. For starters, look no further than the pandemic itself: in the fight against this global disaster, applications and data are playing a critical role, and the Docker community quickly recognized this and rose to the challenge. There are over 600 COVID-19-related publicly available projects on Docker Hub today, from data processing to genome analytics to data visualization. Folding@home, the distributed computing project for simulating protein dynamics, is also available on Docker Hub, and it uses spare compute capacity to analyze COVID-19 proteins to aid in the design of new therapies. And right here at DockerCon, you can hear how Clemente Biondo and his company, Engineering Ingegneria Informatica, are using Docker in the fight against COVID-19 in Italy every day. Now, in addition to fighting the pandemic directly, as a community we also have an opportunity to bridge the disruption the pandemic is wreaking. It's impacting us at work and at home, in every country around the world and every aspect of our lives. For example, many of you have a student at home whose world is going to be very different when they return to school. As employees, all of us have experienced the stresses of working from home, as well as many of the benefits, and in fact 75% of us say that going forward we're going to continue to work from home at least occasionally. And of course one of the biggest disruptions has been job losses, over 35 million in the United States alone. And we know that's affected many of you. And yet your skills are in such demand, and so important, now more than ever. And that's why here at DockerCon we want to try to do our part to help, and we're promoting this hashtag on Twitter, hashtag DockerCon jobs, where job seekers and those offering jobs can reach out to one another and connect. Now, the pandemic's disruption is accelerating the shift of more and more of our time, our priorities, our dollars, from offline to online, to hybrid, and even online-only ways of living.
We need to find new ways to collaborate, new approaches to engage customers, new modes for education, and much more. And what is going to fill the needs created by this acceleration from offline to online? New applications. And it's this need, this demand for all these new applications, that represents a great opportunity for the Docker community of developers. The world needs us, needs you, developers, now more than ever. So let's seize this moment. Let us and our teams go build, share and run great new applications. Thank you for joining today. And let's have a great DockerCon. >> Okay, welcome back to the DockerCon studio headquarters with your hosts, Jenny Burcio and myself, John Furrier, @furrier on Twitter. If you want to tweet me anything, @DockerCon as well, share what you're thinking. Great keynote there from Scott, Docker's CEO. Jenny, the demo, DockerCon jobs, some highlights there from Scott. Yeah, I love the intro. It's, okay, I'm about to do the keynote, the kids come on, it makes it human. We're all trying to survive-- >> That is the reality of what we are all dealing with right now. I had to ask my kids to leave, though, or they would crash the whole stream. But yes, we have a great community, a large community gathered here today, and we do want to take the opportunity for those that are looking for jobs, or are hiring, to share with the hashtag DockerCon jobs. In addition, we want to support health care workers: Bret Fisher and the captains will be running an all-day charity stream on the captains channel. Go there and you'll get the link to donate to directrelief.org, which is a California-based nonprofit delivering aid and supporting health care workers globally in response to the COVID-19 crisis. >> Okay, if you're jumping into the stream, I'm John Furrier with Jenny Burcio, your hosts all day today throughout DockerCon. It's a packed house of great content. You have a main stream, theCUBE, which is the mainstream where we'll be promoting a lot of CUBE interviews. But check out the 40-plus sessions underneath in the interactive calendar on the dockercon.com site. Check it out; they're going to be live on a clock. So if you want to participate in real time in the chat, jump into your session on the track of your choice and participate with the folks in there chatting. If you miss it, it's going to go right on demand right after, so all content will immediately be available. So make sure you check it out. Docker selfie is a hashtag. Take a selfie, share it. Hashtag DockerCon jobs: if you're looking for a job or have openings, please share with the community. And of course give us feedback on what we can do. We've got James Governor, the keynote coming up next. He's with RedMonk. Not afraid to share his opinion on open source, on what companies should be doing, and also on the evolution of this Cambrian explosion of apps that are going to be coming as we come out of this post-pandemic world. A lot of people are thinking about this, the crisis and following through. So stay with us for more and more coverage. Jenny, favorite sessions on your mind for people to pay attention to that they should (murmurs)?
Also, if the screen is too small, there is the button to expand full screen, and different quality levels for the video that you can choose on your end. All the breakout sessions also have closed captioning, so please if you would like to read along, turn that on so you can, stay with the sessions. We have some great sessions, kicking off right at 10:00 a.m, getting started with Docker. We have a full track really in the how to enhance on that you should check out devs in action, hear what other people are doing and then of course our sponsors are delivering great content to you all day long. >> Tons of content. It's all available. They'll always be up always on at large scale. Thanks for watching. Now we got James Governor, the keynote. He's with Red Monk, the analyst firm and has been tracking open source for many generations. He's been doing amazing work. Watch his great keynote. I'm going to be interviewing him live right after. So stay with us and enjoy the rest of the day. We'll see you back shortly. (upbeat music) >> Hi, I'm James Governor, one of the co-founders of a company called RedMonk. We're an industry research firm focusing on developer led technology adoption. So that's I guess why Docker invited me to DockerCon 2020 to talk about some trends that we're seeing in the world of work and software development. So Monk Chips, that's who I am. I spent a lot of time on Twitter. It's a great research tool. It's a great way to find out what's going on with keep track of, as I say, there's people that we value so highly software developers, engineers and practitioners. So when I started talking to Docker about this event and it was pre Rhona, should we say, the idea of a crowd wasn't a scary thing, but today you see something like this, it makes you feel uncomfortable. This is not a place that I want to be. I'm pretty sure it's a place you don't want to be. And you know, to that end, I think it's interesting quote by Ellen Powell, she says, "Work from home is now just work" And we're going to see more and more of that. Organizations aren't feeling the same way they did about work before. Who all these people? Who is my cLancaern? So GitHub says has 50 million developers right on its network. Now, one of the things I think is most interesting, it's not that it has 50 million developers. Perhaps that's a proxy for number of developers worldwide. But quite frankly, a lot of those accounts, there's all kinds of people there. They're just Selena's. There are data engineers, there are data scientists, there are product managers, there were tech marketers. It's a big, big community and it goes way beyond just software developers itself. Frankly for me, I'd probably be saying there's more like 20 to 25 million developers worldwide, but GitHub knows a lot about the world of code. So what else do they know? One of the things they know is that world of code software and opensource, is becoming increasingly global. I get so excited about this stuff. The idea that there are these different software communities around the planet where we're seeing massive expansions in terms of things like open source. Great example is Nigeria. So Nigeria more than 200 million people, right? The energy there in terms of events, in terms of learning, in terms of teaching, in terms of the desire to code, the desire to launch businesses, desire to be part of a global software community is just so exciting. 
And you know, these, this sort of energy is not just in Nigeria, it's in other countries in Africa, it's happening in Egypt. It's happening around the world. This energy is something that's super interesting to me. We need to think about that. We've got global that we need to solve. And software is going to be a big part of that. At the moment, we can talk about other countries, but what about frankly the gender gap, the gender issue that, you know, from 1984 onwards, the number of women taking computer science degrees began to, not track but to create in comparison to what men were doing. The tech industry is way too male focused, there are men that are dominant, it's not welcoming, we haven't found ways to have those pathways and frankly to drive inclusion. And the women I know in tech, have to deal with the massively disproportionate amount of stress and things like online networks. But talking about online networks and talking about a better way of living, I was really excited by get up satellite recently, was a fantastic demo by Alison McMillan and she did a demo of a code spaces. So code spaces is Microsoft online ID, new platform that they've built. And online IDs, we're never quite sure, you know, plenty of people still out there just using the max. But, visual studio code has been a big success. And so this idea of moving to one online IDE, it's been around that for awhile. What they did was just make really tight integration. So you're in your GitHub repo and just be able to create a development environment with effectively one click, getting rid of all of the act shaving, making it super easy. And what I loved was it the demo, what Ali's like, yeah cause this is great. One of my kids are having a nap, I can just start (murmurs) and I don't have to sort out all the rest of it. And to me that was amazing. It was like productivity as inclusion. I'm here was a senior director at GitHub. They're doing this amazing work and then making this clear statement about being a parent. And I think that was fantastic. Because that's what, to me, importantly just working from home, which has been so challenging for so many of us, began to open up new possibilities, and frankly exciting possibilities. So Alley's also got a podcast parent-driven development, which I think is super important. Because this is about men and women rule in this together show parenting is a team sport, same as software development. And the idea that we should be thinking about, how to be more productive, is super important to me. So I want to talk a bit about developer culture and how it led to social media. Because you know, your social media, we're in this ad bomb stage now. It's TikTok, it's like exercise, people doing incredible back flips and stuff like that. Doing a bunch of dancing. We've had the world of sharing cat gifts, Facebook, we sort of see social media is I think a phenomenon in its own right. Whereas the me, I think it's interesting because it's its progenitors, where did it come from? So here's (murmurs) So 1971, one of the features in the emergency management information system, that he built, which it's topical, it was for medical tracking medical information as well, medical emergencies, included a bulletin board system. So that it could keep track of what people were doing on a team and make sure that they were collaborating effectively, boom! That was the start of something big, obviously. Another day I think is worth looking at 1983, Sorania Pullman, spanning tree protocol. 
So at DEC, they were very good at distributed systems. And the idea was that you can have a distributed system and so much of the internet working that we do today was based on radius work. And then it showed that basically, you could span out a huge network so that everyone could collaborate. That is incredibly exciting in terms of the trends, that I'm talking about. So then let's look at 1988, you've got IRC. IRC what developer has not used IRC, right. Well, I guess maybe some of the other ones might not have. But I don't know if we're post IRC yet, but (murmurs) at a finished university, really nailed it with IRC as a platform that people could communicate effectively with. And then we go into like 1991. So we've had IRC, we've had finished universities, doing a lot of really fantastic work about collaboration. And I don't think it was necessarily an accident that this is where the line is twofold, announced Linux. So Linux was a wonderfully packaged, idea in terms of we're going to take this Unix thing. And when I say package, what a package was the idea that we could collaborate on software. So, it may have just been the work of one person, but clearly what made it important, made it interesting, was finding a social networking pattern, for software development so that everybody could work on something at scale. That was really, I think, fundamental and foundational. Now I think it's important, We're going to talk about Linus, to talk about some things that are not good about software culture, not good about open source culture, not good about hacker culture. And that's where I'm going to talk about code of conduct. We have not been welcoming to new people. We got the acronyms, JFTI, We call people news, that's super unhelpful. We've got to find ways to be more welcoming and more self-sustaining in our communities, because otherwise communities will fail. And I'd like to thank everyone that has a code of conduct and has encouraged others to have codes of conduct. We need to have codes of conduct that are enforced to ensure that we have better diversity at our events. And that's what women, underrepresented minorities, all different kinds of people need to be well looked off to and be in safe and inclusive spaces. And that's the online events. But of course it's also for all of our activities offline. So Linus, as I say, I'm not the most charming of characters at all time, but he has done some amazing technology. So we got to like 2005 the creation of GIT. Not necessarily the distributed version control system that would win. But there was some interesting principles there, and they'd come out of the work that he had done in terms of trying to build and sustain the Linux code base. So it was very much based on experience. He had an itch that he needed to scratch and there was a community that was this building, this thing. So what was going to be the option, came up with Git foundational to another huge wave of social change, frankly get to logical awesome. April 20 April, 2008 GitHub, right? GiHub comes up, they've looked at Git, they've packaged it up, they found a way to make it consumable so the teams could use it and really begin to take advantage of the power of that distributed version control model. Now, ironically enough, of course they centralized the service in doing so. So we have a single point of failure on GitHub. 
But on the other hand, the notion of the pull request, the primitives that they established and made usable by people, changed everything in terms of software development. I think another one that I'd really like to look at is Slack. So Slack is a huge success, used by all different kinds of businesses. But it began specifically as a pivot from a company called Glitch. It was a game company, and they still wanted a tool internally that was better than IRC. So they built out something that later became Slack. So Slack, 2014, is established as a company, and basically Slack fit software engineering. The focus on automation, the conversational aspects, the asynchronous aspects: it really pulled things together in a way that was interesting to software developers. And I think we've seen this pattern in the world, frankly, over the last few years. Software developers are influencers. So Slack was first used by the engineering teams, later used by everybody. And arguably you could say the same thing actually happened with Apple. Apple was mainstreamed by developers adopting that platform. We get to 2013, boom again: Solomon Hykes, Docker, right? So Docker was, I mean, containers were not new, they were just super hard to use. People found it difficult technology; it was esoteric. It wasn't something that they could fully understand. Solomon did an incredible job of understanding how containers could fit into modern developer workflows. So if we think about immutable images, if we think about the ability to have everything required in the package where you are, it really tied into what people were trying to do with CI/CD, tied into microservices. And certainly that notion of developer usability, Docker nailed that, and I guess, from this conference at least, the rest is history. So I want to talk a little bit about scratching the itch, and particularly what has become, I call it, the developer aesthetic. So let's go into dark mode now. I've talked about developers laying out these foundations and frameworks for the mainstream; frankly, now my son, he's 14, he (murmurs) at me if I don't have dark mode on in an application. And it's this notion that developers have an aesthetic, and it does get adopted; I mean, it's quite often jokey. One of the things we've seen in the really successful platforms like GitHub, Docker, NPM: let's look at GitHub, and look at all that playfulness. I think that was really interesting. And that changes the world of work, right? So we've got the world of work, which can be buttoned up, which can be somewhat tight. I think both of those companies were really influential in showing that software development, which is a profession, is also something that can be, and is, fun. And I think about: how can we make it more fun? How can we develop better applications together? That takes me to, if we think about Docker talking about build, share and run, for me the key word is share, because development has to be a team sport. It needs to be sharing, it needs to be kind, and it needs to bring people together to do more effective work. Because that's what it's all about: doing effective work. If you think about Zoom, it's a proxy for collaboration in terms of its value. So we've got all of these airlines, and frankly, add up their share prices, add up their total value: it's currently less than Zoom's. So video conferencing has become so much of how we live now, on a consumer basis, but certainly from a business-to-business perspective. I want to talk about how we live now.
I want to think about what will come out of all of this traumatic, and it is incredibly traumatic, time. I'd like to say I'm very privileged: I can work from home. So thank you to all the frontline workers that are out there who are not in that position. But overall, what I'm really thinking about is that there are some things that will come out of this that will benefit us as a culture. Look at cities like Paris, Milan, London, New York, putting in new cycling infrastructure so that people can socially distance and travel outside, because they don't feel comfortable on public transport. Widening pavements, things we're normally told we can't do, all these cities have done literally overnight. This sort of change is exciting. And what comes out of that is, oh, there are some positive aspects to the current issues that we face. So there's a community conversation I've been working on around some of those ideas. Katie from HashiCorp and Carla from Container Solutions basically asked: look, what will the world look like in developer relations? Can we have developer relations without the air miles? Because developer advocates do too much travel, and it ends up, you know, burning them out of developer relations. People don't like to say no. They may have bosses that say, you know, oh, that conference went great, now we're going to roll it out worldwide to 47 cities. That stuff is terrible. It's terrible from a personal perspective, and it's really terrible from an environmental perspective. We need to travel less. Virtual events are crushing it. Microsoft just did Build, right? Normally that'd be just over 10,000 people; they had 245,000-plus registrations, 40,000 of them in the last day, right? Red Hat Summit, 80,000 people; IBM Think, 90,000 people; GitHub crushed it as well. This is a more inclusive way: people can dip in, and they can be from all around the world. I mentioned Nigeria and how fantastic it is. Very often Nigerian developers and advocates find it hard to get visas. Why should they be shut out of events? Events are going to start to become remote-first, because frankly, look at it, if you're putting up those kinds of numbers, and Microsoft was already doing great online events, but they absolutely nailed it, they're going to have to ask some serious questions about why everybody should get back on a plane again. So if you're going to do remote, you've got to be intentional about it. That's one thing I've learned that's exciting about GitLab. GitLab's culture is amazing. Everything is documented, everything is public, everything is transparent. They make that really clear, and if you look at their principles, everything: you can't have implicit collaboration models. Everything needs to be documented and explicit, so that anyone can work anywhere and still be part of the team. Remote-first is where we're at now. Coinbase, Shopify, even Barclays say they're not going to go back to having everybody in offices the way they used to. This is a fundamental shift. And I think it's got significant implications for all industries, but definitely for software development. Here's the thing: the last 20 years were about distributed computing, microservices, the cloud, and we've got pretty good at that. The next 20 years will be about distributed work. We can't have everybody living in San Francisco and London and Berlin. The talent is distributed, the talent is elsewhere. So how are we going to build tools?
Who is going to scratch that itch, to build the tools that make them more effective? Who's building the next generation of apps? You are. Thanks.
DockerCon 2020 Kickoff
>>From around the globe. It's the queue with digital coverage of DockerCon live 2020 brought to you by Docker and its ecosystem partners. >>Hello everyone. Welcome to Docker con 2020 I'm John furrier with the cube. I'm in our Palo Alto studios with our quarantine crew. We have a great lineup here for DockerCon con 2020 virtual event. Normally it was in person face to face. I'll be with you throughout the day from an amazing lineup of content over 50 different sessions, cube tracks, keynotes, and we've got two great co-hosts here with Docker, Jenny Marcio and Brett Fisher. We'll be with you all day, all day today, taking you through the program, helping you navigate the sessions. I'm so excited, Jenny. This is a virtual event. We talk about this. Can you believe it? We're, you know, may the internet gods be with us today and hope everyone's having an easy time getting in. Jenny, Brett, thank you for being here. Hey, >>Yeah. Hi everyone. Uh, so great to see everyone chatting and telling us where they're from. Welcome to the Docker community. We have a great day planned for you >>Guys. Great job. I'm getting this all together. I know how hard it is. These virtual events are hard to pull off. I'm blown away by the community at Docker. The amount of sessions that are coming in the sponsor support has been amazing. Just the overall excitement around the brand and the, and the opportunities given this tough times where we're in. Um, it's super exciting. Again, made the internet gods be with us throughout the day, but there's plenty of content. Uh, Brett's got an amazing all day marathon group of people coming in and chatting. Jenny, this has been an amazing journey and it's a great opportunity. Tell us about the virtual event. Why DockerCon virtual. Obviously everyone's cancelling their events, but this is special to you guys. Talk about Docker con virtual this year. >>Yeah. You know, the Docker community shows up at DockerCon every year and even though we didn't have the opportunity to do an in person event this year, we didn't want to lose the time that we all come together at DockerCon. The conversations, the amazing content and learning opportunities. So we decided back in December to make Docker con a virtual event. And of course when we did that, there was no quarantine. Um, we didn't expect, you know, I certainly didn't expect to be delivering it from my living room, but we were just, I mean we were completely blown away. There's nearly 70,000 people across the globe that have registered for Docker con today. And when you look at backer cons of past right live events, really and more learning are just the tip of the iceberg. And so thrilled to be able to deliver a more inclusive vocal event today. And we have so much planned. Uh, I think Brett, you want to tell us some of the things that you have planned? >>Well, I'm sure I'm going to forget something cause there's a lot going on. But, uh, we've obviously got interviews all day today on this channel with John the crew. Um, Jenny has put together an amazing set of all these speakers all day long in the sessions. And then you have a captain's on deck, which is essentially the YouTube live hangout where we just basically talk shop. Oh, it's all engineers, all day long, captains and special guests. And we're going to be in chat talking to you about answering your questions. Maybe we'll dig into some stuff based on the problems you're having or the questions you have. Maybe there'll be some random demos, but it's basically, uh, not scripted. 
It's an all day long unscripted event, so I'm sure it's going to be a lot of fun hanging out in there. >>Well guys, I want to just say it's been amazing how you structured this so everyone has a chance to ask questions, whether it's informal laid back in the captain's channel or in the sessions where the speakers will be there with their, with their presentations. But Jenny, I want to get your thoughts because we have a site out there that's structured a certain way for the folks watching. If you're on your desktop, there's a main stage hero. There's then tracks and Brett's running the captain's tracks. You can click on that link and jump into his session all day long. He's got an amazing set of line of sleet, leaning back, having a good time. And then each of the tracks, you can jump into those sessions. It's on a clock. It'll be available on demand. All that content is available if you're on your desktop, if you're on your mobile, it's the same thing. >>Look at the calendar, find the session that you want. If you're interested in it, you could watch it live and chat with the participants in real time or watch it on demand. So there's plenty of content to navigate through. We do have it on a clock and we'll be streaming sessions as they happen. So you're in the moment and that's a great time to chat in real time. But there's more, Jenny, you're getting more out of this event. We, you guys try to bring together the stimulation of community. How does the participants get more out of the the event besides just consuming some of the content all day today? >>Yeah. So first set up your profile, put your picture next to your chat handle and then chat. We have like, uh, John said we have various setups today to help you get the most out of your experience are breakout sessions. The content is prerecorded so you get quality content and the speakers and chat. So you can ask questions the whole time. Um, if you're looking for the hallway track, then definitely check out the captain's on deck channel. Uh, and then we have some great interviews all day on the queue so that up your profile, join the conversation and be kind, right. This is a community event. Code of conduct is linked on every page at the top and just have a great day. >>And Brett, you guys have an amazing lineup on the captain, so you have a great YouTube channel that you have your stream on. So the folks who were familiar with that can get that either on YouTube or on the site. The chat is integrated in, so you're set up, what do you got going on? Give us the highlights. What are you excited about throughout your day? Take us through your program on the captains. That's going to be probably pretty dynamic in the chat too. >>Yeah. Yeah. So, uh, I'm sure we're going to have less, uh, lots of, lots of stuff going on in chat. So no concerns there about, uh, having crickets in the, in the chat. But we're going to, uh, basically starting the day with two of my good Docker captain friends, uh, Nirmal Mehta and Laura taco. And we're going to basically start you out and at the end of this keynote, at the end of this hour, and we're going to get you going. And then you can maybe jump out and go to take some sessions. Maybe there's some cool stuff you want to check out in other sessions that are, you want to chat and talk with the, the instructors, the speakers there, and then you're going to come back to us, right? Or go over, check out the interview. 
So the idea is you're hopping back and forth, and throughout the day we're basically changing out every hour. >>We're not just changing out the, uh, the guests, basically, but we're also changing out the topics that we can cover, because different guests will have different expertise. We're going to have some special guests in from Microsoft talk about some of the cool stuff going on there. And basically it's captains all day long. And, uh, you know, if you've been on my YouTube Live show, you've watched that, you've seen a lot of the guests we have on there. I'm lucky to just hang out with all these really awesome people around the world, so it's going to be fun. >>Awesome. And the content, again, has been preserved. You guys had a great call for papers for the sessions. Jenny, this is good stuff. What are the things people can do to make it interesting? Obviously we're looking for suggestions. Feel free to chirp on Twitter about ideas that can be new. But you guys got some surprises. There's some selfies. What else? What's going on? Any secret, uh, surprises throughout the day? >>There are secret surprises throughout the day. You'll need to pay attention to the keynotes. Bret will have giveaways. I know our wonderful sponsors have giveaways planned as well in their sessions. Uh, hopefully, right, you feel conflicted about what you're going to attend. So do know that everything is recorded and will be available on demand afterwards, so you can catch anything that you miss. Most of them will be available right after they stream the initial time. >>All right, great stuff. So they've got the Docker selfie. So for the Docker selfies, the hashtag is just #DockerCon. If you feel like you want to add to the hashtag, no problem. Check out the sessions. You can pop in and out of the captains track; that's kind of where the cool kids are going to be hanging out with Bret, and then all the knowledge and learning. Don't miss the keynote. The keynote should be solid. We've got James Governor from RedMonk delivering a keynote. I'll be interviewing him live after his keynote. So stay with us, and again, check out the interactive calendar. All you gotta do is look at the calendar and click on the session you want. You'll jump right in. Hop around, give us feedback. We're doing our best. Um, Bret, any final thoughts on what you want to share to the community around, uh, what you got going on in the virtual event? Just random thoughts. >>Yeah. Uh, so sorry we can't all be together in the same physical place. But the coolest thing about us being online is that we actually get to involve everyone. So as long as you have a computer and internet, you can actually attend DockerCon, even if you've never been to one before. So we're trying to recreate that experience online. Um, like Jenny said, the code of conduct is important. So, you know, we're all in this together with the chat, so try to be nice in there. These are all real humans that, uh, have feelings just like me. So let's try to keep it cool, and, uh, over in the captains channel we'll be taking your questions and maybe playing some music, playing some games, giving away some free stuff, um, while you're, you know, in between sessions learning. Oh yeah. >>And I gotta say, props to your rig. You've got an amazing setup there, Bret. I love the show you do. It's really badass and kick-ass. So great stuff. Jenny, the sponsor and ecosystem response to this event has been phenomenal. The attendance: 67,000.
We're seeing a surge of people hitting the site now. So, um, if you're not getting in, just, you know, just wait; we're going to crank through the queue. But the sponsors and the ecosystem really delivered on the content side and also the support. Do you want to share a few shout-outs on the sponsors who really kind of helped make this happen? >>Yeah, so definitely make sure you check out the sponsor pages, and when you go, each page has the actual content that they will be delivering. So they are delivering great content to you, um, so you can learn. And a huge thank you to our platinum and gold sponsors. >>Awesome. Well, I got to say, I'm super impressed. I'm looking forward to the Microsoft and Amazon sessions, which are going to be good. And there's a couple of great customer sessions there. And you know, I tweeted this out last night, and I'd love to get you guys' reaction to this, because, you know, there's been a lot of talk around the COVID crisis that we're in, but there's also a positive upshot to this: a Cambrian explosion of developers that are going to be building new apps. And I said, you know, apps aren't going to just change the world, they're gonna save the world. So a lot of the theme here is the impact that developers are having right now, in the current situation. You know, if we get the goodness of Compose and all the things going on in Docker and the relationships, there's real impact happening with the developer community. And it's pretty evident in the program and some of the talks and some of the examples how containers and microservices are certainly changing the world and helping save the world. Your thoughts? >>Yeah. So I think we have, like you said, a number of sessions and interviews in the program today that really dive into that, even particularly around COVID. Um, Clemente is sharing his company's experience, uh, of being able to continue operations in Italy when they were completely shut down at the beginning of March. We also have on theCUBE channel several interviews from the National Institutes of Health and precision cancer medicine. At the end of the day, you just can really see how containerization and, uh, developers are moving industry, and really humanity, forward because of what they're able to build and create, uh, with advances in technology. Yeah. >>And the first responders these days are developers. Bret, Compose is getting a lot of traction on Twitter. I can see some buzz already building up. There's huge traction with Compose, just the ease of use, and almost a call to arms for integrating into all the system language libraries. I mean, what's going on with Compose? What do the captains say about this? I mean, it seems to be really tracking in terms of demand and interest. >>Yeah, it's, it's, uh, I think we're over 700,000 Compose files on GitHub. Um, so it's definitely beyond just the standard Docker run commands. It's definitely the next tool that people use to run containers. Um, and that's not even counting everything; that's just counting the files that are named docker-compose.yml. So I'm sure a lot of you out there have created a YAML file to manage your local containers, or even on a server, with Docker Compose. And the nice thing is, is Docker is doubling down on that. So we've gotten some news recently, um, from them about what they want to do with opening the spec up, getting more companies involved, because Compose has already gathered so much interest from the community.
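For readers who haven't written one, the Compose files being counted here are just declarative YAML. A minimal, illustrative docker-compose.yml follows; the service names and images are placeholders, not anything from the session. Running `docker-compose up` in the same directory brings both services up together.

```yaml
# Minimal docker-compose.yml: two services, with a startup dependency.
# Image names and ports are illustrative placeholders.
version: "3.8"
services:
  web:
    image: nginx:alpine   # any web-serving image works here
    ports:
      - "8080:80"         # host:container port mapping
    depends_on:
      - api
  api:
    image: my-api:latest  # placeholder for your own application image
    environment:
      - LOG_LEVEL=info
```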
You know, AWS has importers, there's Kubernetes importers for it. So there's more stuff coming, and we might just see something here in a few minutes. >>Well, let's get into the keynote, guys. Jump into the keynote. If you miss anything, come back to the stream, check out the sessions, check out the calendar. Let's go, let's have a great time. Have some fun. Thanks, and enjoy the rest of the day. We'll see you soon.
Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives
>> Sue: Hello everybody. Thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives. My name is Sue LeClaire, Director of Marketing at Vertica, and I'll be your host for this webinar. Joining me is Tom Wall, a member of the Vertica engineering team. But before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click submit. There will be a Q and A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions that we don't get to, we'll do our best to answer them offline. Alternatively, you can visit the Vertica forums to post your questions after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand later this week. We'll send you a notification as soon as it's ready. So let's get started. Tom, over to you. >> Tom: Hello everyone and thanks for joining us today for this talk. My name is Tom Wall and I am the leader of Vertica's ecosystem engineering team. We are the team that focuses on building out all the developer tools and third party integrations that enable the software ecosystem that surrounds Vertica to thrive. So today, we'll be talking about some of our new open source initiatives and how those can be really effective for you and make things easier for you to build and integrate Vertica with the rest of your technology stack. We've got several new libraries, integration projects and examples, all open source, to share, all being built out in the open on our GitHub page. Whether you use these open source projects or not, this is a very exciting new effort that will really help to grow the developer community and enable lots of exciting new use cases. So, every developer out there has probably had to deal with a problem like this. You have some business requirements, to maybe build some new Vertica-powered application. Maybe you have to build some new system to visualize some data that's managed by Vertica. In various circumstances, lots of choices might be made for you that constrain your approach to solving a particular problem. These requirements can come from all different places. Maybe your solution has to work with a specific visualization tool, or web framework, because the business has already invested in the licensing and the tooling to use it. Maybe it has to be implemented in a specific programming language, since that's what all the developers on the team know how to write code with. While Vertica has many different integrations with lots of different programming languages and systems, there's a lot of them out there, and we don't have integrations for all of them. So how do you make ends meet when you don't have all the tools you need? You have to get creative, using tools like PyODBC, for example, to bridge between programming languages and frameworks to solve the problems you need to solve. Most languages do have an ODBC-based database interface. ODBC is our C library, and most programming languages know how to call C code, somehow.
So that's doable, but it often requires lots of configuration and troubleshooting to make all those moving parts work well together. So that's enough to get the job done, but native integrations are usually a lot smoother and easier. So rather than, for example, in Python trying to fight with PyODBC to configure things, get Unicode working, and compile all the different pieces the right way to make it all work smoothly, it would be much better if you could just pip install a library and get to work. And with Vertica-Python, a new Python client library, you can actually do that. So that story, I assume, probably sounds pretty familiar to you. Sounds probably familiar to a lot of the audience here, because we're all using Vertica. And our challenge, as Big Data practitioners, is to make sense of all this stuff, despite those technical and non-technical hurdles. Vertica powers lots of different businesses and use cases across all kinds of different industries and verticals. While there's a lot different about us, we're all here together right now for this talk because we do have some things in common. We're all using Vertica, and we're probably also using Vertica with other systems and tools too, because it's important to use the right tool for the right job. That's a founding principle of Vertica and it's true today too. In this constantly changing technology landscape, we need lots of good tools and well established patterns, approaches, and advice on how to combine them so that we can be successful doing our jobs. Luckily for us, Vertica has been designed to be easy to build with and extended in this fashion. Databases as a whole have had this goal from the very beginning. They solve the hard problems of managing data so that you don't have to worry about it. Instead of worrying about those hard problems, you can focus on what matters most to you and your domain. So implementing that business logic, solving that problem, without having to worry about all of those intense details about what it takes to manage a database at scale. With the declarative syntax of SQL, you tell Vertica what the answer is that you want. You don't tell Vertica how to get it. Vertica will figure out the right way to do it for you, so that you don't have to worry about it. So this SQL abstraction is very nice because it's a well defined boundary, where lots of developers know SQL, and it allows you to express what you need without having to worry about those details. So we can be the experts in data management while you worry about your problems. This goes beyond, though, what's accessible through SQL to Vertica. We've got well defined extension and integration points across the product that allow you to customize this experience even further. So if you want to do things like write your own SQL functions, or extend the database software with UDXs, you can do so. If you have a custom data format that might be a proprietary format, or some source system that Vertica doesn't natively support, we have extension points that allow you to use those. To make it very easy to do massively parallel data movement, loading into Vertica but also exporting Vertica data to send to other systems. And with these new features in time, we also could do the same kinds of things with Machine Learning models, importing and exporting to tools like TensorFlow.
And it's these integration points that have enabled Vertica to build out this open architecture and a rich ecosystem of tools, both open source and closed source, of different varieties that solve all different problems that are common in this big data processing world. Whether it's open source streaming systems like Kafka or Spark, or more traditional ETL tools on the loading side, but also BI tools and visualizers and things like that to view and use the data that you keep in your database, on the right side. And then of course, Vertica needs to be flexible enough to be able to run anywhere. So you can really take Vertica and use it the way you want it to solve the problems that you need to solve. So Vertica has always employed open standards, and integrated with all kinds of different open source systems. What we're really excited to talk about now is that we are taking our new integration projects and making those open source too. In particular, we've got two new open source client libraries that allow you to build Vertica applications for Python and Go. These libraries act as a foundation for all kinds of interesting applications and tools. Upon those libraries, we've also built some integrations ourselves. And we're using these new libraries to power some new integrations with some third party products. Finally, we've got lots of new examples and reference implementations out on our GitHub page that can show you how to combine all these moving parts in exciting ways to solve new problems. And the code for all these things is available now on our GitHub page. And so you can use it however you like, and even help us make it better too. So the first such project that we have is called Vertica-Python. Vertica-Python began at our customer, Uber. And then in late 2018, we collaborated with them and we took it over and made Vertica-Python the first official open source client for Vertica. You can use this to build your own Python applications, or you can use it via tools that were written in Python. Python has grown a lot in recent years, and it's a very common language to solve lots of different problems and use cases in the Big Data space, from things like DevOps automation and Data Science or Machine Learning, or just homegrown applications. We use Python a lot internally for our own QA testing and automation needs. And with the Python 2 End Of Life that happened at the end of 2019, it was important that we had a robust Python solution to help migrate our internal stuff off of Python 2. And also to provide a nice migration path for all of you, our users, that might be worried about the same problems with your own Python code. So Vertica-Python is used already for lots of different tools, including Vertica's admintools, now starting with 9.3.1. It was also used by DataDog to build a Vertica-DataDog integration that allows you to monitor your Vertica infrastructure within DataDog. So here's a little example of how you might use the Python client to do some work. So here we open a connection, we run a query to find out what node we've connected to, and then we do a little data load by running a COPY statement. And this is designed to have a familiar look and feel if you've ever used a Python database client before. So we implement the DB API 2.0 standard and it feels like a Python package. So that includes things like, it's part of the centralized package manager, so you can just pip install this right now and go start using it. We also have our client for Golang.
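The Python slide example Tom describes here isn't captured in the transcript. A minimal sketch of what that flow looks like with the public vertica-python API follows; the host, credentials, and table name are placeholder assumptions, not values from the talk.

```python
# Sketch: connect, check which node we landed on, then load a little data
# with a COPY statement. Connection details and table names are placeholders.
import vertica_python

conn_info = {
    'host': 'vertica.example.com',
    'port': 5433,
    'user': 'dbadmin',
    'password': 'secret',
    'database': 'VMart',
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()

    # Which initiator node did we connect to?
    cur.execute("SELECT node_name FROM current_session")
    print(cur.fetchone())

    # A small data load: stream rows from the client via COPY.
    cur.copy("COPY sample_table FROM STDIN DELIMITER ','",
             "1,foo\n2,bar\n")
    conn.commit()
```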
So this is called vertica-sql-go. And this is a very similar story, just in a different context, or a different programming language. So vertica-sql-go began as a collaboration with the Micro Focus SecOps group, who builds Micro Focus's security products, some of which use Vertica internally to provide some of those analytics. So you can use this to build your own apps in the Go programming language, but you can also use it via tools that are written in Go. So most notably, we have our Grafana integration, which we'll talk a little bit more about later, that leverages this new client to provide Grafana visualizations for Vertica data. And Go is another programming language rising in popularity, 'cause it offers an interesting balance of different programming design trade-offs. So it's got good performance, good concurrency, and memory safety. And we liked all those things, and we're using it to power some internal monitoring stuff of our own. And here's an example of the code you can write with this client. So this is Go code that does a similar thing. It opens a connection, it runs a little test query, and then it iterates over those rows, processing them using Go data types. You get that native look and feel just like you do in Python, except this time in the Go language. And you can go get it the way you usually package things with Go, by running that command there to acquire this package. And it's important to note here, for these projects, we're really doing open source development. We're not just putting code out on our GitHub page. So if you go out there and look, you can see that you can ask questions, you can report bugs, you can submit pull requests yourselves, and you can collaborate directly with our engineering team and the other Vertica users out on our GitHub page. Because it's out on our GitHub page, it allows us to be a little bit faster with the way we ship and deliver functionality compared to the core Vertica release cycle. So in 2019, for example, as we were building features to prepare for the Python 3 migration, we shipped 11 different releases with 40 customer reported issues filed on GitHub. That was done over 78 different pull requests and with lots of community engagement as we did so. So lots of people are using this already, as our GitHub badge shows, with about 5,000 downloads a day of people using it in their software. And again, we want to make this easy, not just to use, but also to contribute to, understand, and collaborate with us on. So all these projects are built using the Apache 2.0 license. The master branch is always available and stable with the latest and greatest functionality. And you can always build it and test it the way we do, so that it's easy for you to understand how it works and to submit contributions or bug fixes or even features. It uses automated testing, both locally and with pull requests. And for vertica-python, it's fully automated with Travis CI. So we're really excited about doing this and we're really excited about where it can go in the future. 'Cause this offers some exciting opportunities for us to collaborate with you more directly than we have ever before. You can contribute improvements and help us guide the direction of these projects, but you can also work with each other to share knowledge and implementation details and various best practices. And so maybe you think, "Well, I don't use Python, I don't use Go, so maybe it doesn't matter to me." But I would argue it really does matter.
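As with the Python example, the Go snippet is only described on the slide. A sketch using vertica-sql-go's database/sql driver registration might look like the following; the DSN and query values are illustrative placeholders.

```go
// Sketch: open a connection, run a small test query, and iterate the rows
// with native Go types. The DSN values are placeholders.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/vertica/vertica-sql-go" // registers the "vertica" driver
)

func main() {
	db, err := sql.Open("vertica", "vertica://dbadmin:secret@vertica.example.com:5433/VMart")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query("SELECT node_name, user_name FROM current_session")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var node, user string
		if err := rows.Scan(&node, &user); err != nil {
			log.Fatal(err)
		}
		fmt.Println(node, user)
	}
}
```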
Because even if you don't use these tools and languages, there's lots of amazing Vertica developers out there who do. And these clients do act as low level building blocks for all kinds of different interesting tools, both in these Python and Go worlds, but also well beyond that. Because these implementations and examples really generalize to lots of different use cases. And we're going to do a deeper dive now into some of these to understand exactly how that's the case and what you can do with these things. So let's take a deeper look at some of the details of what it takes to build one of these open source client libraries. So these database client interfaces, what are they exactly? Well, we all know SQL, but if you look at what SQL specifies, it really only talks about how to manipulate the data within the database. So once you're connected and in, you can run commands with SQL. But these database client interfaces address the rest of those needs. So what does the programmer need to do to actually process those SQL queries? So these interfaces are specific to a particular language or a technology stack. But the use cases and the architectures and design patterns are largely the same between different languages. They all have a need to do some networking, and connect and authenticate and create a session. They all need to be able to run queries and load some data and deal with problems and errors. And then they also have a lot of metadata and type mapping, because you want to use these clients the way you use those programming languages. Which might be different than the way that Vertica's data types and Vertica's semantics work. So some of these client interfaces are truly standards. And they are robust enough in terms of what they design and call for to support a truly pluggable driver model. Where you might write an application that codes directly against the standard interface, and you can then plug in a different database driver, like a JDBC driver, to have that application work with any database that has a JDBC driver. So most of these interfaces aren't as robust as JDBC or ODBC, but that's okay. 'Cause as good as a standard is, every database is unique for a reason. And so you can't really expose all of those unique properties of a database through these standard interfaces. So Vertica's unique in that it can scale to the petabytes and beyond. And you can run it anywhere in any environment, whether it's on-prem or on clouds. So surely there's something about Vertica that's unique, and we want to be able to take advantage of that fact in our solutions. So even though these standards might not cover everything, there's often a need, and common patterns arise, to solve these problems in similar ways. When there isn't enough of a standard to define those common semantics that different databases might have in common, what you often see is tools will invent plugin layers or glue code to compensate, by defining application-wide standards to cover some of these same semantics. Later on, we'll get into some of those details and show off what exactly that means. So if you connect to a Vertica database, what's actually happening under the covers? You have an application, you have a need to run some queries, so what does that actually look like? Well, probably as you would imagine, your application is going to invoke some API calls in some client library or tool.
This library takes those API calls and implements them, usually by issuing some networking protocol operations, communicating over the network to ask Vertica to do the heavy lifting required for that particular API call. And so these APIs usually do the same kinds of things, although some of the details might differ between these different interfaces. But you do things like establish a connection, run a query, iterate over your rows, manage your transactions, that sort of thing. Here's an example from vertica-python, which just goes into some of the details of what actually happens during the Connect API call. And you can see all these details in our GitHub implementation of this. There's actually a lot of moving parts in what happens during a connection. So let's walk through some of that and see what actually goes on. I might have my API call like this, where I say Connect and I give it a DNS name, which is my entire cluster. And I give it my connection details, my username and password. And I tell the Python client to get me a session, give me a connection so I can start doing some work. Well, in order to implement this, what needs to happen? First, we need to do some TCP networking to establish our connection. So we need to understand what the request is, where you're going to connect to and why, by parsing the connection string. And Vertica being a distributed system, we want to provide high availability, so we might need to do some DNS lookups to resolve that DNS name, which might be an entire cluster and not just a single machine. So that you don't have to change your connection string every time you add or remove nodes to the database. So we do some high availability and DNS lookup stuff. And then once we connect, we might do load balancing too, to balance the connections across the different initiator nodes in the cluster, or in a subcluster, as needed. Once we land on the node we want to be at, we might do some TLS to secure our connections. And Vertica supports the industry standard TLS protocols, so this looks pretty familiar for everyone who's used TLS anywhere before. So you're going to do a certificate exchange, and the client might send the server a certificate too, and then you're going to verify that the server is who it says it is, so that you can know that you trust it. Once you've established that connection and secured it, then you can start actually beginning to request a session within Vertica. So you're going to send over your user information like, "Here's my username, here's the database I want to connect to." You might send some information about your application, like a session label, so that you can differentiate on the database, with monitoring queries, what the different connections are and what their purpose is. And then you might also send over some session settings, to do things like autocommit, to change the state of your session for the duration of this connection. So that you don't have to remember to do that with every query that you have. Once you've asked Vertica for a session, before Vertica will give you one, it has to authenticate you. And Vertica has lots of different authentication mechanisms. So there's a negotiation that happens there to decide how to authenticate you. Vertica decides based on who you are, where you're coming from on the network. And then you'll do an auth-specific exchange, depending on what the auth mechanism calls for, until you are authenticated.
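Several of the connection-time behaviors walked through above (load balancing, TLS, session labels, autocommit) surface as options in vertica-python's connect call. A hedged sketch, with all values as placeholders:

```python
# How the behaviors described above map to vertica-python connection
# options. Every value here is a placeholder.
import vertica_python

conn_info = {
    'host': 'vertica.example.com',
    'port': 5433,
    'user': 'dbadmin',
    'password': 'secret',
    'database': 'VMart',
    # Let the server redirect the connection per its load-balancing policy.
    'connection_load_balance': True,
    # Fallback nodes to try if the primary host is unreachable.
    'backup_server_node': ['vertica2.example.com', ('10.20.82.77', 5433)],
    # Secure the wire with TLS (an ssl.SSLContext can be passed instead
    # of True to enable certificate verification).
    'ssl': True,
    # Tag the session so it is identifiable in monitoring queries.
    'session_label': 'nightly-etl',
    # Set session state once instead of with every query.
    'autocommit': True,
}

with vertica_python.connect(**conn_info) as conn:
    pass  # session established, authenticated, and labeled
```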
Finally, Vertica trusts you and lets you in, so you're going to establish a session in Vertica, and you might do some note keeping on the client side just to know what happened. So you might log some information, you might record what the version of the database is, you might do some protocol feature negotiation. So if you connect to a version of the database that doesn't support all these protocols, you might decide to turn some functionality off, and that sort of thing. But finally, after all that, you can return from this API call, and then your connection is good to go. So that connection is just one example of many different APIs. And we're excited here because with vertica-python we're really opening up the Vertica client wire protocol for the first time. And so if you're a low level Vertica developer and you might have used Postgres before, you might know that some of Vertica's client protocol is derived from Postgres. But they do differ in many significant ways. And this is the first time we've ever revealed those details about how it works and why. So not all Postgres protocol features work with Vertica, because Vertica doesn't support all the features that Postgres does. Postgres, for example, has a large object interface that allows you to stream very wide data values over. Whereas Vertica doesn't really have very wide data values; you have VARCHARs, you have LONG VARCHARs, but that's about as wide as you can get. Similarly, the Vertica protocol supports lots of features not present in Postgres. So load balancing, for example, which we just went through an example of. Postgres is a single node system; it doesn't really make sense for Postgres to have load balancing. But load balancing is really important for Vertica because it is a distributed system. Vertica-python serves as an open reference implementation of this protocol, with all kinds of new details and extension points that we haven't revealed before. So if you look at these boxes below, all these different things are new protocol features that we've implemented since August 2019, out in the open on our GitHub page for Python. Now, the vertica-sql-go implementation of these things is still in progress, but the core protocols are there for basic query operations. There's more to do there, but we'll get there soon. So this is really cool, 'cause not only do you now have a Python client implementation, and a Go client implementation of this, but you can use this protocol reference to do lots of other things, too. The obvious thing you could do is build more clients for other languages. So if you have a need for a client in some other language that Vertica doesn't support yet, now you have everything available to solve that problem, and to go about doing so if you need to. But beyond clients, it's also used for other things. So you might use it for mocking and testing things. So rather than connecting to a real Vertica database, you can simulate some of that. You can also use it to do things like query routing and proxies. So Uber, for example: the blog in this link tells a great story of how they route different queries to different Vertica clusters by intercepting these protocol messages, parsing the queries in them, and deciding which clusters to send them to. So a lot of these things are just ideas today, but now that you have the source code, there's no limit in sight to what you can do with this thing.
And so we're very interested in hearing your ideas and requests, and we're happy to offer advice and collaborate on building some of these things together. So let's take a look now at some of the things we've already built that do these things. So here's a picture of Vertica's Grafana connector, with some data powered from an example that we have in this blog link here. So this has an internet of things use case to it, where we have lots of different sensors recording flight data, feeding into Kafka, which then gets loaded into Vertica. And then finally, it gets visualized nicely here with Grafana. And Grafana's visualizations make it really easy to analyze the data with your eyes and see when something happens. So in these highlighted sections here, you notice a drop in some of the activity; that's probably a problem worth looking into. It might be a lot harder to see that just by staring at a large table yourself. So how does a picture like that get generated with a tool like Grafana? Well, Grafana specializes in visualizing time series data. And time can be really tricky for computers to do correctly. You've got time zones, daylight savings, leap seconds, negative infinity timestamps (please don't ever use those), in every system. If it wasn't hard enough just with those problems, what makes it harder is that every system does it slightly differently. So if you're querying some time data, how do we deal with these semantic differences as we cross these domain boundaries, from Vertica to Grafana's back end architecture, which is implemented in Go, and its front end, which is implemented with JavaScript? Well, you read this from the bottom up in terms of the processing. First, you select the timestamp, and Vertica's timestamp has to be converted to a Go time object. And we have to reconcile the differences that there might be as we translate it. So Go time has a different time zone specifier format, and it also supports nanosecond precision, while Vertica only supports microsecond precision. So that's not too big of a deal when you're querying data, because you just see some extra zeros, not fractional seconds. But on the way in, if we're loading data, we have to find a way to resolve those things. Once it's in the Go process, it has to be converted further to render in the JavaScript UI. So there, the Go time object has to be converted to a JavaScript AngularJS Date object. And there too, we have to reconcile those differences. So a lot of these differences might just be presentation, and not so much the actual data changing, but you might want to choose to render the date into a more human readable format, like we've done in this example here. Here's another picture. This is another picture of some time series data, and this one shows you can actually write your own queries with Grafana to provide answers. So if you look closely here, you can see there's actually some functions that might not look too familiar to you if you know Vertica's functions. Vertica doesn't have a dollar-underscore-underscore time function or a time filter function. So what's actually happening there? How does this actually provide an answer if it's not really real Vertica syntax? Well, it's not sufficient to just know how to manipulate data; it's also really important that you know how to operate with metadata. So, information about how the data works in the data source, Vertica in this case.
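One concrete flavor of the precision mismatch described above: Go's time.Time carries nanoseconds while Vertica timestamps carry microseconds, so a load path has to drop the extra digits somewhere. A simplified standard-library illustration, not the connector's actual code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's time.Time carries nanosecond precision...
	t := time.Date(2020, 3, 1, 12, 30, 45, 123456789, time.UTC)

	// ...but Vertica only stores microseconds, so one way to reconcile
	// before loading is to truncate the extra precision.
	fmt.Println(t)                            // ...12:30:45.123456789
	fmt.Println(t.Truncate(time.Microsecond)) // ...12:30:45.123456
}
```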
So Grafana needs to know how time works in detail for each data source, beyond doing that basic I/O that we just saw in the previous example. So it needs to know: how do you connect to the data source to get some time data? How do you know what time data types and functions there are and how they behave? How do you generate a query that references a time literal? And finally, once you've figured out how to do all that, how do you find the time in the database? How do you know which tables have time columns that might be worth rendering in this kind of UI? So Go's database standard doesn't actually really offer many metadata interfaces. Nevertheless, Grafana needs to know those answers. And so it has its own plugin layer that provides a standardizing layer, whereby every data source can implement hints and metadata customization needed to have an extensible data source back end. So we have another open source project, the Vertica-Grafana data source, which is a plugin that uses Grafana's extension points, with JavaScript in the front end plugins and also with Go in the back end plugins, to provide Vertica connectivity inside Grafana. So the way this works is that the plugin framework defines those standardizing functions, like time and time filter, and it's our plugin that rewrites them in terms of Vertica syntax. So in this example, time gets rewritten to a Vertica cast, and time filter becomes a BETWEEN predicate. So that's one example of how you can use Grafana, but also how you might build any arbitrary visualization tool that works with data in Vertica. So let's now look at some other examples and reference architectures that we have out on our GitHub page. For some advanced integrations, there's clearly a need to go beyond these standards. So SQL and these surrounding standards, like JDBC and ODBC, were really critical in the early days of Vertica, because they really enabled a lot of generic database tools. And those will always continue to play a really important role, but the Big Data technology space moves a lot faster than these old database standards can keep up with. So there's all kinds of new advanced analytics and query pushdown logic that were never possible 10 or 20 years ago that Vertica can do natively. There's also all kinds of data-oriented application workflows doing things like streaming data, or parallel loading, or Machine Learning. And all of these things we need to build software with, but we don't really have standards to go by. So what do we do there? Well, open source implementations make for easier integrations and applications all over the place. So even if you're not using Grafana, for example, other tools have similar challenges that you need to overcome. And it helps to have an example there to show you how to do it. Take Machine Learning, for example. There have been many excellent Machine Learning tools that have arisen over the years to make data science and the task of Machine Learning a lot easier. And a lot of those have basic database connectivity, but they generally only treat the database as a source of data. So they do lots of data I/O to extract data from a database like Vertica for processing in some other engine. We all know that's not the most efficient way to do it. It's much better if you can leverage Vertica's scale and bring the processing to the data. So a lot of these tools don't take full advantage of Vertica, because there's not really a uniform way to go do so with these standards.
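To picture the macro rewriting just described, the transformation is roughly the following. The table, column, and timestamp values are illustrative, and the exact SQL the plugin emits may differ.

```sql
-- What you type in Grafana's query editor (illustrative):
SELECT $__time(recorded_at), avg(altitude) AS altitude
FROM flight_sensor_data
WHERE $__timeFilter(recorded_at)
GROUP BY 1;

-- Roughly what the plugin sends to Vertica after rewriting:
SELECT recorded_at::TIMESTAMP AS time, avg(altitude) AS altitude
FROM flight_sensor_data
WHERE recorded_at BETWEEN '2020-03-01 00:00:00' AND '2020-03-02 00:00:00'
GROUP BY 1;
```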
So instead, we have a project called vertica-ml-python. And this serves as a reference architecture of how you can do scalable Machine Learning with Vertica. So this project establishes a familiar Machine Learning workflow that scales with Vertica. So it feels similar to a scikit-learn project, except all the processing and aggregation and heavy lifting and data processing happens in Vertica. So this makes for a much more lightweight, scalable approach than you might otherwise be used to. So with vertica-ml-python, you can probably use this yourself. But you could also see how it works. So if it doesn't meet all your needs, you could still see the code and customize it to build your own approach. We've also got lots of examples of our UDX framework. And so this is an older GitHub project. We've actually had this for a couple of years, but it is really useful and important, so I wanted to plug it here. With our User Defined eXtensions framework, or UDXs, this allows you to extend the operators that Vertica executes when it does a database load or a database query. So with UDXs, you can write your own domain logic in C++, Java, Python or R. And you can call them within the context of a SQL query. And Vertica brings your logic to that data, and makes it fast and scalable and fault tolerant and correct for you. So you don't have to worry about all those hard problems. So our UDX examples demonstrate how you can use our SDK to solve interesting problems. And some of these examples might be complete, totally usable packages or libraries. So for example, we have a curl source that allows you to extract data from any curl-able endpoint and load it into Vertica. We've got things like an ODBC connector that allows you to access data in an external database via an ODBC driver within the context of a Vertica query, all kinds of parsers and string processors and things like that. We also have more exciting and interesting things, where you might not really think of Vertica being able to do that, like a heat map generator, which takes some XY coordinates and renders them on top of an image to show you the hotspots in it. So the image on the right was actually generated from one of our intern gaming sessions a few years back. So all these things are great examples that show you not just how you can solve problems, but also how you can use this SDK to solve neat things that maybe no one else has to solve, or maybe that are unique to your business and your needs. Another exciting benefit is with testing. So the test automation strategy that we have in vertica-python and these clients really generalizes well beyond the needs of a database client. Anyone that's ever built a Vertica integration or an application probably has a need to write some integration tests. And that could be hard to do with all the moving parts in a big data solution. But with our code being open source, you can see in vertica-python, in particular, how we've structured our tests to facilitate smooth testing that's fast, deterministic and easy to use. So we've automated the download process, and the installation and deployment process, of a Vertica Community Edition. And with a single click, you can run through the tests locally and as part of the PR workflow via Travis CI. We also do this for multiple different Python environments. So for all Python versions from 2.7 up to 3.8, for different Python interpreters, and for different Linux distros, we're running through all of them very quickly with ease, thanks to all this automation.
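Circling back to the UDX examples mentioned a moment ago: on the SQL side, the usage pattern for something like the curl source looks roughly like this. The library path and factory name follow the public UDx-Examples repository, but treat them as assumptions.

```sql
-- Register a compiled UDx library with the database (path is a placeholder).
CREATE LIBRARY curllib AS '/home/dbadmin/cURLLib.so';

-- Expose the library's source factory as a SQL-callable source.
CREATE SOURCE curl AS LANGUAGE 'C++' NAME 'CurlSourceFactory' LIBRARY curllib;

-- Use it: COPY pulls data from any curl-able endpoint into a table.
COPY flight_sensor_data SOURCE curl(url='http://example.com/data.csv');
```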
So today, you can see how we do it in vertica-python; in the future, we might want to spin that out into its own stand-alone testbed starter project, so that if you're starting any new Vertica integration, this might be a good starting point for you to get going quickly. So that brings us to some of the future work we want to do here in the open source space. Well, there's a lot of it. So in terms of the client stuff, for Python, we are marching towards our 1.0 release, which is when we aim to be protocol complete, to support all of Vertica's unique protocols, including COPY LOCAL and some new protocols invented to support complex types, which is our new feature in Vertica 10. We have some cursor enhancements to do things like better streaming and improved performance. Beyond that, we want to take it where you want to bring it. So send us your requests. On the Go client front, it's just about a year behind Python in terms of its protocol implementation, but the basic operations are there. We still have more work to do to implement things like load balancing, some of the advanced auths and other things. But there too, we want to work with you, and we want to focus on what's important to you, so that we can continue to grow and be more useful and more powerful over time. Finally, there's this question of, "Well, what about beyond database clients? What else might we want to do with open source?" If you're building a very deep or a robust Vertica integration, you probably need to do a lot more exciting things than just run SQL queries and process the answers. Especially if you're an OEM, or you're a vendor that resells Vertica packaged as a black box piece of a larger solution, you might have to manage the whole operational lifecycle of Vertica. There are even fewer standards for doing all these different things compared to the SQL clients. So we started with the SQL clients, 'cause that's a well established pattern and there's lots of downstream work that it can enable. But there's also clearly a need for lots of other open source protocols, architectures and examples to show you how to do these things and to have real standards. So we talked a little bit about how you could do UDXs or testing or Machine Learning, but there's all sorts of other use cases too. That's why we're excited to announce here our awesome-vertica list, which is a new collection of open source resources available on our GitHub page. So if you haven't heard of this awesome manifesto before, I highly recommend you check out this GitHub page on the right. We're not unique here; there's lots of awesome lists for all kinds of different tools and systems out there. And it's a great way to establish a community and share different resources, whether they're open source projects, blogs, examples, references, community resources, and all that. And this list is an open source project itself. So it's an open source wiki, and you can contribute to it by submitting a PR yourself. So we've seeded it with some of our favorite tools and projects out there, but there's plenty more out there, and we hope to see it grow over time. So definitely check this out and help us make it better. So with that, I'm going to wrap up. I wanted to thank you all. Special thanks to Siting Ren and Roger Huebner, who are the project leads for the Python and Go clients respectively. And also, thanks to all the customers out there who've already been contributing stuff.
This has already been going on for a long time and we hope to keep it going and keep it growing with your help. So if you want to talk to us, you can find us at this email address here. But of course, you can also find us on the Vertica forums, or you could talk to us on GitHub too. And there you can find links to all the different projects I talked about today. And so with that, I think we're going to wrap up and now we're going to hand it off for some Q&A.
Dee Mooney, Executive Director, Micron Foundation | Micron Insight'18
>> Live from San Francisco, it's theCUBE, covering Micron Insight 2018. Brought to you by Micron. >> Welcome back to San Francisco Bay everybody. You're watching theCUBE, the leader in live tech coverage. We're covering Micron Insight 2018. It's just wrapping up behind us. It's been a day of thought-leading content around AI: AI for good, how it's affecting the human condition and healthcare, and the future of AI. I'm Dave Vellante, he's Peter Burris, and that's the Golden Gate Bridge over there. You used to live right up that hill over there. >> I did. >> Dee Mooney is here. >> Until they kicked me out. >> Dee Mooney is here. She's the Executive Director of the Micron Foundation. Dee, thanks so much for taking time out of your schedule and coming on theCUBE. >> You bet, I'm very pleased to be here with you today. >> So, you guys had some hard news today. We heard about the 100 million dollar fund that you're launching, but you also had some news around the Foundation. >> That's right. >> The grant, you announced the winners of the grant. Tell us about that. >> That's right. So, it was a great opportunity for Micron to showcase its goodness, and what a great platform for us to be able to launch the Advancing Curiosity grant. It is all around really focusing on that, on advancing curiosity, in the hopes that we can think about how AI might help for good, whether that's in business and health or life, and it's really a great platform for us to be able to be a part of today. >> So, what are the specifics? It was a million dollar grant? >> So, it was a million dollar fund, and today we announced our first recipients. It was to the Berkeley College of Engineering, specifically their BAIR lab, the Berkeley Artificial Intelligence Research lab; then also the Stanford PHIND lab, which is the Precision Health and Integrated Diagnostics lab; and then also a non-profit called AI4ALL, and really their focus is to get the next generation excited about AI and really help underrepresented groups be exposed to the field. >> So with AI4ALL, so underrepresented groups as in the diversity culture-- >> Females, underrepresented groups that might not actually get the exposure to this type of math and science in schools, and so they do summer camps, and we are helping to send students there next summer. >> How do you decide, what are the criteria around which you decide who gets the grants? Take us through that process. >> Today, because we are all about goodness and trying to enhance and improve our communities, this was all around how AI can do some good. So, we are taking a look at what problems can be solved utilizing AI. The second thing we're taking a look at is the type of technology. We want students and our researchers to take a good look at how the technology can work. Then also, what groups are being represented. We want a very diverse group that brings different perspectives, and we really think that's our true ability to innovate. >> Well, there's some real research that suggests a more diverse organization solves problems differently, gets to more creativity, and actually has better business outcomes. That may not be the objective here, but certainly it's a message for organizations worldwide. >> We certainly think so. The more people that are involved in a conversation, we think the richer the ideas that come out of it.
One more thing that we are taking a look at in this grant is we'd like the recipients to think about the data collection, the privacy issues, the ethical issues that go along with collecting such massive amounts of data. So that's also something that we want people to consider when they're applying. >> One of the challenges in any ethical framework is that for the individuals that get to write the ethical framework or test the ethical framework, the ethics always work for them. One of the big issues that you just raised is that there is research that shows that if you take a certain class of people and make them responsible for training the AI system, their biases will absolutely dominate the AI system. So these issues of diversity are really important, not just in terms of how it works for them, but from the very starting point of what should go into the definition of the problem, the approach and solution, how you train it. Are you going the full scope, or are you looking at just segments of that problem? >> We'll take a look; we hope to solve the problems eventually, but right now, just to start with, it's the first announcement of the fund. It's a million dollars, like we mentioned. The first three recipients were announced today. As other recipients come along, we're really excited to see what comes out of that, because maybe there will be some very unique approaches to solving problems utilizing AI. >> What other areas might you look at? How do you determine, curiosity and AI, how'd you come up with that, and how do you determine the topics in the areas that you go after? >> The Micron Foundation's mission is to enhance our lives through our people and our philanthropy, and we focus on STEM and also basic human needs. So, when Micron is engaged in large business endeavors like today, talking about AI, it was the perfect opportunity for us to bring our goodness and focus on AI and the problems that can be solved utilizing it. >> Pretty good day today, I thought. >> Oh, yeah. >> I have to say, I've followed Micron for a while, and you guys can get pretty down and dirty on the technical side, but it was an up-level conversation today. The last speaker in particular really made us think a little bit, talking about are we going to get people to refer-- >> Max Tegmark, right? >> Was that Max Tegmark? Yeah. >> I think that's the name. I didn't catch his name, I popped in late. But he was talking about artificial general intelligence >> I know. >> Reaching, I guess, a singularity, and then, what struck me is he had a panel of AI researchers, all male, by the way, I think >> Yes. >> I noticed that. >> Yes, we did too. >> The last one, which was Elon Musk, who of course we all know, thinks that there's going to be artificial general intelligence or super intelligence, and he asked every single panel member, will we achieve that, and they all unanimously said yes. So, either they're all dead wrong or the world is going to be a scary place in 20, 30, 50 years. >> Right, right. What are your thoughts on that? >> Well, it was certainly thought-provoking to think about all the good things that AI can do, but also maybe the other side, and I'm actually glad that we concluded with that, because that is an element of our fund. We want the people that apply to it, or that we'll work with, to think about those other sides.
If these certain problems are solved, is there a downside as well? So that is definitely where we want that diverse thinking to come in, so we can approach the problems in a good way that helps us all. >> Limited time left, let's talk a little bit about women in tech. In California, Jerry Brown just signed a law into effect that, I believe it's any public company, has to have a woman >> On the Board? >> on the Board. What do you think about that? >> Well, personally, I think that's fantastic. >> Well, you're biased. (laughs) >> I might be a little biased. I guess it's a little unfortunate we now have to have laws for this because maybe there's not enough, I'm not exactly sure, but I think it's a step in the right direction. That really aligns well with what we try to do, bring diversity into the workplace, diversity into the conversation, so I think it's a good step in the right direction. >> You know, let's face it, this industry had a lack, really, of women leaders. We lost Meg Whitman in a huge Fortune 50 company, in terms of a woman leader, replaced by Antonio Neri, great guy, know him well, but that was one, if you're counting, one down. Ginni Rometty, obviously, huge presence in the industry. I want to ask you, what do you think about, I don't want to use the word quotas, I hate to use it, but if you don't have quotas, what's the answer? >> I don't know about quotas either. We do know that we help, our Foundation grants span the pipeline from young students all the way up through college and we see this pipeline. It starts leaking along the way. Fifth grade, we start seeing girls fall out. Eighth grade is another big-- >> In the U.S. >> In the U.S. >> Not so much in other countries, which is pretty fascinating. >> We are a global foundation and when we talk with our other partners, they're also interested in having STEM outreach into their schools because they want to bring in the critical thinking and problem-solving skills, so, I used to think it was just a U.S. problem, but now being exposed to other cultures and countries, definitely they have a different approach, but I think it's a problem that we all strive to overcome. >> Well, there's pretty good research that shows that governance that includes women is generally more successful. It reaches better decisions, it reaches decisions that lead to, in the case of Boards, greater profitability, more success, so if you can't convince people with data, you have to convince them with law. At the end of the day, it would be nice if people recognized that a diverse approach to governance usually ends up with a better result but if you can't, you got to hit 'em over the head. >> I guess so, I guess so. >> Well, obviously, with the Kavanaugh confirmation, there's been a lot of talk about this lately. There's been some pretty interesting stuff. I've got two daughters, you have a daughter. Some pretty interesting stuff in our family chat that's been floating around. I saw, I think it was yesterday, my wife sent me a little ditty by a young woman who was singing a song about how tough it is for men, sort of tongue-in-cheek and singing things like, I can't open the door in my pajamas, I can't walk down the street on my phone at night, I can't leave my drink unattended, so tough for men, so tough for men, so on the one hand, you have the Me Too movement, you have a lot more, since Satya Nadella put his foot in his mouth at the Grace Hopper event, I don't know if you saw that, he said-- >> I didn't.
>> He said, a couple years ago >> He's the CEO of Microsoft. >> Said a couple years ago, a woman in the audience at Grace Hopper, the big conference for women, asked, "If we're underpaid, should we say anything?" and he said, no, that's bad karma, you should wait and be patient, and then of course, he got a lot of you know what for that. >> That probably didn't work for them. And then, he apologized for it, he did the right thing. He said, you know what, I'm way off base, and then he took proactive action. But, since then, you feel like there's been certainly much more attention paid to it, the Google debacle of last summer with the employee that wrote that Jerry Maguire tome. >> Right, right. >> Now the Me Too movement, then you see the reaction of women from the Kavanaugh appointment. Do you feel like we've made a lot of progress, but then you go, well, hmm, maybe we haven't. >> It sometimes feels like that. It sometimes feels like that. In my career, over 20 years, I have had a very positive experience working with men and women alike and have been very supported, and I hope that we can continue to have the conversations and raise awareness, so that everyone can feel good in their workplace, walking down the street and, like you mentioned, I think that it's very important that we all have a voice and all of us bring a different, unique perspective to the table. >> So do you feel that it's two steps forward, Dee, and maybe one step back every now and then, or are we making constant progress? >> It kind of feels like that right now. I'm not sure exactly why, but it seems like we're talking about it a lot more now and maybe just with a lot more attention on it, that's why it's seeming like we're taking a step back, but I think progress has been made and we have to continue to improve on that. >> Yeah, I think if you strip out the politics of the Kavanaugh situation and then focus on the impact on women, I think you take a different perspective. I think that's a discussion that's worth having. On theCUBE last week, I interviewed somebody, she called herself, "I'm a Fixer," and I said, "You know, here's some adjectives I think of for a fixer: a good listener, somebody who's a leader, somebody who's assertive, somebody who takes action quickly." Were those the adjectives used to describe you throughout your career? And the answer was, not always. Sometimes it was aggressive, right? >> True, true. >> That whole thing, when a woman takes swift action and is a leader, sometimes she's called derogatory names. When a man does it, he's seen as a great leader. So there's still that bias that you see out there, so two steps forward, one step back maybe. Well Dee, last thoughts on today and your mission. >> Well, we really hope to encourage the next generation to pursue math and science degrees, whether they are female or male or however they identify, and we want them to do great and hopefully have a great career in technology. >> I'm glad you mentioned that, 'cause it's not just about women, it's about people of color and however you identify. So, thanks very much for coming on theCUBE. We really appreciate it. >> You bet, thank you. >> Alright, keep it right there everybody. Back with our next guest right after this short break. We're live from Micron Insight 2018 from San Francisco. You're watching theCUBE. (techno music)
Sanjay Mehrotra, President & CEO, Micron | Micron Insight'18
(lively music) >> Live from San Francisco, it's theCUBE covering Micron Insight 2018. Brought to you by Micron. >> Welcome back to San Francisco Bay everybody, we're here covering Micron Insight 2018. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, I'm here with my cohost David Floyer. Sanjay Mehrotra is here, he's the president and CEO of Micron. Sanjay, thanks very much for coming on theCUBE. >> Great to be on the show. >> So quite an event here! First of all beautiful venue. >> Lovely venue. >> Got the Golden Gate that way, we got Nob Hill over there. So tell us about this event. It's not just about hardcore tech and memory. You guys are talking about AI for good, healthcare, changing the world. What's behind that? >> Yeah, our focus is on AI technologies and how AI is really changing the world. In terms of life, in terms of business, in terms of health. This is a showcase of how these technologies are in very very early innings, they've just barely begun. And what's happened is that AI algorithms have been around for a long time, but now the compute capability and the memory and storage capability have advanced to the levels that you can really mine through a lot of data real-time, derive a lot of insights and translate those insights into intelligence. And Micron plays a pivotal role here because our memory, our storage is where all this data resides, where all this data is processed. So we are very excited to bring together many industry figures, industry luminaries, thought leaders, researchers, engineers all here today to engage in a dialogue on where technology is going, where AI is going, how it's shaping the world. And there's the realization that hardware is absolutely central to this trend. And memory and storage is key. And we are very excited about what it means for the future. >> So a lot of thought leaders here today. Well first of all you guys have some hard news, which is relevant to what we're talking about. Talk about the hundred million dollar fund and how you've deployed it; even just today you've made some sub-announcements. >> So, one of the things we announced today is we are launching a hundred million dollar fund to support, to fund start-ups in AI. Because we really think AI is going to transform the world. We want to be in the front row. With not only the large existing players that are driving this change but also the start-ups that will drive innovation. Having the front row seat with those start-ups, through our investment fund, will really help us accelerate intelligence, accelerate time to market of various AI applications. So a hundred million dollar fund is targeted toward supporting start-ups that are developing AI technologies. And what I'm really excited to talk about here is that 20% of that fund will go to start-ups that have leadership that is represented by women or under-represented groups. Under-represented--those groups that are under-represented in tech today. This demonstrates Micron's commitment to diversity and inclusion in the technology space. >> Well that's, well first of all congratulations on that, we're big supporters >> Absolutely >> Of women in tech and diversity, it's something that we cover on theCUBE extensively. And now you've announced two grants just today, a half a million dollars each. One with Stanford, one with Berkeley that we heard. We heard Amazon up on stage talking about Alexa AI, Microsoft was onstage, we had NVIDIA on theCUBE earlier.
So bringing together an ecosystem that involves academia, your partners, your customers, talk about that a little bit. >> So the two grants that you talked about, those are from the Micron Foundation, which is again supporting advancement of AI and AI research as well as teaching of AI to kids so that we can build the pipeline of strong engineers and technologists of the future. So the two grants that we have announced today are one to the Stanford Precision Health and Integrated Diagnostics Center, a $200,000 grant to Stanford, pioneers in AI applications for precision management of your health. Very exciting field that will really truly enrich life and prolong life in the future as well as advance the detection of diseases. The second $200,000 grant that we are giving is to Berkeley's Artificial Intelligence Research Center, absolutely cutting-edge work that will be applicable to many industries and many walks of life. These are intended to support advancement of AI research. In addition to these Advancing Curiosity grants to these two institutions, later today you'll hear we will be announcing a $100,000 grant to AI4ALL. And this is an institution that is encouraging women and under-represented minorities at the high school level, 9th grade to 11th grade, to pursue STEM careers. So Micron is really promoting advanced research and supporting the pipeline. In addition to this, of course, our focus today is on bringing together industry luminaries just like you mentioned, NVIDIA, Qualcomm, autonomous driving of the future, automotive partners, BMW, Visteon, really to engage in a dialogue of how AI is advancing in these various applications. We just heard a great talk from a vice president at Amazon on Alexa devices, really really exciting how those devices are truly making your life so easy and so intelligent. We heard from Microsoft's Corporate Vice President of AI research. So you see, we really are, as leaders in our industry, bringing together industry experts to engage in a thought-provoking and inspiring dialogue on AI, so that when we leave here today we leave with insights into what is coming next but even more importantly what do we all need to do to get there faster, and this is all about technology. >> So Sanjay and David too, Micron is one of the few companies that was here when I started in the business and is still around. At the time you were just a component manufacturer doin' memories and wow, to watch the diversification of Micron over the years but also recently, I mean it's an incredibly well-run company so congratulations on the recent success. At the analyst event in New York City this year, you talked about not only that diversification in your investments and innovation but you talked about the cyclicality of this business, the historical cyclicality of this business; you've dampened that down a little bit, for a variety of reasons. The capital requirements in this business are enormous, there's been consolidation. So how is that going, talk about sort of the trends in your business both in terms of diversification and your ability to make this business more predictable. >> So Dave you are very right to note that Micron is a 40 year old company, we actually just turned 40, very proud of it. Really a company founded on the principles of innovation and tenacity. In fact the company has contributed to the industry and to the world over the course of 40 years 40,000 patents, just imagine, that's a thousand patents a year, three patents a day over the course of 40 years.
We are really a prolific inventor and we absolutely, through our innovations in memory and storage, have shaped the world here. As technology advances it really unleashes more applications and this is what has brought about the change in our industry. Today memory is not just in your PC. Of course it is in this PC, but it is also in your data center. It is going to be in the autonomous cars of the future; you're going to have as much memory as what you had in a server just a few years ago. It's inside your mobile phone: artificial intelligence, facial recognition is only possible because of the data and memory that you have in there. You have NAND Flash that is in these devices, and with technology advancing, that's bringing down the price points of NAND Flash, really bringing more SSDs into these notebook computers, making these notebook computers lighter, longer battery life, more powerful. And of course Flash drives are also replacing hard disk drives in data centers and cloud computing. So many applications, these diverse applications, really have brought greater stability to our industry. And of course technology complexity has over time moderated the supply growth. And that's what we mean about the cyclicality of our industry: yes, one or two quarters here or there you can have demand and supply mismatches, but overall when you look at the demand trends and combine them with the moderating supply trends, the long-term trajectory for our industry is very healthy. In fact we just completed a record year. >> Our fiscal year '18 was a record 30 billion dollar year for us, with profitability that puts us at the very top of most companies, with 50% operating margin, and with 30 billion in revenue we are actually the number two largest semiconductor company in the U.S. And a lot of opportunity ahead given the demand drivers in the industry. >> Massive free cash flow, you've said publicly the stock is undervalued which is, ya know, I don't know any CEO that says it's overvalued, but nonetheless the performance that you've had suggests that you very well might be right. Go ahead David please. >> Yeah I just wanted to ask your opinion on, you are leading in this area now, very very clearly you're growing faster than the industry, you've had a magnificent year and the whole area has grown, both the NAND and the DRAM. How are you judging how much to invest in this for the future? What's the balance between giving money back to the stockholders by buying stock back, versus investing in what seems to me a very very exciting area. >> Do you have an AI algorithm for that? (laughing) >> We are in a great position where we are extremely disciplined about investing in CapEx to reduce cost of production and to deploy new technologies into production. We are very ROI focused in terms of any CapEx investments we make. We of course invest in R and D. I mentioned earlier 40,000 patents over the course of 40 years; that only comes with investment in R and D. Investments in R and D are essential because we are today the most comprehensive technology solutions provider in memory and storage in the world. >> Yeah. >> In the world. Our DRAM, our Flash, our 3D XPoint technologies, as well as future emerging technologies, really position us as the only company in the world that has all of these memory and storage technologies under one company roof. So we do invest very thoughtfully and we manage our expenses very carefully, but we do invest in R and D, and of course we are committed to driving shareholder value as well.
And we had announced earlier in the year a ten billion dollar share buyback program, with at least 50% of free cash flow, actually at least 50% of our free cash flow on an annual basis, going toward share buybacks. So we are managing the business, all aspects of it, excitedly looking forward to the opportunities, at the same time prudently, in an ROI-driven fashion, building shareholder value through investments in R and D and manufacturing. >> Well of course the great Warren Buffett, David, when asked if stock buybacks are a good investment, says if your stock's undervalued it's a good investment, so. Obviously you believe that Sanjay, so. >> Absolutely! >> So thanks, thanks very much for coming on theCUBE, it was great to have you. >> Thank you. >> I hope we can have you back again. >> Thank you. >> We could talk to you for a long long time. >> Thank you very much. >> Alright, keep it right there buddy, >> Thank you. >> We'll be back with our next guest. We're live from San Francisco Bay, Micron Insight 2018. You're watching theCUBE. (upbeat music)
Dr Prakriteswar Santikary, ERT | MIT CDOIQ 2018
>> Live from the MIT campus in Cambridge, Massachusetts, it's the Cube, covering the 12th Annual MIT Chief Data Officer and Information Quality Symposium. Brought to you by SiliconANGLE Media. >> Welcome back to the Cube's coverage of MITCDOIQ here in Cambridge, Massachusetts. I'm your host, Rebecca Knight, along with my co-host, Peter Burris. We're joined by Dr. Santikary, he is the vice-president and chief data officer at ERT. Thanks so much for coming on the show. >> Thanks for inviting me. >> We're going to call you Santi, that's what you go by. So, start by telling our viewers a little bit about ERT. What you do, and what kind of products you deliver to clients. >> I'll be happy to do that. ERT is a clinical trials company, and we are a global data and technology company that minimizes risks and uncertainties within clinical trials for our customers. Our customers are top pharma companies, biotechnology companies, medical device companies, and they trust us to run their clinical trials so that they can bring their life-saving drugs to the market on time and every time. So we have a huge responsibility in that regard, because they put their trust in us, so we serve as custodians of the data and the processes, with the therapeutic expertise that we bring to the table as well as the compliance-related expertise that we have. So not only do we provide data and technology expertise, we also provide science expertise, regulatory expertise, so that's one of the reasons they trust us. And we also have been around since 1977, over 40 years, so we have this collective wisdom that we have gathered over the years. And we have really earned trust in the past, because we deal with the safety and efficacy of drugs, and these are the two big components that help the FDA, or any regulatory authority for that matter, to approve drugs. So we have a huge responsibility in this regard, as well. In terms of products, as I said, we are on the safety and efficacy side of the clinical trial process, and as part of that, we have multiple product lines. We have respiratory product lines, we have cardiac safety product lines, we have imaging. As you know, imaging is becoming more and more important for every clinical trial, particularly in the oncology space for sure, to measure the growth of the tumor and that kind of thing. So we have a business that focuses exclusively on the imaging side. And then we have the data and analytics side of the house, because we provide real-time information about the trial itself, so that our customers can really measure risks and uncertainties before they become a problem. >> At this symposium, you're going to be giving a talk about clinical trials and the problems, the missteps, that can happen when the data is not accurate. Lay out the problem for our viewers, and then we're going to talk about the best practices that have emerged.
Regulatory guidelines is another big issue because not every regulated authority follows the same sets of rules and regulations. And cost. Cost is a big imperative to the whole thing, because the development life-cycle of a drug is so lengthy. And as I said, it takes about $3 billion to commercialize a drug and that cost comes down to the consumers. That means patients. So the cost of the health care is growing, is sky-rocketing. And in terms of data collection, there are lots of devices in the field, as you know. Wearables, mobile helds, so the data volume is a tremendous problem. And the vendors. Each pharmaceutical companies use so many vendors to run their trials. CRO's. The Clinical Research Organizations. They have EDC systems, they can have labs. You name it. So they outsource all these to different vendors. Now, how do you coordinate and how do you make them to collaborate? And that's where the data plays a big role because now the data is everywhere across different systems, and those systems don't talk to each other. So how do you really make real-time decisioning when you don't know where your data is? And data is the primary ingredient that you use to make decisions? So that's where data and analytics, and bringing that data in real-time, is a very, very critical service that we provide to our customers. >> When you look at medicine, obviously, the whole notion of evidence-based medicine has been around for 15 years now, and it's becoming a seminal feature of how we think about the process of delivering medical services and ultimately paying it forward to everything else, and partly that's because doctors are scientists and they have an affinity for data. But if we think about going forward, it seems to me as though learning more about the genome and genomics is catalyzing additional need and additional understanding of the role that drugs play in the human body and it almost becomes an information problem, where the drug, I don't want to say that a drug is software, but a drug is delivering something that, ultimately, is going to get known at a genomic level. So does that catalyze additional need for data? is that changing the way we think about clinical trials? Especially when we think about, as you said, it's getting more complex because we have to make sure that a drug has the desired effect with men and women, with people from here, people from there. Are we going to push the data envelope even harder over the next few years? >> Oh, you bet. And that's where the real world evidence is playing a big role. So, instead of patients coming to the clinical trials, clinical trial is going to the patient. It is becoming more and more patient-centric. >> Interesting. >> And the early part of protocol design, for example, the study design, that is step one. So more and more the real world evidence data is being used to design the protocol. The very first stage of the clinical trial. Another thing that is pushing the envelope is artificial intelligence and other data mining techniques and now people can be used to really mine that data, the MAR data, prescription data, claims data. Those are real evidence data coming from the real patients. So now you can use these artificial intelligence and mission learning techniques to mine that data then to really design the protocol and the study design instead of flipping through the year MAR data manually. So patient collection, for example, is no patients, no trials, right? 
So gathering patients, and the right set of patients, is one of the big problems. It takes a lot of time to bring in those patients, and even more troublesome is to retain those patients over time. These, too, are big, big things that take a long time, and site selection, as well. Which site is going to really be able to bring the right patients for the right trials? >> So, two quick comments on that. One of the things, when you say the patients, when someone has a chronic problem, a chronic disease, when they start to feel better as a consequence of taking the drug, they tend to not take the drug anymore. And that creates this ongoing cycle. But going back to what you're saying, does it also mean that clinical trial processes, because we can gather data more successfully over time, it used to be really segmented. We did the clinical trial and it stopped. Then the drug went into production and maybe we caught some data. But now because we can do a better job with data, the clinical trial concept can be sustained a little bit more. That data becomes even more valuable over time and we can add additional volumes of data back in, to improve the process. >> Is that shortening clinical trials? Tell us a little bit about that. >> Yes, as I said, it takes 10 to 15 years if we follow the current process, like Phase One, Phase Two, Phase Three. And then post-marketing, that is Phase Four. I'm not taking the pre-clinical side of these trials into the picture. That's about 10 to 15 years, about $3 billion kind of thing. So when you use these kinds of AI techniques and the real world evidence data and all this, the projection is that it will reduce the cycle by 60 to 70%. >> Wow. >> The whole study, beginning to end time. >> So from 15 down to four or five? >> Exactly. So think about, there are two advantages. One is obviously, you are creating efficiency within the system, and this drug industry and drug discovery industry is ripe for disruption. Because it has been using that same process over and over for a long time. It's like, it is working, so why fix it? But unfortunately, it's not working. Because the health care cost has sky-rocketed. So these inefficiencies are going to get solved when we employ real world evidence in the mix. Real-time decisioning. Risk analysis before they become risks. Instead of spending one year to recruit patients, you use AI techniques to get to the right patients in minutes, so think about the efficiency again. And also, the home monitoring, or mHealth type of program, where the patients don't need to come to the sites, the clinical sites, for check-ups anymore. You can wear wearables that are FDA regulated and approved, and then they're going to do all the work from within the comfort of their home. So think about that. And the other thing is, very, terminally sick patients, for example. They don't have time, nor do they have the energy, to come to the clinical site for a check-up. Because every day is important to them. So, this is the paradigm shift that is going on. Instead of patients coming to the clinical trials, clinical trials are coming to the patients. And that shift, that's a paradigm shift, and that is happening because of these AI techniques. Blockchain. Precision Medicine is another one. You don't run a big clinical trial anymore. You just go micro-trial, you just group a small number of patients. You don't run a trial on breast cancer anymore, you just say, breast cancer for these patients, so it's micro-trials.
And that needs-- >> Well that can still be aggregated. >> Exactly. It still needs to be aggregated, but you can get those real-time decisions quickly, so that you can decide whether you need to keep investing in that trial, or not. Instead of waiting 10 years, only to find out that your trial is going to fail. So you are wasting not only your time, but also preventing patients from getting the right medicine on time. So you have that responsibility as a pharmaceutical company, as well. So yes, it is a paradigm shift, and this whole industry is ripe for disruption, and ERT is right at the center. We have not only data and technology experience, but as I said, we have deep domain experience within the clinical domain as well as regulatory and compliance experience. You need all these to navigate through these turbulent waters of clinical research. >> Revolutionary changes taking place. >> It is, and the satisfaction is, you are really helping the patients. You know? >> And helping the doctor. >> Helping the doctors. >> At the end of the day, the drug company does not supply the drug. >> Exactly. >> The doctor is prescribing, based on knowledge that she has about that patient and that drug and how they're going to work together. >> And one of the good statistics: in 2017, just last year, 60% of the FDA approved drugs were supported through our platform. 60 percent. So there were, I think, 60 drugs that got approved? I think 30 or 35 of them used our platform to run their clinical trials, so think about the satisfaction that we have. >> A job well done. >> Exactly. >> Well, thank you for coming on the show Santi, it's been really great having you on. >> Thank you very much. >> Yes. >> Thank you. >> I'm Rebecca Knight. For Peter Burris, we will have more from MITCDOIQ, and the Cube's coverage of it, just after this. (techno music)
Jeff Clarke, Dell Technologies | Dell Technologies World 2018
>> Announcer: Live from Las Vegas, it's theCUBE. Covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners. >> Welcome back, it's a beautiful day here in Las Vegas and this is theCUBE's live coverage of Dell Technologies World 2018. I'm Stu Miniman and happy to welcome, fresh off the keynote stage and for the first time on our program, Jeff Clarke, who is the Vice-Chairman of Products and Operations at Dell Technologies. Jeff, great to see you. Thanks for joining us. >> Thanks, Stu. Thanks for having me. >> All right, so first of all Jeff, you know, you'll be a CUBE alum when we finish this, so for our audience that's not familiar-- >> Jeff: Do I get a badge? >> I've got a sticker for you actually. >> A sticker will work. >> Absolutely. Tell us a little bit about your background, you've been at Dell for a number of years. You now own really kind of the client and ISG businesses. >> Jeff: Sure. >> Which is a huge chunk of Michael's business. Give us your background. >> I'm an electrical engineer, by training. I went to the University of Texas at San Antonio. Got my double E degree. Out of school went to work for Motorola. And I joined what was PC's Limited, which was the original name of Dell, in 1987. I've been here for 31 years. And I've done a variety of things, all on the engineering and product side. I've had the fortunate opportunity, I started in the factory as a process/test/quality/reliability engineer, we were jacks of many trades at that time. Went to product development in 1989 and have been on that side ever since. I've worked on every kind of product that we had in core design roles. I got to start a business, one of the funnest things I've ever done. I started the Precision business in 1997 from ground zero, me and a few of our top engineers, building that into the business that it is today. Expanded responsibilities, had a stint of running our enterprise business back in 2002 through 2005. Actually got to work with EMC back then. Dave Donatelli and many others back in the day. And now I lead a combined products and operations organization that has our CSG PC peripheral portfolio and ISG portfolio, our infrastructure products, as well as the fundamental supply chain that runs the company. >> Yeah, so Jeff, you've done it all and you've seen Michael through, well, an amazing journey. >> We've worked together for a long time and it's been a heck of a ride. And to be honest, I think the ride's not over and the ride in front of us I think is more exciting than the past 30 years. >> Yeah, as we always say, it's a good thing, nothing's changing. There's nothing new to get those that love technology excited about, right? >> If there's any constant in our industry, and certainly in our company, it is change. And thinking about what's unfolded in my three plus decades at this, it is amazing where we are today. But again, the future, as Bob DeCrescenzo said today, wicked cool. >> Wicked cool, absolutely. When you get up to Boston a little bit more, you can get a Boston accent. Yeah, exactly. Jeff, if we look at the Dell Technologies family, the client side of the business is about half, the ISG is another 37%, so you know, you own a major, major chunk of what's going on inside. Maybe give us a little bit of how you look at this portfolio. Are there interactions between the client side and the enterprise side?
You know, we've seen most of the other big tech players that had both either shed or split them; the HPs and the IBMs of the world no longer have both of those together. >> Yeah, those are interesting thoughts. You know, for us, our customers are asking us to provide a more comprehensive set of solutions. They want more end to end. And I don't see how you provide an end to end solution if you don't have one of the ends. And as trite as that may sound, I think it's the core fabric of what we're doing, and certainly the role I have now, leading this organization, is being able to cultivate and build, I think, the world's leading and most innovative PC products and peripherals around them. Same thing on the infrastructure side, where we have the privilege of being a leader in a number of categories. And then beginning to bring them together in new and unique ways. I referenced in my keynote this morning how new entrants to the workforce are pressuring conventional definitions of how we do work and how we deploy technology. So we have leadership products, and now you're able to tie that together with VMware Workspace ONE or an AirWatch or RSA class of products, and you begin to modernize the experience. Or a VDI experience where you take a thin client, or VxRail infrastructure, and some VMware Horizon software and build out a solution set. That's what our customers are asking us to do. And I think we're in a very unique position. In fact, I know we are, 'cause no one else has all of what I just described. >> Jeff, there was a main theme you talked about in your keynote, that IT can drive and change business, and it resonated with what I'm hearing from customers. But if you dial back a few years ago, it was IT wasn't getting it done, IT wasn't listening to the business, we had stealth IT. Why are things different now? What's the role of IT going forward? And how does Dell fit into that big picture? >> You know, Michael touched on it in his opening yesterday, about how IT and business have become much much more closely integrated to compete in this modern world. And I suspect some of this goes back to we've always thought of IT as a cost center, OPEX. Yet, over the past decade, we've seen some fundamental disruption of business that has been fundamentally IT-led. New technology-led. New business models that have been fueled by new technology. I think that modernization, whether it's modernization of applications, taking advantage of information at your disposal and turning that into useful insights to make better business decisions, is a catalyst for a reframing, if you will, of what IT does. And the role of IT in a business, and a role where IT can help companies be more competitive, or at a minimum, help them not get disrupted by someone who's doing it, as well. So I think that's what's changing and I think you're seeing companies embrace that. And as soon as you do, you begin to, I think, challenge what have you invested in, where are you going, how am I taking advantage of some of the new trends that I outlined maybe this morning. And it gets to be, I think, a pretty interesting time in front of us. >> Yeah, you know, you actually went through immersive and collaborative computing, IoT, multi-cloud adoption, software-defined anything, and AI and ML. So a lot of new things. One area I'd like to touch on, we heard some great insight from Allison Dew earlier this week.
It's great when we have the new tools and the new technology, but sometimes we wonder how does adoption go and how does that impact productivity and people's engagement? And I'm curious how we help the enterprise and help the client side, not just do something new but be more productive and move their business forward. >> Look, if you start with the client side, I think it's pretty easy to think about productivity. Particularly if you believe this boundary between work and the workplace has fundamentally changed, and think about where people do work. You're actually getting a much more productive workforce by allowing people to work when they want to work, where they want to work. And that traditional boundary of eight to five, whatever it might be, physically in the office. You now have access to all 168 hours in a week and people want to work when they want to work. And we find that they work more, particularly if you put technology in their hands that makes them more productive and they have access to what they need to do their job. You cast that forward into the enterprise and I think, look, at some level IT is hard and we have a huge role in making it much easier. How to simplify. How to make it more automated, so IT practitioners can actually migrate from how do I configure this LAN, how do I set up this server, interesting things and still important things, to how do I take this data and turn it into information that helps my business unit, my company win. That's where I think this migrates to, and we play a huge role in helping that. >> Yeah, there's a theme that, another thing came up in the keynote, data really at the center of everything and not just talking about storage, but you had McLaren up on stage talking about that. How do you see the role of data changing? How do we capture for companies how valued data is? >> A tie back to Michael's opening, he talked about data being, if you will, the rocket fuel for this rapid change and digitization of our world, the digital transformation that's underway. And between Michael, Pat, and myself, we all talked about that happening at the edge in a decentralized manner. I tried to build upon that and say you ain't seen nothing yet, there's a whole lot more coming. Well, if you believe that, you have to start preparing today and anticipating that. And again, I think we play a role in helping companies do that. I think it requires a modern approach. It requires an approach to understand how that information is coming in to be able to do something with it. That's where we're focusing, as I mentioned. In fact, I think I specifically said it's sort of the heart of our vision for IT transformation. The data's the gold. In fact, Pat may have said that yesterday. Now, the challenge will be how do you take all of that data, sort through it, figure out which pieces are most valuable and then get them to where they're supposed to go to make decisions. That's yet to be seen, how we do that, but I'm encouraged, given our track record in this industry. We'll find ways to do that. Capabilities like artificial intelligence and machine learning are certainly a vast step forward in making sense of all that stuff. >> Yeah. Jeff, I wonder if you could bring us inside some of your customers. You know, where do you find some of the strategic discussions happening?
I think back to early PC or server days, you know, who bought boxes, versus now it seems like more of a C-level discussion for some of these large trends that you're seeing. What are some of the big changes that you're seeing in the customers and what are some of the biggest challenges that they're having today? >> I think you mentioned it. One of the things that I've seen in the customer interactions I've had in this new role, and I'm getting to see more and more each and every month: the conversations I have, or participate in, are seldom, if ever, about the speeds and feeds of this, the performance of that. It's about here's my business problem, how do you help me? How do you help me get this done? How do you provide me a set of solutions to get to where I want to go? By the way, if you have advice, a recommendation to help us, they want to hear that. So they want to access our technical knowledge base across our organization. But again, I think this theme that I tried to state a couple of times this morning is around outcomes, so it's an outcome-driven discussion. It's solutions. It's end to end. And how can you help me? Probably, I guess, I could generalize them to fit those four attributes. >> Great. Last thing, you talked about the modern data center. What's that mean for your customers? >> To me, it's all about putting at the disposal of our customers a set of technologies and infrastructure solutions and services that allows 'em to take advantage of that data. Allow them to have the data services they need and the underlying horsepower to do it in a fairly intelligent way. Hopefully automating a few of those tasks and giving them the agility and flexibility they need. >> Yeah. Jeff, wonder if you could speak to really, the engineering culture inside of Dell. Think back to before Dell made a lot of acquisitions, it's like, oh well Dell was a supply chain company, people would say. And then a number of acquisitions came through, you know, you lived with a lot of the engineers, you've got more engineers through the EMC merger. Sometimes people that don't understand, they're like oh, it's just all going to commodity stuff, software defined anything means that infrastructure doesn't matter. You know, where does the Dell engineering culture differentiate and position you in the market? >> You know, it might not surprise you, given my background, that while certainly we are a supply chain company, we were doing hardcore engineering for a long time. I look at some of the advancements we made back in the day in leading the industry. I think we have a long distinguished track record of doing that. And now with the combination of the two companies, I look at this organization and the engineering capability we have, I like my hand, we like our hand. The trick is getting our teams to innovate where we can differentiate, where we can help customers solve problems. And part of what I've been doing across this community of engineers is doing that. Pivoting resources to the most important things. Pivoting resources to where we can differentiate. Pivoting resources to where our innovation can actually distinguish, or shine against the competitive set. We've seen this in every category, PC, server, storage. And in many of these cases, we start from the privileged position of being the leader. So think about when we get everything aligned to be able to innovate and differentiate, I like my hand. >> All right. Jeff, I want to give you the final word, coming away from Dell Technologies World this year.
There are a lot of product announcements, people are going to learn a lot in the sessions, but what do you want people to come away with? Understanding the Dell portfolio and Dell as a company, as a partner? >> Well, if I could leave any parting statement, and make it very specific to the ISG portfolio, I talked about Power, our Power brand now being the brand of our future-state ISG products. Walk away with a commitment to build a Power-branded portfolio that is going to be innovative, differentiated in the marketplace, and something that helps our customers win. That's our commitment and that's what we'll deliver going forward. >> All right. Jeff Clarke, thank you for sharing with us all the information, your update. Your first time on theCUBE, but I'm sure we'll have you on many times in the future. >> My pleasure, thanks for having me. >> All right. We'll be back with lots more coverage here from Dell Technologies World 2018. I'm Stu Miniman and you're watching theCUBE.
SUMMARY :
Brought to you by Dell EMC and for the first time Thanks for having me. the client and ISG businesses. of Michael's business. and building that into the Yeah, so Jeff, you've done and the ride in front of There's nothing new to get at this is amazing to where we are today. the ISG is another 37%, so you know, and you begin to modernize the experience. What's the role of IT going forward? of some of the new trends and help the client side, You cast that forward into the enterprise in the keynote, data really and then get them to where of the strategic discussions happening? By the way, if you have advice, the modern data center. and the underlying horsepower to do it a lot of the engineers, and the engineering capability a lot in the sessions, differentiated in the marketplace, all the information, your update. I'm Stu Miniman and
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Michael | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Jeff Clark | PERSON | 0.99+ |
1987 | DATE | 0.99+ |
1997 | DATE | 0.99+ |
1989 | DATE | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Jeff Clarke | PERSON | 0.99+ |
Dave Donatelli | PERSON | 0.99+ |
2002 | DATE | 0.99+ |
2005 | DATE | 0.99+ |
Motorola | ORGANIZATION | 0.99+ |
Bob DeCrescenzo | PERSON | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
31 years | QUANTITY | 0.99+ |
two companies | QUANTITY | 0.99+ |
Precision | ORGANIZATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
Pat | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
168 hours | QUANTITY | 0.99+ |
eight | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
37% | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
IBMs | ORGANIZATION | 0.99+ |
five | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
Allison Dew | PERSON | 0.98+ |
University of Texas | ORGANIZATION | 0.98+ |
One | QUANTITY | 0.98+ |
HPs | ORGANIZATION | 0.98+ |
McLaren | PERSON | 0.97+ |
Dell Technologies World 2018 | EVENT | 0.97+ |
PC's Limited | ORGANIZATION | 0.96+ |
San Antonio | LOCATION | 0.93+ |
AirWatch | COMMERCIAL_ITEM | 0.92+ |
earlier this week | DATE | 0.92+ |
this morning | DATE | 0.92+ |
Dell Technologies World | ORGANIZATION | 0.92+ |
one | QUANTITY | 0.9+ |
few years ago | DATE | 0.9+ |
a week | QUANTITY | 0.89+ |
three plus decades | QUANTITY | 0.87+ |
ISG | ORGANIZATION | 0.87+ |
each | QUANTITY | 0.87+ |
theCUBE | ORGANIZATION | 0.83+ |
VMware Horizon | TITLE | 0.83+ |
four attributes | QUANTITY | 0.82+ |
this morning | DATE | 0.8+ |
first | QUANTITY | 0.79+ |
Laura Stevens, American Heart Association | AWS re:Invent
>> Narrator: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2017, presented by AWS, Intel, and our ecosystem of partners. >> Hey, welcome back everyone, this is theCUBE's exclusive live coverage here in Las Vegas for AWS, Amazon Web Services, re:Invent 2017. I'm John Furrier with Keith Townsend. Our next guest is Laura Stevens, data scientist at the American Heart Association, an AWS customer, welcome to theCUBE. >> Hi, it's nice to be here. >> So, the new architecture, we're seeing all this great stuff, but one of the things that they mention is data is the killer app, that's my word, Werner didn't say that, but essentially saying that. You guys are doing some good work with AWS and precision medicine, what's the story? How does this all work, what are you working with them on? >> Yeah, so the American Heart Association was founded in 1924, and it is the oldest and largest voluntary organization dedicated to curing heart disease and stroke, and I think in the past few years what the American Heart Association has realized is that the potential of technology and data can really help us create innovative ways and really launch precision medicine in a fashion that hasn't been possible before. >> What are you guys doing with AWS? What's that, what's the solution? >> Yeah, so the AHA has strategically partnered with Amazon Web Services to basically use technology as a way to power precision medicine, and so when I say precision medicine, I mean identifying individual treatments, based on one's genetics, their environmental factors, their life factors, that then results in prevention and treatment that's catered to you as an individual rather than kind of a one-size-fits-all approach that is currently happening. >> So more tailored? >> Yeah, specifically tailored to you as an individual. >> What do I do, get a genome sequence? I walk in, they throw high performance computing at it, sequence my genome, maybe edit some genes while they're at it, I mean, what's going on. There's some cutting edge conversations out there we see in some of the academic areas, CRISPR, that was me just throwing that in for fun, but data has to be there. What kind of data do you guys look at? Is it personal data, is it like how big is the data? Give us a sense of some of the data science work that you're doing? >> Yeah, so the American Heart Association has launched the Institute for Precision Cardiovascular Medicine, and as a result, with Amazon, they created the Precision Medicine Platform, which is a data marketplace that houses and provides analytic tools that enable high performance computing and data sharing for all sorts of different types of data, whether it be personal data, clinical trial data, pharmaceutical data, other data that's collected in different industries, hospital data, so a variety of data. >> So Laura, there's a lot of, I think, FUD out there around the ability to store data in a cloud, but there's also some valid concerns. A lot of individual researchers, I would imagine, don't have the skillset to properly protect data. What is the Heart Association doing with the framework to help your customers protect data? 
>> Yeah, so I guess the security of data, the security of the individual, and the privacy of the individual is at the heart of the AHA, and it's their number one concern, and making anything that they provide a number one priority, and the way that we do that in partnering with AWS is, with this cloud environment, we've been able to create, even if you have data that you'd like to use, sort of a walled garden around your data, so that it's not accessible to people who don't have access to the data, and it's also HIPAA compliant, it meets the utmost security standards of health care today. >> So I want to make sure we're clear on this, the Heart Association doesn't collect data themselves. Are you guys creating a platform for your members to leverage this technology? >> So, I would say maybe both, actually. The American Heart Association does have data that it is associated with, with its volunteers and the hospitals that it's associated with, and then on top of that, we've actually just launched My Research Legacy, which allows individuals in the community who want to share their data, whether they're healthy or sick, to share their own data and help aid in curing heart disease and stroke, and then on top of that, we are committed to strategically partnering with anybody who's involved and wants to share their data and make their data accessible. >> So I can share my data? >> Yes, you can share your data. >> Wow, so what type of tools do you guys use against that data set and what are some of the outcomes? >> Yeah, so I think the foundation is the cloud, and that's where the data is stored and housed, and then from there, we have a variety of different tools that enable researchers to kind of custom-build data sets that they want to answer the specific research questions they have, and so some of those tools range from common tools that are already in use today on your personal computer, such as Python or R with Bioconductor, and then there are more high performance computing tools, such as Hail or any kind of S3 environment, or Amazon services, and then on top of that, I think what is so awesome about the platform is that it's very dynamic, so a tool that's needed for high performance computing, or a tool that's needed even just on a smaller data set, can easily be installed and made available to researchers, so that they can use it for their research. >> So kind of data as a service. I would love to know about the community itself. How are you guys sharing the results of, kind of, oh, this process worked great for this type of analysis, amongst your members? >> Yeah, so I think that there are kind of two different targets in that sense: there are the researchers that come to the platform, and then there's actually the patient itself, and ultimately the AHA's goal is to use the data and the research for patient-centered care, so with the researchers specifically, we have a variety of tutorials available so that researchers can, one, learn how to perform high performance computing analysis, and see what other people have done. 
We have a forum where researchers can log on and, I guess, access other researchers and talk to them about different analyses, and then additionally we have My Research Legacy, which is patient-centered, so it's: this is what's been found, and this is what we can give back to you as the patient about your specific individualized treatment. >> What do you do on a daily basis? Take us through your job, are you writing code, are you slinging APIs around? What are some of the things that you're doing? >> I think I might say all of the above. I think right now my main effort is focused on, one, conducting research using the platform, so I do use the platform to answer my own research questions, and those we have presented at different conferences, for example at the American Heart Association, and we had a talk here about the Precision Medicine Platform, and then two, I'm focused on strategically making the Precision Medicine Platform better by getting more data, adding data to the platform, improving the way that data is harmonized in the platform, and improving the amount of data that we have, and the diversity, and the variety. >> Alright, we'll help you with that, so let's help you get some people recruited, so what do they got to do to volunteer, volunteer their data, because I think this is one of those things where you know people do want to help. So, how do they, how do you onboard? You use the website, is it easy, one click? Do they have to wear an iWatch, I mean, you know what I mean? >> Yeah. >> What's the deal? What do I got to do? >> So I think I would encourage researchers and scientists and anybody who is data-centric to go to precision.heart.org, and they can just sign up for an account, they can contact us through that, there's plenty of different ways to get in touch with us and plenty of ways to help. >> Precision.heart.org. >> Yup, precision.heart.org. >> Stu: Register now. >> Register now, click. >> Powered by AWS. >> Yup. >> Alright, so I gotta ask you as an AWS customer, okay, take your customer hat off, put your citizen's hat on, what does Amazon mean to you, I mean, how do you describe it to people who don't use it? >> Okay, yeah, so I think... the AHA's ultimate mission, right, is to provide individualized treatment and cures for cardiovascular disease and stroke. Amazon is a way to enable that and make that actually happen, so that we can mine extremely large data sets and identify those individualized patterns. It allows us to store data in a fashion where we can provide a marketplace where there's extremely large amounts of data, extremely diverse amounts of data, and data that can be processed effectively, so that it can be directly used for research. >> What's your favorite tool or product or service within Amazon? >> That's a good question. I think, I mean, the cloud and S3 buckets are definitely, in a sense, my favorites, because there's so much that can be stored right there, Athena I think is also pretty awesome, and then the EMR clusters with Spark. >> The list is too long. >> My jam. >> It is. (laughs) >> So, one of the interesting things that I love is a lot of my friends are in non-profits, fundraising is a big, big challenge, grants are, again, a big challenge, have you guys seen any new opportunities as a result of the research coming out of the AHA and AWS in the cloud? 
>> Yeah, so I think one of the coolest things about the AHA is that they have this Institute for Precision Cardiovascular Medicine, and the strategic partnership between the AHA and AWS, even just this year we've launched 13 new grants, where the AHA kind of backs the research, and AWS provides credits so that people can come to the cloud and use the cloud and use the tools available on a grant-funded basis. >> So tell me a little bit more about that program. Anybody specifically that you're, kind of, seeing that's used these credits from AWS to do some cool research? >> Yeah, definitely, so I think specifically we have one grantee right now that is really focused on identifying outcomes across multiple clinical trials, so currently clinical trials take 20 years, and there's a large variety of them. I don't know if any of you are familiar with the Framingham Heart Study, the Dallas Heart Study, the Jackson Heart Study, and trying to determine how those trials compare, and what outcomes and research insights we can generate across multiple data sets, is something that's been challenging, due to not being able to necessarily access that data, all of those different data sets together, and then two, trying to find ways to actually compare them, and so with the Precision Medicine Platform, we have a grantee at the University of Colorado-Denver who has been able to find those synchronicities across data sets and has actually created kind of a framework that then can be implemented in the Precision Medicine Platform. >> Well, I just registered, it really takes two seconds to register, that's cool. Thanks so much for pointing out precision.heart.org. Final question, you said EMR's your jam. (laughing) >> Why, why is it? Why do you like it so much, is it fast, is it easy to use? >> I think the speed is one of the things. When it comes to using genetic data and multiple biological levels of data, whether it be your genetics, your lifestyle, your environmental factors, there's... it just ends up being extremely large amounts of data, and to be able to implement things like serverless computing, and artificial intelligence, and machine learning on that data set is time consuming, and having the power of an EMR cluster that is scalable makes that so much faster, so that we can then answer our research questions faster and identify those insights and get them out into the world. >> Gotta love the new services they're launching, too. It just builds on top of it. Doesn't it? >> Yes. >> Yeah, soon everyone's gonna be jamming on AWS in our opinion. Thanks so much for coming on, appreciate the stories and commentary. >> Yeah. >> Precision.heart.org, you want to volunteer if you're a researcher or a user, want to share your data, they've got a lot of data science mojo going on over there, so check it out. It's theCUBE bringing a lot of data here, tons of data from the show, three days of wall to wall coverage, we'll be back with more live coverage after this short break. (upbeat music)
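To make the S3-to-EMR workflow Stevens describes a bit more concrete, here is a minimal PySpark sketch of the kind of cross-study aggregation she alludes to. It is only an illustration under assumed names: the bucket paths and column names (example-pmp-bucket, age, cardiac_event) are hypothetical placeholders, not the actual schema of the AHA Precision Medicine Platform.

```python
# Minimal PySpark sketch of cross-study aggregation on an EMR cluster.
# All S3 paths and column names are hypothetical placeholders, not the
# real schema of the AHA Precision Medicine Platform.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cross-study-outcomes").getOrCreate()

# Assume each study lands under its own S3 prefix.
studies = {
    "framingham": "s3://example-pmp-bucket/framingham/outcomes.csv",
    "dallas": "s3://example-pmp-bucket/dallas/outcomes.csv",
    "jackson": "s3://example-pmp-bucket/jackson/outcomes.csv",
}

# Read each study and tag its rows with the study name.
frames = [
    spark.read.option("header", True).csv(path).withColumn("study", F.lit(name))
    for name, path in studies.items()
]

# Union the studies into one DataFrame (Spark 3.1+ for allowMissingColumns).
combined = frames[0]
for df in frames[1:]:
    combined = combined.unionByName(df, allowMissingColumns=True)

# Example harmonized query: event rate per study and ten-year age band.
summary = (
    combined
    .withColumn("age_band", (F.col("age").cast("int") / 10).cast("int") * 10)
    .groupBy("study", "age_band")
    .agg(F.avg(F.col("cardiac_event").cast("double")).alias("event_rate"))
)
summary.show()
```

The appeal she points to is that the same script scales with the cluster: on EMR you resize the fleet rather than rewrite the code.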
SUMMARY :
John Furrier and Keith Townsend interview Laura Stevens, data scientist at the American Heart Association, at AWS re:Invent 2017. Stevens describes the AHA's Precision Medicine Platform, built with AWS: a secure, HIPAA-compliant data marketplace where researchers and patients can share clinical, genomic, and lifestyle data, with tools ranging from Python and R with Bioconductor to scalable EMR clusters running Spark. She also covers My Research Legacy, grants backed by the AHA with AWS credits, and a University of Colorado-Denver grantee harmonizing outcomes across the Framingham, Dallas, and Jackson heart studies. Volunteers and researchers can sign up at precision.heart.org.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
AWS | ORGANIZATION | 0.99+ |
Laura Stevens | PERSON | 0.99+ |
Laura | PERSON | 0.99+ |
American Heart Association | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
two seconds | QUANTITY | 0.99+ |
20 years | QUANTITY | 0.99+ |
Heart Association | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Institute for Precision Cardiovascular Medicine | ORGANIZATION | 0.99+ |
1924 | DATE | 0.99+ |
AHA | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
13 new grants | QUANTITY | 0.99+ |
precision.heart.org | OTHER | 0.99+ |
Python | TITLE | 0.99+ |
HA | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Precision.heart.org | OTHER | 0.99+ |
University of Colorado | ORGANIZATION | 0.99+ |
this year | DATE | 0.99+ |
HIPAA | TITLE | 0.98+ |
one | QUANTITY | 0.98+ |
R Bioconductor | TITLE | 0.98+ |
both | QUANTITY | 0.98+ |
Dallas | LOCATION | 0.98+ |
Intel | ORGANIZATION | 0.98+ |
iWatch | COMMERCIAL_ITEM | 0.97+ |
one click | QUANTITY | 0.97+ |
three days | QUANTITY | 0.96+ |
Werner | PERSON | 0.96+ |
tons of data | QUANTITY | 0.96+ |
s3 | TITLE | 0.92+ |
one grantee | QUANTITY | 0.92+ |
theCUBE | TITLE | 0.9+ |
two different targets | QUANTITY | 0.9+ |
My Research Legacy | TITLE | 0.9+ |
Invent 2017 | EVENT | 0.89+ |
Framingham | ORGANIZATION | 0.89+ |
Spark | TITLE | 0.85+ |
Denver | ORGANIZATION | 0.83+ |
today | DATE | 0.82+ |
Hail | TITLE | 0.82+ |
lot of data | QUANTITY | 0.79+ |
Narrator: Live from Las | TITLE | 0.79+ |
Invent | EVENT | 0.71+ |
re:Invent 2017 | EVENT | 0.71+ |
past few years | DATE | 0.7+ |
one size | QUANTITY | 0.67+ |
EMR | ORGANIZATION | 0.64+ |
Vegas | LOCATION | 0.63+ |
s3 | COMMERCIAL_ITEM | 0.56+ |
re | EVENT | 0.53+ |
Jackson | PERSON | 0.52+ |
precision | ORGANIZATION | 0.5+ |
Bill Mannel & Dr. Nicholas Nystrom | HPE Discover 2017
>> Announcer: Live, from Las Vegas, it's theCUBE, covering HPE Discover 2017. Brought to you by Hewlett Packard Enterprise. >> Hey, welcome back everyone. We are here live in Las Vegas for day two of three days of exclusive coverage from theCUBE here at HPE Discover 2017. Our next two guests are Bill Mannel, VP and General Manager of HPC and AI for HPE, Bill, great to see you, and Dr. Nick Nystrom, senior director of research at the Pittsburgh Supercomputing Center. Welcome to theCUBE, thanks for coming on, appreciate it. >> My pleasure. >> Thanks for having us. >> As we wrap up day two, first of all, before we get started, love the AI, love the high performance computing. We're seeing great applications for compute. Everyone now sees that a lot of compute actually is good. That's awesome. What is the Pittsburgh Supercomputing Center? Give a quick update and describe what that is. >> Sure. The quick update is we're operating a system called Bridges. Bridges is operating for the National Science Foundation. It democratizes HPC. It brings people who have never used high performance computing before to be able to use HPC seamlessly, almost as a cloud. It unifies HPC, big data, and artificial intelligence. >> So who are some of the users that are getting access that they didn't have before? Could you just kind of talk about some of the use cases of the organizations or people that you guys are opening this up to? >> Sure. I think one of the newest communities that's very significant is deep learning. So we have collaborations between the University of Pittsburgh life sciences and the medical center with Carnegie Mellon, the machine learning researchers. We're looking to apply AI and machine learning to problems in breast and lung cancer. >> Yeah, we're seeing the data. Talk about some of the innovations that HPE's bringing with you guys in the partnership, because we're seeing, people are seeing the results of using big data and deep learning and breakthroughs that weren't possible before. So not only do you have the democratization cool element happening, you have a tsunami of awesome open source code coming in from big places. You see Google donating a bunch of machine learning libraries. Everyone's donating code. It's like open bar and open source, as I say, and the young kids that are new are the innovators as well, so not just us systems guys, but a lot of young developers are coming in. What's the innovation? Why is this happening? What's the ah-ha moment? Is it just cloud, is it a combination of things, talk about it. >> It's a combination of all the big data coming in, and then new techniques that allow us to analyze and get value from it, from that standpoint. So in the traditional HPC world, typically we built equations which then generated data. Now we're actually kind of doing the reverse, which is we take the data and then build equations to understand the data. So it's a different paradigm. And so there's more and more energy in understanding those two different techniques of kind of getting to the same answers, but in a different way. >> So Bill, you and I talked in London last year. >> Yes. With Dr. Goh. And we talked a lot about SGI and what that acquisition meant to you guys. So I wonder if you could give us a quick update on the business? I mean it's doing very well, Meg talked about it on the conference call this last quarter. Really high point and growing. What's driving the growth, and give us an update on the business. >> Sure. 
And I think the thing that's driving the growth is all this data and the fact that customers want to get value from it. So we're seeing a lot of growth in industries like financial services, like manufacturing, where folks are moving to digitization, which means that in the past they might have done a lot of their work through experimentation. Now they're moving it to a digital format, and they're simulating everything. So that's driven a lot more HPC over time. As far as the SGI integration is concerned, we've integrated about halfway, so we're at about the halfway point. And now we've got the engineering teams together and we're driving a road map and a new set of products that are coming out. Our Gen 10-based products are on target, and they're going to be releasing here over the next few months. >> So Nick, from your standpoint, when you look at, there's been an ebb and flow in the supercomputer landscape for decades. All the way back to the 70s and the 80s. So from a customer perspective, what do you see now? Obviously China's much more prominent in the game. There's sort of an arms race, if you will, in computing power. From a customer's perspective, what are you seeing, what are you looking for in a supplier? >> Well, so I agree with you, there is this arms race for exaflops. Where we are really focused right now is enabling data-intensive applications, looking at big data as a service, HPC as a service, really making things available to users to be able to draw on the large data sets you mentioned, to be able to put the capability-class computing, which will go to exascale, together with AI, and data, and Linux under one platform, under one integrated fabric. That's what we did with HPE for Bridges. And we're looking to build on that in the future, to be able to do the exascale applications that you're referring to, but also to couple in data, and to be able to use AI with classic simulation to make those simulations better. >> So it's always good to have a true practitioner on theCUBE. But when you talk about AI and machine learning and deep learning, John and I sometimes joke, is it same wine, new bottle, or is there really some fundamental shift going on that just sort of happened to emerge in the last six to nine months? >> I think there is a fundamental shift. And the shift is due to what Bill mentioned. It's the availability of data. So we have that. We have more and more communities who are building on that. You mentioned the open source frameworks. So yes, they're building on the TensorFlows, on the Caffes, and we have people who have not been programmers. They're using these frameworks, though, and using that to drive insights from data they did not have access to. >> These are flipped upside down, I mean, this is your point, I mean, Bill pointed it out, it's like the models are upside down. This is the new world. I mean, it's crazy, I don't believe it. >> So if that's the case, and I believe it, it feels like we're entering this new wave of innovation, which for decades we talked about how we march to the cadence of Moore's Law. That's been the innovation. You think back, you know, your five megabyte disk drive, then it went to 10, then 20, 30, now it's four terabytes. Okay, wow. Compared to what we're about to see, I mean, it pales in comparison. So help us envision what the world is going to look like in 10 or 20 years. And I know it's hard to do that, but can you help us get our minds around the potential that this industry is going to tap? 
>> So I think, first of all, I think the potential of AI is very hard to predict. We see that. What we demonstrated in Pittsburgh with the victory of Libratus, the poker-playing bot, over the world's best humans, is the ability of an AI to beat humans in a situation where they have incomplete information, where you have an antagonist, an adversary who is bluffing, who is reacting to you, and who you have to deal with. And I think that's a real breakthrough. We're going to see that move into other aspects of life. It will be buried in apps. It will be transparent to a lot of us, but those sorts of AIs are going to influence a lot. That's going to take a lot of IT on the back end for the infrastructure, because these will continue to be compute-hungry. >> So I always use the example of Kasparov, and he got beaten by the machine, and then he started a competition to team up with a supercomputer and beat the machine. Yeah, humans and machines beat machines. Do you expect that's going to continue? Maybe both your opinions. I mean, we're just sort of spitballing here. But will that augmentation continue for an indefinite period of time, or are we going to see the day that it doesn't happen? >> I think over time you'll continue to see progress, and you'll continue to see more and more regular type of symmetric type workloads being done by machines, and that allows us to do the really complicated things that the human brain is able to better process than perhaps a machine brain, if you will. So I think it's exciting from the standpoint of being able to take some of those other roles and so forth, and be able to get those done in perhaps a more efficient manner than we're able to do. >> Bill, talk about, I want to get your reaction to the concept of data. As data evolves, you brought up the model, I like the way you're going with that, because things are being flipped around. In the old days, I want to monetize my data. I have data sets, people are looking at their data. I'm going to make money from my data. So people would talk about how we're monetizing the data. >> Dave: Old days, like two years ago. >> Well, and people actually try to sell and monetize their data, and this could be a use case for one piece of it. Other people are saying no, I'm going to open it up, make people own their own data, make it shareable, make it more of an enabling opportunity, or creating opportunities to monetize differently. In a different shift. That really comes down to the insights question. What trends do you guys see emerging where data is much more of a fabric, it's less of a discrete, monetizable asset, but more of an enabling asset. What's your vision on the role of data? As developers start weaving in some of these insights. You mentioned the AI, I think that's right on. What's your reaction to the role of data, the value of the data? >> Well, I think one thing that we're seeing in some of our, especially our big industrial customers, is the fact that they really want to be able to share that data together and collect it in one place, and then have that regularly updated. So if you look at a big aircraft manufacturer, for example, they actually are putting sensors all over their aircraft, and in real time, bringing data down and putting it into a place where now, as they're doing new designs, they can access that data, and use that data as a way of making design trade-offs and design decisions. 
So a lot of customers that I talk to in the industrial area are really trying to capitalize on all the data possible to allow them to bring new insights in, to predict things like future failures, to figure out how they need to maintain whatever they have in the field, and those sorts of things. So it's just kind of keeping it within the enterprise itself. I mean, that's a challenge, a really big challenge, just to get data collected in one place and be able to efficiently use it just within an enterprise. We're not even talking about sort of pan-enterprise, but just within the enterprise. That is a significant change that we're seeing. There's actually an effort to do that and to see the value in that. >> And the high performance computing really highlights some of these nuggets that are coming out. If you just throw compute at something, if you set it up and wrangle it, you're going to get these insights. I mean, new opportunities. >> Bill: Yeah, absolutely. >> What's your vision, Nick? How do you see the data, how do you talk to your peers and people who are generally curious on how to approach it? How to architect data modeling and how to think about it? >> I think one of the clearest examples on managing that sort of data comes from the life sciences. So we're working with researchers at the University of Pittsburgh Medical Center, and the Institute for Precision Medicine at Pitt Cancer Center. And there it's bringing together the large data, as Bill alluded to. But there it's very disparate data. It is genomic data. It is individual tumor data from individual patients across their lifetime. It is imaging data. It's the electronic health records. And trying to be able to do this sort of AI on that to be able to deliver true precision medicine, to be able to say that for a given tumor type, we can look into that and give you the right therapy, or even more interestingly, how can we prevent some of these issues proactively? >> Dr. Nystrom, it's expensive doing what you do. Is there a commercial opportunity at the end of the rainbow here for you, or is that taboo, I mean, is that a good thing? >> No, thank you, it's both. So as a national supercomputing center, our resources are absolutely free for open research. That's a good use of our taxpayer dollars. They've funded these, we've worked with HP, we've designed the system that's great for everybody. We also can make this available to industry at an extremely low rate because it is a federal resource. We do not make a profit on that. But looking forward, we are working with local industry to let them test things, to try out ideas, especially in AI. A lot of people want to do AI, they don't know what to do. And so we can help them. We can help them architect solutions, put things on hardware, and when they determine what works, then they can scale that up, either locally on prem, or with us. >> This is a great digital resource. You talk about federally funded. I mean, you can look at Yosemite, it's a state park, you know, Yellowstone, these are natural resources, but now when you start thinking about the goodness that's being funded. You want to talk about democratization, medicine is just the tip of the iceberg. This is an interesting model as we move forward. We see what's going on in government, and see how things are instrumented, some things not, delivery of drugs and medical care, all these things are coalescing. How do you see this digital age extending? Because if this continues, we should be doing more of these, right? >> We should be. 
We need to be. >> It makes sense. So is there, I mean I just not up to speed on what's going on with federally funded-- >> Yeah, I think one thing that Pittsburgh has done with the Bridges machine, is really try to bring in data and compute and all the different types of disciplines in there, and provide a place where a lot of people can learn, they can build applications and things like that. That's really unusual in HPC. A lot of times HPC is around big iron. People want to have the biggest iron basically on the top 500 list. This is where the focus hasn't been on that. This is where the focus has been on really creating value through the data, and getting people to utilize it, and then build more applications. >> You know, I'll make an observation. When we first started doing The Cube, we observed that, we talked about big data, and we said that the practitioners of big data, are where the guys are going to make all the money. And so far that's proven true. You look at the public big data companies, none of them are making any money. And maybe this was sort of true with ERP, but not like it is with big data. It feels like AI is going to be similar, that the consumers of AI, those people that can find insights from that data are really where the big money is going to be made here. I don't know, it just feels like-- >> You mean a long tail of value creation? >> Yeah, in other words, you used to see in the computing industry, it was Microsoft and Intel became, you know, trillion dollar value companies, and maybe there's a couple of others. But it really seems to be the folks that are absorbing those technologies, applying them, solving problems, whether it's health care, or logistics, transportation, etc., looks to where the huge economic opportunities may be. I don't know if you guys have thought about that. >> Well I think that's happened a little bit in big data. So if you look at what the financial services market has done, they've probably benefited far more than the companies that make the solutions, because now they understand what their consumers want, they can better predict their life insurance, how they should-- >> Dave: You could make that argument for Facebook, for sure. >> Absolutely, from that perspective. So I expect it to get to your point around AI as well, so the folks that really use it, use it well, will probably be the ones that benefit it. >> Because the tooling is very important. You've got to make the application. That's the end state in all this That's the rubber meets the road. >> Bill: Exactly. >> Nick: Absolutely. >> All right, so final question. What're you guys showing here at Discover? What's the big HPC? What's the story for you guys? >> So we're actually showing our Gen 10 product. So this is with the latest microprocessors in all of our Apollo lines. So these are specifically optimized platforms for HPC and now also artificial intelligence. We have a platform called the Apollo 6500, which is used by a lot of companies to do AI work, so it's a very dense GPU platform, and does a lot of processing and things in terms of video, audio, these types of things that are used a lot in some of the workflows around AI. >> Nick, anything spectacular for you here that you're interested in? >> So we did show here. We had video in Meg's opening session. And that was showing the poker result, and I think that was really significant, because it was actually a great amount of computing. It was 19 million core hours. 
So was an HPC AI application, and I think that was a really interesting success. >> The unperfect information really, we picked up this earlier in our last segment with your colleagues. It really amplifies the unstructured data world, right? People trying to solve the streaming problem. With all this velocity, you can't get everything, so you need to use machines, too. Otherwise you have a haystack of needles. Instead of trying to find the needles in the haystack, as they was saying. Okay, final question, just curious on this natural, not natural, federal resource. Natural resource, feels like it. Is there like a line to get in? Like I go to the park, like this camp waiting list, I got to get in there early. How do you guys handle the flow for access to the supercomputer center? Is it, my uncle works there, I know a friend of a friend? Is it a reservation system? I mean, who gets access to this awesomeness? >> So there's a peer reviewed system, it's fair. People apply for large allocations four times a year. This goes to a national committee. They met this past Sunday and Monday for the most recent. They evaluate the proposals based on merit, and they make awards accordingly. We make 90% of the system available through that means. We have 10% discretionary that we can make available to the corporate sector and to others who are doing proprietary research in data-intensive computing. >> Is there a duration, when you go through the application process, minimums and kind of like commitments that they get involved, for the folks who might be interested in hitting you up? >> For academic research, the normal award is one year. These are renewable, people can extend these and they do. What we see now of course is for large data resources. People keep those going. The AI knowledge base is 2.6 petabytes. That's a lot. For industrial engagements, those could be any length. >> John: Any startup action coming in, or more bigger, more-- >> Absolutely. A coworker of mine has been very active in life sciences startups in Pittsburgh, and engaging many of these. We have meetings every week with them now, it seems. And with other sectors, because that is such a great opportunity. >> Well congratulations. It's fantastic work, and we're happy to promote it and get the word out. Good to see HP involved as well. Thanks for sharing and congratulations. >> Absolutely. >> Good to see your work, guys. Okay, great way to end the day here. Democratizing supercomputing, bringing high performance computing. That's what the cloud's all about. That's what great software's out there with AI. I'm John Furrier, Dave Vellante bringing you all the data here from HPE Discover 2017. Stay tuned for more live action after this short break.
SUMMARY :
John Furrier and Dave Vellante interview Bill Mannel of HPE and Dr. Nick Nystrom of the Pittsburgh Supercomputing Center at HPE Discover 2017. They discuss Bridges, built for the National Science Foundation to democratize HPC by unifying big data, simulation, and AI; the growth of HPE's HPC business and the halfway point of the SGI integration; the shift from equation-driven to data-driven science; the Libratus poker victory, a 19-million-core-hour HPC and AI application; precision medicine work with UPMC, the Institute for Precision Medicine, and Carnegie Mellon; and HPE's Gen 10 Apollo platforms, including the GPU-dense Apollo 6500. Ninety percent of Bridges is allocated through peer review, with 10% discretionary for industry.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
National Science Foundation | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
London | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
Institute for Precision Medicine | ORGANIZATION | 0.99+ |
Pittsburgh | LOCATION | 0.99+ |
Carnegie Mellon | ORGANIZATION | 0.99+ |
Nick | PERSON | 0.99+ |
Meg | PERSON | 0.99+ |
Nick Nystrom | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Bill | PERSON | 0.99+ |
Bill Mannel | PERSON | 0.99+ |
90% | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
20 | QUANTITY | 0.99+ |
10% | QUANTITY | 0.99+ |
University of Pittsburgh Medical Center | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
10 | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Discover | ORGANIZATION | 0.99+ |
Hewlett Packard Enterprise | ORGANIZATION | 0.99+ |
Yosemite | LOCATION | 0.99+ |
30 | QUANTITY | 0.99+ |
Nystrom | PERSON | 0.99+ |
one year | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Nicholas Nystrom | PERSON | 0.99+ |
HPC | ORGANIZATION | 0.99+ |
two next guests | QUANTITY | 0.99+ |
SGI | ORGANIZATION | 0.99+ |
Kasparov | PERSON | 0.99+ |
2.6 petabytes | QUANTITY | 0.99+ |
80s | DATE | 0.98+ |
one piece | QUANTITY | 0.98+ |
two years ago | DATE | 0.98+ |
70s | DATE | 0.98+ |
Yellowstone | LOCATION | 0.98+ |
five megabyte | QUANTITY | 0.98+ |
one platform | QUANTITY | 0.97+ |
two different techniques | QUANTITY | 0.97+ |
Pitt Cancer Center | ORGANIZATION | 0.97+ |
20 years | QUANTITY | 0.97+ |
Monday | DATE | 0.97+ |
Dr. | PERSON | 0.96+ |
one thing | QUANTITY | 0.96+ |
Goh | PERSON | 0.96+ |
one | QUANTITY | 0.95+ |
first | QUANTITY | 0.95+ |
one place | QUANTITY | 0.95+ |
day two | QUANTITY | 0.94+ |
four terabytes | QUANTITY | 0.94+ |
past Sunday | DATE | 0.93+ |
Pittsburgh Supercomputing Center | ORGANIZATION | 0.93+ |
University of Pittsburgh life sciences | ORGANIZATION | 0.9+ |
last quarter | DATE | 0.89+ |
four times a year | QUANTITY | 0.89+ |
Linux | TITLE | 0.88+ |
19 million core hours | QUANTITY | 0.86+ |
nine months | QUANTITY | 0.84+ |
decades | QUANTITY | 0.83+ |
Bridges | ORGANIZATION | 0.81+ |