William Bell, PhoenixNap | VMware Explore 2022


 

(upbeat music) >> Good afternoon, everyone. Welcome back to the CUBE's day one coverage of VMware Explore 2022, live from San Francisco. I'm Lisa Martin. Dave Nicholson is back with me. Welcome back to the set. We're pleased to welcome William Bell as our next guest, the executive vice president of products at Phoenix NAP. William, welcome to the CUBE. Welcome back to the CUBE. >> Thank you, thank you so much. Happy to be here. >> Talk to us, and the audience, a little bit about Phoenix NAP. What is it that you guys do? Your history, mission, value prop, all that good stuff. >> Absolutely, yeah. So we're a global infrastructure-as-a-service company. Foundationally, we are trying to build pure-play infrastructure as a service, so that customers that want to adopt cloud infrastructure, but maybe don't want to adopt platform as a service and, you know, program themselves to a specific API, can have that cloud adoption without the vendor lock-in of a specific platform service. And we're doing this in 17 regions around the globe today. Yeah, so it's just flexible, easy. That's where we're at. >> I like flexible and easy. >> Flexible and easy. >> You guys started back in Phoenix, hence the name. Talk to us a little bit about the evolution of the company in the last decade. >> Yeah, 100%. We built a data center in Phoenix expecting that we could build the centralized network access point of Phoenix, Arizona, and I am super proud to say that we've done that: 41 carriers, all three hyperscalers in the building today, getting ready to expand. However, that's not the whole story, right? What a lot of people don't know is that we founded an infrastructure-as-a-service company (it was called Secured Servers; it no longer exists) at the same time, and we built it up as a sidecar to Phoenix NAP, and then we merged all of those together to form this global infrastructure platform that customers can consume.
>> Talk to us about the relationship with VMware. Obviously, here we are at VMware Explore. We're hearing there are about 7,000 to 10,000 people here. People are ready to be back to hear from VMware and its partner ecosystem. >> Yeah, I mean, I think that we have this huge history with VMware that maybe a lot of people don't know. We were one of the first six VSPPs in 2011, at the end of the original vCloud data center infrastructure program that they did. And so early on, there were only 10 or 11 of us, and most of those names don't exist anymore. We're talking Terremark, Bluelock, some of these guys. Good companies, but they've been bought or whatnot. And here's plucky Phoenix NAP, still, you know, offering great VMware cloud services for customers around the globe. >> What are some of the big trends that you're seeing in the market today, where customers are in this multi-cloud world? I love the theme of this event: the center of the multi-cloud universe. Customers are in that by default. How do you help them navigate that and really unlock the value of it? >> Yeah, I think for us, it's about helping customers understand what applications belong where. We're very, very big believers in the right home for the right application. And if you drill down on that, right application, right home, it's about the infrastructure choices you're making for that application, and those choices lead to super exciting optimizations, right? If you, as an example, have a large media streaming business and you park it in a public cloud hyperscaler and you just eat those egress fees, like, it's a big deal. Right? And there are other ways to do that, right? If your application needs to scale from zero cores to 15,000 cores for an hour, you know, there are hyperscalers for that, right? And people need to learn how to make that choice. Right app, right home, right infrastructure.
And that's kind of what we help them do. >> It's interesting that you mentioned the concept of being a pure play in infrastructure as a service. >> Yeah. >> At some point in the past, people would have argued that infrastructure as a service only exists because SaaS isn't good enough yet. In other words, if there's a good-enough SaaS application, then you don't want IaaS, because who wants to mess around with infrastructure as a service? Do you have customers who look at what they're developing as so much a core of their value proposition that they want to own it? I mean, is that a driving factor? >> I would challenge that and say that we're seeing almost every enterprise become a SaaS company. And when that transition happens, SaaS companies actually care a lot about the cost basis, efficiency, and uptime of their application. And ultimately, while they don't want to be in the data center business anymore, it doesn't mean that they want to pay someone else to do things that they feel wholly competent in doing. And we're seeing this exciting transition of open source technologies, open source platforms, becoming good enough that they don't actually have to manage a lot of things. They can do it in software, and the hardware's kind of abstracted. But that, I would say, is actually a boon for infrastructure as a service as an independent thing. It's been minimized over the years, right? People talk about hyperscalers as being cloud infrastructure companies, and they're not. They're cloud platform companies, right? And the infrastructure is high quality. It is easy to access and scale, right? But ultimately, if you're just using one of those hyperscalers for that infrastructure, building VMs and doing a bunch of things yourself, you're not getting the value out of that hyperscaler. And ultimately that infrastructure's very expensive if you look at it that way.
>> So it's interesting, because if you look at what infrastructure consists of, which is hardware and software-- >> Yeah. >> People who said, eh, IaaS is just a bridge to a bright SaaS future, people also will make the argument that the hardware doesn't matter anymore. I imagine that you are doing a lot of optimization with both hardware and stuff like the VMware cloud stack that you deploy as a VCPP partner. >> Absolutely, yeah. >> So talk about that. >> Absolutely. >> I mean, you agree. If I were to just pose a question to you: does hardware still matter? Does infrastructure still matter? >> Way more than people think. >> Well, there you go. So what are you doing in that arena, specifically with VCPP? >> Yeah, absolutely. I think a good example of that: at the last in-person VMworld, in 2019, we showcased a piece of technology that we had been working on with Intel for about two years at the time, which was Intel DC persistent memory. Right? And we launched the first VMware cloud offering to have Intel DC persistent memory onboard, so that customers with VMs that needed that technology could leverage it, with the integrations in vSphere 6.7 and ultimately in vSphere 7, right? Now, I do think that was maybe a swing-and-a-miss technology, potentially, but we're going to see it come back. And that specialized infrastructure deployment is a big part of our business, right? Helping people identify: you know, this application, if you had this accelerator, this piece of infrastructure, this quality of network, it can be better, faster, cheaper, right? That kind of optimization mentality matters a lot. And VMware plays a critical role in that, because it still gives the customer the operational excellence that they need without having to do everything themselves, right? And our customers rely on VMware a lot for that whole story: operationally efficient, easy to manage, automated.
All those things make a lot of difference to our VMware customers. >> Speaking of customers, what are you hearing, if anything, from VMware customers that are your joint customers about the Broadcom acquisition? Are they excited about it? Are they concerned about it? And how do you talk about that? >> Yeah, I mean, I think that everyone that's in the infrastructure business is doing business with Broadcom, all right? And we've had so many businesses that we've been engaged with that have ultimately been an acquiree. I can say that this one feels different, only in the size of the acquisition. VMware carries so much weight. VMware's brand exceeds Broadcom's brand, in my opinion. And I think ultimately... I don't know anything that's not public, right-- >> Well, they rebranded. By the way, on the point of brand, they rebranded their software business VMware. >> Yeah, I mean, that's what I was going to say. That was the word on the street. I don't know if that's official. Is that a-- >> Well, that's been-- >> But that's the word, right? >> That's what they've said. Well, but when Avago acquired Broadcom, they said, "We'll call ourselves Broadcom." >> Absolutely. Why wouldn't you? >> So yeah. So I imagine that what's been reported is likely-- >> Likely. Yeah, I 100% agree. I think that makes a ton of sense, and we can start to see even more great intellectual property in software. That's where, you know, all of these businesses, CA, Symantec, VMware, and all of the acquisitions that VMware has made, it's a great software intellectual property platform, and they're going to be able to get so much more value out of it. The leadership team that VMware has here is going to make a world of difference to the Broadcom software team. Yeah, so I'm very excited, you know. >> There were a lot of announcements this morning, a lot of technical product announcements.
What did you hear that excites you about the evolution of VMware, as well as the partnership and the value in it for your customers? >> You know, one of the fastest growing parts of our business is this metal-as-a-service infrastructure business, using very specific technologies to do very interesting things. That makes a big difference in our world and for our customers. So anything like smartNICs, the disaggregated hypervisor, accelerators as a first-class citizen in VMware, all that stuff makes the Phoenix NAP story better. So I'm super excited about that, right? Yeah. >> Well, it's interesting, because VCPP is not a term that people who are not insiders know of. What they know is that there are services available in hyperscale cloud providers where you can deploy VMware cloud stacks. Well, you can deploy those VMware cloud stacks with you. >> Absolutely. >> In exactly the same manner. However, to your point, with all of this talk about disaggregation of CPU, GPU, DPU, I would argue that you're in a better position to deploy that in an agile way than a hyperscale cloud provider would be. And I'm not trying to-- >> No, yeah. >> I'm not angling for a job in your PR department. >> Come on in. >> But the idea that when you start talking about something like metal as a service, as an adjunct or adjacent to a standard deployment of a VMware cloud, it makes a lot of sense. >> Yeah. >> Because there are people who can't do everything within the confines of what the SDDC-- >> Yes. >> Consists of. >> Absolutely. >> So, I mean... Am I on the right track? >> No, you are 100% hitting it. I think that point you made about agility to deliver new technology, right, is a key moment in our kind of delivery every single year, right? As a new chip comes out, an Intel chip or accelerator or something like that, we are likely going to be first to market, by six months potentially, and possibly the only ones ever.
Persistent memory never launched in public cloud in any capacity, but we have customers running on it today, and it is providing extreme value for their business, right? When, you know, the discrete GPUs come from the just-announced Flex Series GPU from Intel, you're likely not going to see them in public cloud hyperscalers quickly, right? Over time, absolutely. We'll have them day one. Ice Lake came out, and you could get it in our metal-as-a-service platform the morning it launched, on demand, right? Those types of agility points... because they're hyperscale by nature, if they can't hyperscale it, they're not doing it, right? And I think that that is a very key point. Now, as it comes toward VMware, we're driving this intersection of building out VCF, or VMware Cloud Foundation, which is going to be a key point of the VMware ecosystem. As you see this transition to core-based licensing and some of the other things that have been talked about, VMware Cloud Foundation is going to be the stack that they expect their customers to adopt and deliver. And the fact that we can automate that, deliver it instantaneously, in a couple of hours, on hardware that you don't need to own, in networks you don't need to manage, but yet you are still in charge, keys to the kingdom, ready to go, just like you're doing it in your own data center: that's the message that we're driving for. >> Can you share a customer example that you think really shines a big flashlight on the value that you guys are delivering? >> Definitely. You know, we've had the pleasure of working with the Make-A-Wish Foundation for the last seven years. And ultimately, you know, we feel very compelled that every time we help them do something unique or different, or save money, that money's going into helping some child that's in need, right? And so we've done so many things together. VMware has stepped up to the plate over the years, done so many things with them. We've sponsored stuff.
We've done grants; we've done all kinds of things. The other thing I would say is we are helping the City of Hope and the Translational Genomics Research Institute with sequencing single-cell RNA so that they can fight COVID, so that they can build cures... well, not cures, but build therapies for colon cancer and things like that. And so I think that, you know, this is a guiding light for us internally: helping people through efficiency and change. And that's what we're looking for. We're looking for more stories like that. If you have a need, we're looking for people to come to us and say, "This is my problem. This is what this looks like. Let us see if we can find a solution that's a little bit different, a little bit out of the box, and doesn't have to change your business dramatically." Yeah. >> And who are you talking to within customers? Is this a C-level conversation? >> Yeah, I mean, I would say that we would love it to be. I think most companies would love to have that, you know, CFO conversation with every single customer. I would say VPs of engineering, increasingly, especially as we become more API-centric; those folks are driving a lot of those purchasing decisions. Five years ago, I would've said director of IT. Now, today, it's the VP of engineering, usually software-oriented folks looking to deliver some type of application on top of a piece of hardware or in a cloud, right? And, you know, I guess that's even another point: VMware's doing so much work on the API side that they don't get any credit for. Terraform, Ansible, all these integrations; VMware's doing so much in this area, and they just don't get any credit for it, ever, right? It's just like, VMware's the dinosaur, and they're just not, right? But that's the thing people think of today, because of the hype of the hyperscalers. I think that's... Yeah.
>> When you're in customer conversations, maybe with prospects, are you seeing more customers that have gone all in on a hyperscaler and are having issues, coming to you guys saying, help, this is getting way too expensive? >> Yeah, I think it's the unexpected growth problem, or even the expected growth problem, where they just thought it would be okay, but they've suffered some type of competitive pressure that they've had to optimize for, and they just didn't really expect it. And so I think that, increasingly, we are finding organizations that quickly adopted public cloud. Either they did a full digital transformation of their business and then a transformation of their applications, and a lot of them now feel very locked in, because every application is just reliant on X hyperscaler forever; or they didn't transform anything, they just migrated and parked it, and the bills that are coming in are just like, whoa, like, how is that possible? We are typically never recommending getting out of the public cloud. If I say the right home for the right application, I am by default saying that there are right applications for hyperscalers. Parking the VMware environment that you just migrated to a hyperscaler? Not the right application. You know, I would love you to be with me, but if you want to do that, at least go to VMC on AWS, or go to OCVS or GCVE, or any of those. If it's going to be a Google or an Amazon, and that's just the mandate and you're going to move your applications, don't just move them into native. Move them into a VMware solution, and then, if you still want to make that journey, that full transformation, go ahead and make it. I would still argue that that's not the most efficient way, but, you know, if you're going to do anything, don't just dump it all into the native hyperscaler stuff. >> Good advice.
>> So what do typical implementations look like with you guys, when you're moving on-premises environments, going back to the VCPP, SDDC model? >> Absolutely. Do you have people moving and then transforming and re-platforming? What does that look like? What's the typical-- >> Yeah. I mean, I do not believe that anybody has fully made up their mind about exactly where they want to be, "I'm only going to be in this cloud." It's never a closed story, right? And so even when we get customers, you know, we firmly believe that the right place to just pick up and migrate to is a VCPP cloud. Better cost effectiveness, typically better technology, and, you know, better service, right? We've been part of VMware for 12 years. We love the technology behind VMC on AWS, it's fantastic, but it's still just infrastructure, without any help at all, right? They're going to be there to support their technology, but they're not going to help you with the other stuff. We can do some of those things. And if it's not us, it's another VCPP provider that has the expertise that you might need. So yes, we help you quickly and easily migrate everything to a VMware cloud. And then you have a decision point to make. You're happy where you are; you're leveraging public cloud for certain applications; you're leveraging VMware cloud offerings for the standard applications that you've been running for years. Do you transform them? Do you keep them? What do you do? All those decisions can be made later. But I stress that repurchasing all your hardware again, staying inside your colo, and doing everything yourself: for me, it's like a company telling me they're going to build a data center for themselves, a single-tenant data center. Like, no one's doing that, right? But there are more options out there than just "I'm going to go to Azure," right? Think about it. Take the time; assess the landscape.
And VMware cloud providers as a whole, all 17,000 of us or whatever across the globe... people don't know that that group of companies is potentially the third or fourth largest cloud in the world. Right? That's the power of the VMware cloud provider ecosystem. >> Last question for you as we wrap up here. Where can the audience go to learn more about Phoenix NAP and really start test-driving with you guys? >> Absolutely. Well, if you come to phoenixnap.com, I guarantee you that we will retarget you, and you can click on a banner later if you don't want to stay there. (Lisa laughs) But yeah, phoenixnap.com has all the information that you need. We also put out tons of helpful content. So if you're looking for anything technology-oriented and you're just thinking, "I want to upgrade to Ubuntu," you're likely going to end up on a phoenixnap.com page looking for that. And then you can find out more about what we do. >> Awesome, phoenixnap.com. William, thank you very much for joining Dave and me, talking about what you guys are doing and what you're enabling customers to achieve as the world continues to evolve at a very dynamic pace. We appreciate your insights. >> Absolutely, thank you so much. >> For our guest and Dave Nicholson, I'm Lisa Martin. You've been watching the CUBE, live from VMware Explore 2022. Dave and I will be joined by a guest consultant for our keynote wrap at the end of the day in just a few minutes. So stick around. (upbeat music)

Published Date: Aug 31, 2022



Max Peterson, AWS | AWS Public Sector Online Summit


 

>> From around the globe, it's the CUBE, with digital coverage of AWS Public Sector Online, brought to you by Amazon Web Services. >> Hello, I'm John Furrier, host of the CUBE. We're here covering AWS's international public sector virtual event, and we have a great guest. The star of the program is Max Peterson, good friend of the CUBE, and Vice President of AWS International for Public Sector. Max, great to see you. Thanks for coming on this virtual remote interview, this CUBE interview. >> Hey, John. Great to be back on the CUBE, even if it is virtual. >> Well, you know, we're not face to face; we have to go virtual. So, the CUBE virtual; you've got a public sector summit, virtual. This is the time of the year when normally we'd be out on the road, in Bahrain, Japan, Asia-Pacific, Europe. We'd be out at the summits, talking to all the guests and presenting the update on public sector. But we have to do it remotely; a little bit of a trade-off. The good news, with COVID, at least for you guys, is that it's a global media network, and with these remote interviews, public sector is seeing a lot more global activity. And that's what I want to get your thoughts on. What is the business update, internationally, for public sector? I'm sure that with COVID and the pandemic, you're seeing a lot of activity. How is the public sector business doing internationally? >> John, you know, you mentioned one of the silver linings of a pretty bad situation with the COVID pandemic, and that's been that it has meant that people have to be resourceful. Governments have to be resourceful, and so there's been a tremendous amount of innovation. People have gotten used to using modern cloud technology to support remote work and remote learning, out of necessity. We've had to figure out how to deliver far greater health care services using digital technology: telemedicine, digital social care, Chime rooms.
It involved getting up and running collaboration space for the remote workers using work work docks. And it involves setting up a complete call center on the cloud, using Amazon time and literally that was done in less than a week. Another example, really ambitious example, which again is a testament to the innovation and, uh, the capability, the capability that AWS brings to customers. I'm in India. They had a number of tele medicine applications. They were available for a fee, but they didn't have a universal way to reach the vast population in India. And so when the pandemic hit three organization that was responsible for the public health component was challenged to get a no cost tele consultation hella medicine system up and running for outpatient services that could scale to reach a billion people. Um, they did that in 19 days. They got the system up and running Now hasn't gotten to a billion people online at one time. But there right now, doing 6000 consultations a day with about 4000 doctors, and they're headed toward 100,000 consultations today. Eso just to your point, speed and scale. We're seeing it across the board from from our public sector customers. >>You know, it's just mind boggling just to kind of pinch myself from it in 19 days. It's crazy, right? I mean, crazy fast If you throw back to the eighties and nineties when I broke into the business, you know, young gun client server was all the rage back then. And if you wanted to do, like a big apt upon an oracle s a p, whatever it was years, it was months just to do planning. E mean, I mean, think about the telemedicine example 19 days. That's huge. I mean, just the scale is just off the charts. So So I mean, even if you're not a believer in cloud I don't feel should be should just go home and retire at this point because it's just obvious. Uh, the question I wanna ask you specifically because Theresa brought this up on my last interview with her. 
And I wanna ask you the same question is, what is AWS doing specifically to help customers? I know customers are helping themselves. You mentioned that. What are you guys doing? Toe? Accelerate this. How are you helping of you guys changed a little bit. Can you just share what you guys specifically doing to help customers pivot toe not only solving it, but having a growth strategy behind it? >>Yeah, John, that's a great question. Some of the things that we're doing our long standing programs and so customers from day one have had a need for skills and workforce development. We keep on doubling down on those programs. Things like a W s academy aws educate our restart programs in different countries. So number one is we continue to help customers double down on getting the right cloud skills to enable the digital workforce. The second thing, in fact, if I can, for just amendment, um, there is actually a section of the public sector online called the New Workforce, which talks about both the digital skills that are required and then also some of the remote working skills that we need to help folks with. So So workforce is a big one. Um, the second one. Yeah, and I'm super excited about this because we've opened up the opportunity, form or customers around the globe to participate in our city on the Cloud Challenge Onda That gives a great opportunity to showcase and highlight the innovation of public sector customers and, you know, win some AWS credits and technical assistance to help them build their programs. But I think one of the most the things I'm most proud about in the last 6 to 9 months was when the when this pandemic struck and we listen to our customers about what they needed. We came out with something called the AWS Diagnostic Development Initiative, and that was a program specifically aimed at providing technical assistance. 
Um, a ws cloud credits all to researchers to help them, um, tackle the tough questions that need to be answered to help us deal with and then hopefully resolve the pandemic. >>So on the international front, like I said earlier in the open, we would've been in Bahrain. That's a new region, only a couple of years old, Obviously the historic, um this, um, geopolitical things happening there, opening things up, that's been a very successful region. This is the playbook. Can you just give us an update on some of the successes in the different regions by rain and then a pack and other areas? What? Some of the highlights? >>Sure, John, One of the things that I think it's super exciting is that all of these customers are developing new capabilities right now. Um, one example from Egypt. Uh, they had to get literally an entire student population back to school. When the pandemic hit on DSO. They quickly pivoted to bringing a online learning management system or LMS up on the cloud on AWS. Um, and they have been able to continue to teach classes, literally to millions of students there. We've seen that same sort of distance learning online education across the globe. Another example would be when countries needed to figure out how to beam or effective in that sort of time tested, contact tracing process. So So when ah person has been found to have the the flu or the illness the subject illness, um, they typically have a lot of manual contact tracers that have to try to identify kind of where that person's been and see if they can. Then, um, helped to control the spread of whatever the diseases Kobe 19. In this case, um, we put together with governments across the world with a W s partners across the world again in very fast order, automated systems to help governments manage this, um, Singapore is a super example. 
India is a massively scaled example, but we did it in countries across the globe, and we did it by working with them and the partners there to specifically respond to their needs. So everybody's case, while similar at a high level, was unique in the way that they had to implement it. >> And it's been a great, great ride internationally and in the US with COVID. You guys have a current situation where you're providing benefits, and obviously the cloud itself, for customers to build those modern apps. The question I want to ask you, Max, as an executive at AWS yourself: you've been in the industry with public sector pre-COVID. You know, there's before COVID and there's after COVID; it's going to be kind of like that demarcation line in society. It has become a global thing. I just did an event with Cal Poly, as was mentioned before we came on, a small little symposium that would have been face to face, but because we did it virtually, it's now global. re:Invent is coming up, and that's going to be essentially virtual. So it's going to be more global, less physical, less face to face. Everything has, you know, no boundaries. So how does that impact things? How do you guys look at that? Because it impacts you, I guess, a little bit, because there are no boundaries, right? >> You know, John, I think this plays into what we're talking about in terms of people and governments and organizations getting used to new ways of working, and so some of our new workforce development is based around that, not just the digital skills and the cloud skills. A couple of the things that we've recognized, by the way: it's different, but done well, there are new benefits. And so one of the things that we've seen is where people employ Chime, for instance, our video conferencing solution, or solutions from our partners like Zoom and others, people have been able to actually be more in touch, for instance, with elder care.
There were a number of countries that introduced shielding, which meant that people couldn't physically go and visit their moms and dads. And so what we've seen is a number of systems and care organizations that have responded and are helping the elderly to use this new tech. It's really actually heartwarming to see those connections happen again, even in this virtual world. And the interesting thing is, you can actually step up the frequency, so you don't have to be there physically, but you can be there, and interact and support, with a number of these tools. I think one of the other big learnings we've seen is that many organizations, and just about every public sector group, have to work with their constituents on the phone. Of course, we've got physical offices, whether it's a hospital or an outpatient center or a social care center, but you always have to have a way to work on phones. What's happened during the COVID-19 pandemic is there have been surges where information needed to get out to citizens, or where citizens literally rushed the phone lines to be able to get the most current information back. And the legacy call systems have been completely overwhelmed; they're inadequate. We've seen customers launch the online call center in the cloud, using Amazon Connect as their starting point, but then continuously innovating, and so starting to use things like Lex to be able to deliver a chatbot function. In the US, for example, one of our partners, Smartronix, was able to automate the welfare and social care systems for a number of different states, to the point now where 90-plus percent of those calls get initially handled and satisfied using a chatbot, which frees up agents to deal with the more difficult inbound calls that they get. >> I've got to ask you, where do we go from here? What's next for these organizations in a post-COVID world?
You know, if we were sitting at a cocktail party, or sitting down having dinner, or, as we're talking remotely here, how would you explain to me what's next? Where do we go from here? And how do organizations take that next post-COVID recovery and growth step? What's your take? >> John, I think that's a fantastic question to ask. Let me tell you what we learn from our customers every day, because we see them try to do new things. If I had to take out my sort of crystal ball, I think we're in version one of figuring out how we work in this new environment. I think there are a couple of key things that we're going to see. Number one: resilience and continuity of service are not going to be optional. Everybody is coming to expect that government, care, not-for-profits, and education are going to be able to seamlessly continue to deliver their core services irrespective of these world events or emergencies, and we see customers now really getting that right. It used to take, you talked about it, heck, you couldn't get a system up and running in 19 days; you'd be lucky if you cut a purchase order in 19 days. Citizens and constituents aren't going to accept that anymore, right? That's one big change that I think is with us and will keep on driving cloud adoption. I think the next one is, how do we start putting the pieces together in ways that make some of this invisible? An example kind of starts with that example in the US, with the partner that was building systems to help welfare and social care call centers operate more smoothly. But if you think about the range of AWS services and the building blocks that customers have, we'll find customers starting to create that virtual experience in a version 2.0 way, where they tie the contact center into chatbots and into transcription.
Like, for instance, being able to have a conversation with the patient and, using Comprehend Medical, actually get a medically accurate transcription, so the doctor can focus on that patient interaction and not on the actual data capture. And then if that patient asks, "Well, gee, Doc, could you give me more information about XYZ medication, or about what a course of treatment looks like?", instead of tying up the doctor's time, you could use a tool like Amazon Polly to then go text-to-speech and give all of that further rich information to that citizen. I think some of the same scenarios apply, right? How do we go from this very fast version 1.0 response to a more immersive, less tech-evident capability that strings these things together to meet unique use cases or unique needs? >> Yeah, I think that's totally right. The 19 days, I'm blown away by that. But I think, you know, we thought about agility; that was a cloud term, being more agile with your code. Business agility has come on the scene, and then with business agility you have what I call business latency. You went from years to months, months to days, and I think now, as you get into the next versions, it's days to hours, hours to minutes, hours to seconds, because when you look at the scale of the cloud, some of the things we were talking about with what's going on with Space Force, and globally around space latency, technically, and business latency, this is the new dynamic, and it's going to be automation and AI. This is the new reality; I think COVID points that out. What's your reaction to that? And give a final message to the AWS international community out there on how to get through this and what you guys are doing.
>> Yeah, John, I think your observation is right: increasingly, there needs to be a connectedness between the services that these public sector customers deliver. And so that connectedness can be in terms of making sure that a citizen who is on their life journey doesn't need to continuously explain to government where they're at, but rather that government learns how to create secure, scalable data stores, so that they understand the journey of the citizen and can provide help through that journey. So it becomes more citizen-centric. I think another example is in the entire healthcare arena, where what we have found is that the ability to securely collaborate on very complex problems and complex data sets, like genomes, is increasingly important. And so I think what you'll find, and we're seeing it today with customers like Genomics England and the UK Biobank, is that they're in fact creating these secure collaboration spaces so that the best researchers can work against these very important data sets in a secure, yet trusted, collaboration environment. So I think we're seeing much more of that. And I would say the third thing that we're learning from our customers is just how important that skills and workforce piece is. With the accelerated pace, we continue to see pressure on the smart skills and resources that our customers need. Fortunately, we've got a great global partner ecosystem, but you'll see us continuing to push that forward as an agenda that will help customers. So I guess my parting comment would be: the customers that attend the summit are from all over the world, and I hope they find something that's useful to them in pursuing their mission and in their journey to the cloud. And John, it is always a pleasure to join theCUBE. Thanks very much for the time today. >> Thank you, Max. Great call out. I'll call it out
one more time, to amplify the learnings in workforce development: starting younger and younger, the path to proficiency is quick. You could be in cloud computing, cybersecurity, or modern application development, all hot areas. The new playbook is cloud, and it's all there online. And of course, Max, with the global footprint of the regions, the world has changed, and it's going to be a pretty busy time for you. We'll be covering it. Thanks for coming on. >> That's great. Thanks, John. >> Okay, I'm John Furrier with theCUBE. You're watching the AWS Public Sector Summit, the international online event. I'm John Furrier, your host. Thank you for watching.

Published Date : Oct 20 2020



Dr. Eng Lim Goh, Joachim Schultze, & Krishna Prasad Shastry | HPE Discover 2020


 

>> Narrator: From around the globe, it's theCUBE, covering the HPE Discover Virtual Experience, brought to you by HPE. >> Hi everybody, welcome back. This is Dave Vellante for theCUBE, and this is our coverage of Discover 2020, the virtual experience of HPE Discover. We've done many, many Discovers; usually we're on the show floor, but theCUBE has been virtualized. We talk a lot at HPE Discover about storage and servers and infrastructure and networking, which is great, but the conversation we're going to have now is really about helping the world solve some big problems. And I'm very excited to welcome back to theCUBE Dr. Eng Lim Goh. He's a senior vice president and CTO for AI at HPE. Hello, Dr. Goh. Great to see you again. >> Hello. Thank you for having us, Dave. >> You're welcome. And our next guest is Professor Joachim Schultze, who is, among other things, Professor for Genomics and Immunoregulation at the University of Bonn. Professor, welcome. >> Thank you. Welcome. >> And then Prasad Shastry is the Chief Technologist for the India Advanced Development Center at HPE. Welcome, Prasad. Great to see you. >> Thank you. Thanks for having me. >> So guys, we have a CUBE first. I don't believe we've ever had three guests in three separate time zones. I'm in a fourth time zone. (guests chuckling) So I'm in Boston, Dr. Goh, you're in Singapore, Professor Schultze, you're in Germany, and Prasad, you're in India. So we've got four different time zones, plus our studio in Palo Alto, which is running this program. So we've actually got five time zones, a CUBE first. >> Amazing. >> Very good. (Prasad chuckles) >> Such is the world we live in. So we're going to talk about some of the big problems. I mean, here's the thing: we're obviously in the middle of this pandemic, we're thinking about the post-isolation economy, et cetera. People compare this, no surprise, to the Spanish flu in the early part of the last century.
They talk about the Great Depression, but the big difference this time is technology. Technology has completely changed the way we've approached this pandemic, and we're going to talk about that. Dr. Goh, I want to start with you. You've done a lot of work on this topic of swarm learning. My limited knowledge of this is that we're kind of borrowing from nature: you think about bees looking for a hive as sort of independent agents, but somehow they come together and communicate. Tell us, what do we need to know about swarm learning and how it relates to artificial intelligence, and we'll get into it. >> Oh, Dave, that's a great analogy, using a swarm of bees. That's exactly what we do at HPE. So let's use that example here. When deploying artificial intelligence, a hospital does machine learning on its outpatient data, which could be biased due to demographics and the types of cases it sees more of. Sharing patient data across different hospitals to remove this bias is limited, given privacy or even sovereignty restrictions, right? Like, for example, across countries in the EU. HPE swarm learning fixes this by allowing each hospital to continue learning locally, but at each cycle we collect the learned weights of the neural networks, average them, and send them back down to all the hospitals. And after a few cycles of doing this, all the hospitals will have learned from each other, removing biases, without having to share any private patient data. That's the key: the ability to let you learn from everybody without having to share your private patient data. That's swarm learning. >> And part of the key to that privacy is blockchain, correct? I mean, you've been involved in blockchain and invented some things in blockchain, and that's part of the privacy angle, is it not? >> Yes, yes, absolutely.
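Mechanically, the cycle Dr. Goh describes (train locally, collect only the learned weights, average them, send them back down) can be sketched as below. This is a minimal simulation with three logistic-regression "hospitals" seeing skewed demographics; it is an illustrative stand-in, not HPE's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_biased_data(n, mean):
    # Each "hospital" sees a demographically skewed slice of the population.
    X = rng.normal(mean, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    return X, y

def local_step(w, X, y, lr=0.5):
    # One gradient-descent step of logistic regression on local data only.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return w - lr * (X.T @ (p - y)) / len(y)

hospitals = [make_biased_data(200, m) for m in (-1.0, 0.0, 1.0)]
weights = [np.zeros(2) for _ in hospitals]

for cycle in range(50):
    # Train locally on private data; the raw X, y never leave each hospital.
    weights = [local_step(w, X, y) for w, (X, y) in zip(weights, hospitals)]
    # Share only the parameters: average them and redistribute to every site.
    merged = np.mean(weights, axis=0)
    weights = [merged.copy() for _ in hospitals]

print(merged)
```

Each site alone would fit a model skewed toward its own demographic; averaging the parameters every cycle yields one model that reflects all three populations, which is the bias-removal effect described above.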
There are different ways of doing this kind of distributed learning, and swarm learning differs from many of the other distributed learning methods, which require you to have some central control, right? So Prasad and the team and us came up with this together: we have a method where, instead of central control, you use blockchain to do the coordination. So there is no central control or coordinator anymore, which is especially important if you want a truly distributed, swarm-type learning system. >> Yeah, no need for a so-called trusted third party or adjudicator. Okay, Professor Schultze, let's go to you. You're essentially the use case of this swarm learning application. Tell us a little bit more about what you do and how you're applying this concept. >> I'm actually by training a physician, although I haven't seen patients for a very long time. I'm interested in bringing new technologies to what we call precision medicine: new technologies both from the laboratories and from the computational sciences, marrying them, and thereby enabling precision medicine, which is medicine built on new measurements, many measurements of molecular phenotypes, as we call them. Basically, that happens on different levels, for example the genome, or the genes that are transcribed from the genome. We have thousands of such data points, and we have to make sense of them. This can only be done by computation, and as we discussed already, one of the hopes for the future is that with the new wave of developments in artificial intelligence and machine learning, we can make more sense of the huge amounts of data that we generate right now in medicine. That's what we're interested in: finding out how we can leverage these new technologies to build new diagnostics and new therapy outcome predictors, so as to know whether a patient will benefit from a diagnostic or a therapy or not. That's what we have been doing for the last 10 years.
The most exciting thing I have been through in the last three, four, five years is really when HPE introduced us to swarm learning. >> Okay, and Prasad, you've been helping Professor Schultze actually implement swarm learning for specific use cases. We're going to talk about COVID, but maybe describe a little bit about your participation in this whole equation. >> Yes, thanks. As Dr. Eng Lim Goh mentioned, we have used blockchain as a backbone to implement the decentralized network, and through that we're enabling a privacy-preserved, decentralized network without any central control points, as the Professor explained in terms of precision medicine. So one of the use cases we are looking at involves blood transcriptomes. Think of different hospitals having different sets of transcriptome data which they cannot share due to privacy regulations. Now each of those hospitals will train the model on their local data, which is available in that hospital, and share the learnings coming out of that training with the other hospitals. We iterate over several cycles to merge all these learnings and finally arrive at a global model. Through that, we are able to get to a model whose performance equals that of collecting all the data into a central repository and training on it. And when we are doing this, there can be multiple kinds of challenges. It's good to do decentralized learning, but what if you have non-IID data? What if there is a dropout in the network connections? What if some of the compute nodes just crash, or are not seeing a sufficient amount of data? That's something we tried to build into the swarm learning framework: handling the scenarios of having non-IID data. In a simple word, we could call it having biases.
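The blockchain backbone Prasad describes, coordination with no central control point, can be pictured with a toy append-only ledger: peers record only parameter updates (never raw patient data), and the merge leader for each round is derived from the chain itself, so no standing coordinator is needed. The `Ledger` class, the leader-election rule, and the two-hospital weights below are invented for illustration; this is not HPE's actual blockchain layer.

```python
import hashlib
import json

class Ledger:
    """Toy append-only hash chain standing in for the blockchain layer."""
    def __init__(self):
        self.blocks = []

    def append(self, payload):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps(payload, sort_keys=True)
        h = hashlib.sha256((prev + body).encode()).hexdigest()
        self.blocks.append({"prev": prev, "payload": payload, "hash": h})

    def verify(self):
        # Tampering with any earlier block breaks every later hash.
        prev = "0" * 64
        for b in self.blocks:
            body = json.dumps(b["payload"], sort_keys=True)
            if b["prev"] != prev:
                return False
            prev = hashlib.sha256((prev + body).encode()).hexdigest()
            if prev != b["hash"]:
                return False
        return True

ledger = Ledger()
# Each peer registers only its locally learned parameters.
peers = {"hospital_a": [0.2, 0.4], "hospital_b": [0.6, 0.8]}
for name, w in peers.items():
    ledger.append({"round": 1, "peer": name, "weights": w})

# The merge leader is derived from the chain state: no standing coordinator.
leader = sorted(peers)[len(ledger.blocks) % len(peers)]
merged = [sum(ws) / len(peers) for ws in zip(*peers.values())]
ledger.append({"round": 1, "merged_by": leader, "weights": merged})
print(merged, ledger.verify())
```

Because every block chains to the previous one, all peers can audit who contributed which parameters in which round, which is what replaces the trusted third party mentioned earlier.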
An example of such bias: one hospital might see, let's say in terms of tumors, a large number of cases, whereas another hospital might see very few. So we have implemented some techniques for doing the merging, providing different kinds of weights or tunable parameters, to overcome this set of challenges in swarm learning. >> And Professor Schultze, you've applied this to really try to better understand and attack the COVID pandemic. Can you describe in more detail your goals there and what you've actually done and accomplished? >> Yeah, we have actually really done it for COVID. The reason why we were trying to do this already now is that we have generated these transcriptomes from COVID-19 patients ourselves, and we realized that the signature of the disease is so strong and so unique, compared to other infectious diseases which we looked at in some detail, that we felt the blood transcriptome would be a good starting point to identify patients, but maybe even more importantly, to identify those with severe disease. If you can identify them early enough, you can basically care for those patients more, and find particular treatments and therapies for them. And the reason why we could do that is because we also had some other test cases done before. So we used the time wisely, with large data sets that we had collected beforehand as use cases, and learned how to apply swarm learning, and we are now basically ready to test directly with COVID-19. So this is really a stepwise process. Although it was extremely fast, it was still a stepwise process, guided by data where we had much more knowledge, which was with leukemia. We had worked on that for years and had collected many data sets, so we could simulate swarm learning very nicely.
And based on all the experience we gained together with Prasad and his team, we could then quickly apply that knowledge to the data that are coming in now from COVID-19 patients. >> So, Dr. Goh, it really comes back to how we apply machine intelligence to the data, and this is such an interesting use case. I mean, in the United States we have 50 different states with 50 different policies, and different counties. We certainly have differences around the world in terms of how people are approaching this pandemic. And so the data is very rich and varied. Let's talk about that dynamic. >> Yeah, for the listeners or viewers who are new to this: the workflow could be that a patient comes in, you take the blood, and you send it through an analysis. DNA is made up of genes, and our genes are expressed in two steps: first they are transcribed, then translated. What we are analyzing is the middle step, the transcription stage, and tens of thousands of these transcripts are produced from the analysis of the blood. The thing is, can we find, in those tens of thousands of items, or biomarkers, a signature that tells us this is COVID-19, and how serious it is for this patient? Now, the data is enormous for every patient, and then you have a collection of patients in each hospital with a certain demographic. The point is, how do you get to share all that data in order to have good training of your machine? The issue, of course, is privacy of data, right? And as such, how do you then share that information if privacy restricts you from sharing the data? So in this case, swarm learning shares only the learnings, not the private patient data.
So we hope this approach will allow all the different hospitals to come together and unite, sharing the learnings and removing biases, so that we have high accuracy in our predictions while at the same time maintaining privacy. >> It's really well explained, and I would like to add, at least for the European Union, that this is extremely important, because the lawmakers and the governments have clearly stated that even under these crisis conditions they will not relax the privacy laws; compliance with privacy laws has to stay as high as outside of the pandemic. And I think there are good reasons for that, because if you lower the bar now, why shouldn't you lower the bar at other times as well? So I think that was a wise decision. If you could see in the medical field how difficult it is to discuss how we share data fast enough, I think swarm learning is really an amazing solution to that, because this discussion is basically gone. Now we can discuss how we do learning together, rather than discussing what would be a lengthy procedure to move towards sharing, which is very difficult under the current privacy laws. That's why I was so excited when I first learned about it: we can do things faster that otherwise are either not possible or would take forever. And for a crisis, that's key. That's absolutely key. >> And as a byproduct, there's also the fact that all the data stays where it is, at the different hospitals, with no movement. >> Yeah, yeah. >> You learn locally, but only share the learnings. >> Right, very important in the EU, of course. Even in the United States, people are debating contact tracing and using technology, cell phones and smartphones, to do that. And I don't know what the situation is like in India, but nonetheless, Dr. Goh's point about just sharing the learnings, bubbling them up, trickling just kind of metadata, if you will, back down, protects us.
But at the same time, it allows us to iterate and improve the models. And that's a key part of this: the starting point and the conclusions that we draw from the models change, and we've seen this with the pandemic; it changes weekly, even daily. We continuously improve the conclusions and the models, don't we? >> Absolutely. As Dr. Goh explained well, we could look at the clinics or the testing centers, which may be in remote places. We could collect those data, run them through transcriptome sequencing, and then, as and when we learn from these new samples, all of that new local data could participate in swarm learning, not just within a state or a country, but globally, to share the learnings from all this new data. We could also implement some kind of continuous learning, to pick up the new signals or new insights that come with new sets of data, and help to immediately deploy them back into inference, into the practice of identification. To do this, one of the key things we have realized is the importance of making it very simple: simple to convert machine learning models to swarm learning, because we know that our subject matter experts will develop these models on their choice of platforms, and simple to integrate into the complete machine learning workflow, from collecting the data and pre-processing, to model training, to putting it into inferencing and looking at performance. We have kept that in mind from the beginning while developing it, so we built it as pluggable microservices, packaged with containers.
So the whole library can be delivered as a container, with decentralized management commands and controls, which help to manage the whole swarm network: to start it, initiate it, and enroll new hospitals or new nodes into the swarm network. At the same time, we also looked at the tasks of the data scientists and tried to make it very, very easy for them to take their existing models and convert them to the swarm learning framework, so that their models can participate in decentralized learning. We have made it a set of callable REST APIs, and I can say that in the examples we are working on with the Professor, whether in the case of leukemia or COVID, with a 10-layer neural network model, we could convert it into a swarm model with less than 10 lines of code changes. That's the kind of simplicity we are looking at, so that it helps to make things quicker and faster and to realize the benefits.
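The "less than 10 lines of code changes" idea can be pictured as a callback hooked into an existing training loop. `SwarmCallback` below is a stub written for illustration; the real library's class names, signatures, and parameter-merging logic over the decentralized network will differ.

```python
import numpy as np

class SwarmCallback:
    """Stand-in for the swarm hook described in the interview.
    The real framework would merge parameters with peers over the
    decentralized network; this stub only records each sync point."""
    def __init__(self, sync_interval):
        self.sync_interval = sync_interval
        self.syncs = 0

    def maybe_sync(self, step, weights):
        if step % self.sync_interval == 0:
            self.syncs += 1  # real version: average with peer weights here
        return weights

# An existing local training loop needs only a couple of added lines:
w = np.zeros(3)
cb = SwarmCallback(sync_interval=5)   # added line 1
for step in range(1, 21):
    w = w + 0.1                       # placeholder for a real training step
    w = cb.maybe_sync(step, w)        # added line 2

print(cb.syncs)  # -> 4 sync points over 20 steps
```

The point of the design is that the data scientist keeps their existing model and loop; only the sync hook is added, which is what makes the conversion a few-line change.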
So, this is an important point, right? The ability to remove biases here. And you can see biases in the different hospitals because of the type of cases they see and the demographics. Now, the other point that's very important to reemphasize is what precise Professor Schultze mentioned, right? It's how we made it very easy to implement this.Right? This started out being so, for example, each hospital has their own neural network and they training their own. All you do is we come in, as Pasad mentioned, change a few lines of code in the original, machine learning model. And now you're part of the collective swarm. This is how we want to easy to implement so that we can get again, as I like to call, hospitals of the world to uniting. >> Yeah. >> Without sharing private patient data. So, let's double click on that Professor. So, tell us about sort of your team, how you're taking advantage of this Dr. Goh, just describe, sort of the simplicity, but what are the skills that you need to take advantage of this? What's your team look like? >> Yeah. So, we actually have a team that's comes from physicians to biologists, from medical experts up to computational scientists. So, we have early on invested in having these interdisciplinary research teams so that we can actually spend the whole spectrum. So, people know about the medicine they know about them the biological basics, but they also know how to implement such new technology. So, they are probably a little bit spearheading that, but this is the way to go in the future. And I see that with many institutions going this way many other groups are going into this direction because finally medicine understands that without computational sciences, without artificial intelligence and machine learning, we will not answer those questions with this large data that we're using. So, I'm here fine. 
But I also realized that when we entered this project, we had basically our machine learning model from the leukemias, and it really took almost no effort to get it into the swarm. So, we were ready to go in a very short time. But I would also like to say, and this goes to the bias that exists in medicine between different places, which Dr. Goh described very nicely: one aspect is the patients and so on, but there are also the techniques, how we do clinical assays; we're using different robots, different automation, to do the analysis. And we actually tried to find out what swarm learning does if we deliberately introduce such a bias through the preparation itself. So, I did the following thing. We know that there are different ways of measuring these transcriptomes. And we simulated that two hospitals had an older technology and a third hospital had a much newer technology, which is good for understanding the biology and the diseases, but the new technology is prone to no longer generating data that can be used to learn and then predict on the old technology. So it basically deteriorates: if you take the new one, build a classifier model, and try it on old data, it doesn't work anymore. So, that's a very hard challenge. We knew it didn't work in the old way. So, we pushed it into swarm learning, and the swarm recognized that and dealt with it; the bias didn't matter anymore, because the results were even better by bringing everything together. I was astonished. I mean, it's absolutely amazing that although we knew about these limitations in that one hospital's data, the swarm basically could deal with it. I think there's more to learn about these advantages. Yeah. And I'm very excited. It's not only transcriptomes that people do. I hope we can very soon do it with imaging; the DZNE has 10 sites in Germany connected to 10 university hospitals.
There's a lot of imaging data, CT scans and MRIs. And this is the next domain in medicine where we would like to apply swarm learning as well. Absolutely. >> Well, it's very exciting being able to bring this to the clinical world and make it a sort of ongoing learning. I mean, again, coming back to the pandemic: initially we thought putting people on ventilators was the right thing to do; we learned, okay, maybe not so much. The efficacy of vaccines and other therapeutics: it's going to be really interesting to see how those play out. My understanding is that the vaccines coming out of China were built for speed, to get to market fast, and it will be interesting to see if the U.S. tries to build vaccines that are more effective long term. Let's see if that actually occurs, and some of those other biases and tests that we can do. That is a very exciting, continuous use case, isn't it? >> Yeah, I think so. Go ahead. >> Yes. In fact, we have another project ongoing to use transcriptome data and other data, like metabolic and cytokine data, all these biomarkers from the blood of volunteers during a clinical trial. The whole idea of looking at all those biomarkers, and we're talking tens of thousands of them, is the same thing again: to see if we can streamline clinical trials by looking at that data and training with that data. So again, here you go, right? It's very good that we have many vaccine candidates out there right now; the next long pole in the tent is the clinical trial. And we are working on that too, by applying the same concept, but for clinical trials. >> Right. And then Prasad, it seems to me that this is a good example of an edge use case. Right? You've got a lot of distributed data. And I know you've spoken in the past about the edge generally: where data lives, and moving data back to a sort of centralized model.
But of course you don't want to move data if you don't have to; you want real-time AI inferencing at the edge. So, what are you thinking in terms of other edge use cases where swarm learning can be applied? >> Yeah, that's a great point. We can look at this both in the medical field and in other fields. The Professor just mentioned radiographs, so think of using this with medical image data in a future scenario. We could have an edge node sitting next to these medical imaging systems, very close to them. And then, as and when a system produces a medical image, which could be an X-ray, a CT scan, or an MRI scan, the system sitting next to it, attached to it, already has the model built with swarm learning and can do the inferencing. And with the new data, if it sees some kind of outlier, if the new images probably show a new signal, it can use that new data to initiate another round of swarm learning with all the involved nodes, all the other medical imaging systems across the globe. So, all this can happen without sharing any of the raw data outside of the systems: just doing the inferencing, and then getting all of these systems to come together and build a better model. >> So, the last question. Yeah. >> If I may, we've got to wrap, but I first heard about swarm learning, maybe read about it, probably 30 years ago, and then just ignored it and forgot about it. And now here we are today. Blockchain, of course, we first heard about with Bitcoin, and now you're seeing all kinds of really interesting examples. But Dr. Goh, start with you. This is really an exciting area, and we're just getting started. Where do you see swarm learning by, let's say, the end of the decade? What are the possibilities? >> Yeah. You can see this being applied in many other industries, right?
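The edge imaging workflow Prasad describes above can be sketched roughly as follows. Everything here is an assumption for illustration (the confidence-based outlier test, the threshold, and all names): the node beside the scanner classifies each image with the current swarm model and only raises a flag to start a new training round; no raw images ever leave the node.

```python
import math

def softmax(scores):
    # Convert raw model outputs into probabilities.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def edge_inference(scan_scores, confidence_floor=0.7):
    """Classify a scan locally; if the model is unsure (a possible
    outlier or new signal), flag the node to initiate a new swarm
    learning round. The raw scan never leaves the edge node."""
    probs = softmax(scan_scores)
    confidence = max(probs)
    label = probs.index(confidence)
    trigger_swarm_round = confidence < confidence_floor
    return label, confidence, trigger_swarm_round

# A clear-cut scan: high confidence, no retraining needed.
_, conf_clear, retrain_clear = edge_inference([5.0, 0.0, 0.0])
# An ambiguous scan: low confidence, kick off a swarm round.
_, conf_odd, retrain_odd = edge_inference([0.2, 0.3, 0.1])
print(retrain_clear, retrain_odd)  # False True
```

A production system would use a proper out-of-distribution detector rather than bare softmax confidence, but the division of labor is the same: inference stays local, and only the trigger (and later, learned weights) crosses the network.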
So, we've spoken about life sciences and the healthcare industry, but you can imagine the scenario of manufacturing, where a decade from now you have intelligent robots that can learn from watching a craftsman building a product and then replicate it, right? By just looking, listening, learning. And imagine now you have multiple of these robots, all sharing their learnings across boundaries, across state boundaries, across country boundaries, provided you allow that, without having to share what they are seeing. Right? They can share what they have learnt. You see, that's the difference: without needing to share what they see and hear, they can share what they have learned across all the different robots around the world, all in the community that you allow. By the time you mentioned, even in manufacturing, you'll have intelligent robots learning from each other. >> Professor, I wonder, as a practitioner, if you could lay out your vision for where you see something like this going in the future. >> I'll stay with the medical field for the moment, although I agree it will be in many other areas. Medicine has two traditions, for sure. One is learning from each other. That's an old tradition in medicine, going back thousands of years. But what's interesting, and even more so in modern times: we have no tradition of sharing data. It's just not really inherent to medicine. So, that's the mindset. Yes, learning from each other is fine, but sharing data is not so fine. Swarm learning deals with that: we can still learn from each other, we can help each other by learning, and this time by machine learning. We don't actually have to deal with the data sharing anymore, because the data stays with us. So for me, it's a really perfect situation. Medicine could benefit dramatically from this, because it goes along with the traditions, and that's very often very important for getting adopted.
And on top of that, what is also not seen very well in medicine is that there's a hierarchy, in the sense that certain institutions rule others. Swarm learning is exactly helping us there, because it democratizes things, onboarding everybody. Even if you're a small entity, a small institution or a small hospital, you can become a member of the swarm, and as a member you become important. And there is no central institution that rules everything. This democratization I really love, I have to say. >> Prasad, we'll give you the final word. I mean, your job is about helping to apply these technologies to solve problems. What's your vision for this? >> Yeah. The Professor mentioned one of the very key points, the democratization of AI, and I'd like to expand on it a little bit. It has a very profound application. Dr. Goh mentioned manufacturing. If you look at any field, whether health science, manufacturing, or autonomous vehicles, thanks to that democratization, and also using the blockchain, we are building a framework to incentivize the people who own certain sets of data to bring the insights from that data to the table for swarm learning. So, we could build some kind of alternative monetization framework, or an incentivization framework, on top of the existing swarm learning stack, which we are working on, to enable the participants to bring their data, or their insights, and then get rewarded accordingly. So eventually, we could make this a completely democratized AI, with a complete monetization and incentivization system built in, to enable all the parties to seamlessly work together. >> So, I think this is just a fabulous example. We hear a lot in the media about the tech backlash, breaking up big tech, how tech has disrupted our lives.
But this is a great example of tech for good, and responsible tech for good. And if you think about this pandemic, if there's one thing it has taught us, it's that disruptions outside of technology, pandemics or natural disasters or climate change, et cetera, are probably going to be bigger disruptions than technology, yet technology is going to help us solve those problems and address those disruptions. Gentlemen, I really appreciate you coming on theCUBE and sharing this great example, and I wish you the best of luck in your endeavors. >> Thank you. >> Thank you. >> Thank you for having me. >> And thank you everybody for watching. This is theCUBE's coverage of HPE Discover 2020, the virtual experience. We'll be right back after this short break. (upbeat music)

Published Date : Jun 24 2020

ENTITIES

Entity | Category | Confidence
Prasad | PERSON | 0.99+
India | LOCATION | 0.99+
Joachim Schultze | PERSON | 0.99+
Dave | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
Dave Vellante | PERSON | 0.99+
Boston | LOCATION | 0.99+
China | LOCATION | 0.99+
Schultze | PERSON | 0.99+
Germany | LOCATION | 0.99+
Singapore | LOCATION | 0.99+
United States | LOCATION | 0.99+
10 sites | QUANTITY | 0.99+
Prasad Shastry | PERSON | 0.99+
10 layer | QUANTITY | 0.99+
10 university hospitals | QUANTITY | 0.99+
COVID-19 | OTHER | 0.99+
Goh | PERSON | 0.99+
50 different policies | QUANTITY | 0.99+
two hospitals | QUANTITY | 0.99+
thousands | QUANTITY | 0.99+
two steps | QUANTITY | 0.99+
Krishna Prasad Shastry | PERSON | 0.99+
pandemic | EVENT | 0.99+
thousands of years | QUANTITY | 0.99+
Eng Lim Goh | PERSON | 0.99+
HPE | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
ACO | ORGANIZATION | 0.99+
DCNE | ORGANIZATION | 0.99+
European union | ORGANIZATION | 0.99+
each hospitals | QUANTITY | 0.99+
less than 10 lines | QUANTITY | 0.99+
both hospitals | QUANTITY | 0.99+
one | QUANTITY | 0.99+
Rachel Grimes | PERSON | 0.99+
each | QUANTITY | 0.99+
three guests | QUANTITY | 0.99+
each cycle | QUANTITY | 0.99+
third hospital | QUANTITY | 0.99+
each hospital | QUANTITY | 0.98+
four | QUANTITY | 0.98+
30 years ago | DATE | 0.98+
India Advanced Development Center | ORGANIZATION | 0.98+
both | QUANTITY | 0.98+
tens of thousands | QUANTITY | 0.98+
fourth time zone | QUANTITY | 0.98+
three | QUANTITY | 0.98+
one aspect | QUANTITY | 0.97+
EU | LOCATION | 0.96+
five years | QUANTITY | 0.96+
2020 | DATE | 0.96+
today | DATE | 0.96+
Dr. | PERSON | 0.95+
Pasad | PERSON | 0.95+

Dr. Swaine Chen, Singapore Genomics Institute | AWS Public Sector Summit 2018


 

>> Live from Washington D.C., it's theCUBE. Covering AWS Public Sector Summit 2018. Brought to you by Amazon Web Services and its ecosystem partners. (upbeat music) >> Hey, welcome back everyone, we're here live in Washington D.C. for the Amazon Web Services Public Sector Summit, I'm John Furrier, with Stu Miniman. Our next guest is Dr. Swaine Chen, Senior Research Scientist of Infectious Disease at the Genome Institute of Singapore, and also an assistant professor at the National University of Singapore. Great to have you on. I know you've been super busy, you were on stage yesterday, we tried to get you on today, so thanks for coming in and bringing it into our two days of coverage here. >> Thank you for having me, I'm very excited to be here. >> So we're in between breaks here, and we were talking about some of the work around DNA sequencing, which is super fascinating. I know you've done some work there, but I want to talk first about your presence here at the Public Sector Summit. You were on stage; tell your story, because you have a very interesting presentation around some of the cool things you're doing in the cloud. Take a minute to explain. >> That's right. So one of the big things that's happening in genomics is that the rate of data acquisition is outstripping Moore's Law, right? So for a single institute to try to keep up with compute for that, we really can't do it. That really is the big driver for us to move to cloud, and why we're on AWS. And of course, once we have that capacity, there are lots of things we can do. My research is mostly on infectious diseases, so one of the cases where you suddenly have a huge amount of data to process is an outbreak. And that just happens, it happens unexpectedly. So we had one of these, which I talked about in the keynote yesterday, on Group B Streptococcus. This is a totally unexpected disease.
And so all of a sudden we had all this data we had to process, to try to figure out what was going on with that outbreak. And unfortunately, we're pretty sure that there are going to be other outbreaks in the future as well, so it's about being prepared for that. AWS helps provide some of that capacity, and we're continuously trying to upgrade our analytics for that as well. >> So give me an example of where this hits home for you, where it works. What is it doing specifically? Is it changing the timeframe? Is it changing the analysis? Where is the impact for you? >> Yeah, so it's all of this, right? It's all the standard things that AWS is providing all the other companies. It's cheaper for us to just pay for what we use, especially when we have super spiky workloads, like in the case of an outbreak. If all of a sudden we need to take over the cluster internally, well, there are going to be a lot of people screaming about that, right? So we can kick that out to the cloud, just pay for what we use; we don't have to requisition all the hardware to do it. And it also gives us the capacity, as data just comes in more and more, to think about increasing our scale. This is something that's been happening incessantly in science, incessantly in genomics. Just as an example from my lab: we're studying infectious diseases, mostly bacterial genomics, the genomes of bacteria that cause infections. We've increased our scale 100x in the last four years in terms of the data sets that we're processing. And from the samples coming in, we're going to do another 10x in the next two years. We just wouldn't have been able to do that on our current hardware.
Yeah, Dr. Chen, fascinating space. For years there was discussion of how much it costs, and the cost to do everything has gone down. But what has been fascinating, as you've talked about, is that data outstripping Moore's Law: not only what you can do, but what you can do in collaboration with others, because there are many others around the globe doing this. Talk about that level of data, and how the cloud enables that. >> Yeah, so that's actually another great point. Genomics is very strong on open source, especially in the academic community. Whenever we publish a paper, all the genomic data that's in that paper, it gets, uh oh (laughs). Whenever we, whenever we publish-- >> Mall's closing in three minutes. >> Three minute cloud count. >> Three minutes, okay. Whenever we publish a paper, that data goes up and gets submitted to these public databases. So when I talk about 100x scale, that's really incorporating, worldwide, globally, all the data that's present for that species. So as an example, I talked about Group B Streptococcus; another bacteria we study a lot is E. coli, Escherichia coli. It causes diarrhea, urinary tract infections, bloodstream infections. When we pull down a data set locally in Singapore, with 100, 200, 300 strains, we can now integrate that with a global database of 10,000, 20,000 strains and gain a global perspective on it. We get higher resolution, and really AWS helps us pull in from these public databases and gives us the scale to burst out the processing of that many more strains. >> So the DNA piece of your work, does that tie into this at all? I mean, obviously you've done a lot of work on the DNA side; does that play into this as well? >> The? >> The DNA work you've done in the past. >> Yeah, so all of the stuff that we're doing is DNA, basically. So there are other frontiers that have been explored quite a lot.
Looking at RNA, at proteins, at carbohydrates and lipids; but at the Genome Institute in Singapore, we're very focused on the genetics, and mostly doing DNA. >> How has the culture changed in academic communities with cloud computing? We're seeing sharing, and certainly data sharing is a key part. Can you talk about that dynamic, and what's different now than it was, say, five or even 10 years ago? >> Huh, I'd say that the academic community has always been pretty open, right? It's always been a very strong, open source compatible kind of community. So data was always supposed to be submitted to public databases. That didn't always happen, but as the data scale goes up and we see the value of having a global perspective on infectious diseases and looking for the source of an outbreak, the imperative to share data gets stronger. Looking at outbreaks like Ebola, where in the past people might have tried to hold data back because they wanted to publish it first: from a public health point of view, the imperative to share that data immediately is much stronger now that we see the value of having it out there. So I would say that's one of the biggest changes, the imperative is there more. >> I agree. I think academic people I talk to always want to share; it might just not be uploaded fast enough. So time is key. But I've got to ask you a personal question. In all the work you've done, you've seen a lot of outbreaks. This is kind of scary stuff. Have you had those aha moments, just mind-blowing moments where you go, oh my God, we did that because of the cloud? Can you point to some examples where it's like, that is awesome, that's great stuff? >> Well, we certainly have quite a few examples. I mean, outbreaks are just unexpected. Figuring out any of them and being able to have an impact, to say this is how the transmission happened, or this is what the source is.
This is how we should try to control this outbreak. I mean, all of those are great stories. I would say that, to be honest, we're still early in our transition to the cloud, and we're running a hybrid environment right now. Really, when we need to burst out, then we'll do that with the cloud. But most of our examples, so far, you know, we're still early in this for cloud. >> So the spiky workloads are the key value for you, when the spikes hit. >> So what excites you about the future of the technology? What do you believe we'll be able to do as we accelerate, as prices go down and we get access to more information, access to more? What do you think we're going to see in this field in the next, you know, one to three years? >> Oh, I think one of the biggest changes that's going to happen is that we're going to shift completely how we do, for example, outbreak detection. It's already happening in the U.S. and Europe, and we're trying to implement this in Singapore as well. Basically, the way we detect outbreaks right now is we see a rise in the number of cases; you see it at the hospitals, you see a cluster of cases of people getting sick. And what defines a cluster? You kind of need enough of these cases that it goes statistically above your baseline. But when we look at genomic data, we can find clusters of outbreaks that are buried in the baseline, because we just have higher resolution. We can see the same bacteria causing infections in groups of people. It might be a small outbreak, it might be self-limited. But we can see this stuff happening, and it's buried below the baseline. So this is really what's going to happen: instead of waiting until a bunch of people get sick before you know that there's an outbreak, we're going to see it in the baseline, or as it's coming up with two, three, five cases. We can save hundreds of infections.
And that's one of the things that's super exciting about moving toward a future where sequencing is just going to be a lot cheaper, and sequencing will be faster. Yeah, it's a super exciting time. >> And more research is a flywheel; more research comes over the top. >> Yep, exactly, exactly. >> That's great work. Dr. Swaine Chen, thanks for coming on theCUBE. We really appreciate-- >> No, thank you. >> Congratulations, great talk on the keynote yesterday, really appreciate it. This is theCUBE bringing you all the action here as we close down our reporting. They're going to shut us down; theCUBE will go on until they pull the plug, literally. Thanks for watching. I'm John Furrier, with Stu Miniman and Dave Vellante. Amazon Web Services Public Sector Summit, thanks for watching. (upbeat techno music)
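The below-the-baseline detection Dr. Chen describes can be sketched as follows; the SNP-distance threshold, the minimum cluster size, and the single-linkage grouping are all illustrative assumptions, not his lab's pipeline. The idea is that with genomic resolution, a handful of near-identical isolates stands out even while total case counts look perfectly normal.

```python
def cluster_isolates(distances, ids, snp_threshold=5):
    """Group isolates whose pairwise SNP distance is within the threshold
    (single-linkage, via a small union-find). `distances` maps pairs
    (a, b) to SNP counts; pairs above the threshold stay separate."""
    parent = {i: i for i in ids}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for (a, b), d in distances.items():
        if d <= snp_threshold:
            parent[find(a)] = find(b)

    groups = {}
    for i in ids:
        groups.setdefault(find(i), []).append(i)
    # Flag clusters of 3+ near-identical isolates as potential outbreaks,
    # regardless of whether overall case counts exceed the usual baseline.
    return [g for g in groups.values() if len(g) >= 3]

# Five isolates seen during a "normal" week: three are nearly identical.
ids = ["s1", "s2", "s3", "s4", "s5"]
dist = {("s1", "s2"): 2, ("s2", "s3"): 3, ("s1", "s3"): 4,
        ("s1", "s4"): 40, ("s4", "s5"): 35}
print(cluster_isolates(dist, ids))  # [['s1', 's2', 's3']]
```

Real surveillance pipelines compute these distances from assembled genomes and tune thresholds per species, but the detection logic is the same: cluster by genetic distance first, case counts second.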

Published Date : Jun 21 2018

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
Singapore | LOCATION | 0.99+
Stu Miniman | PERSON | 0.99+
John Ferrier | PERSON | 0.99+
two | QUANTITY | 0.99+
Three minutes | QUANTITY | 0.99+
Swaine Chen | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Genome Institute | ORGANIZATION | 0.99+
Chen | PERSON | 0.99+
three minutes | QUANTITY | 0.99+
Escherichia coli | OTHER | 0.99+
100 | QUANTITY | 0.99+
10x | QUANTITY | 0.99+
U.S. | LOCATION | 0.99+
Washington D.C. | LOCATION | 0.99+
two days | QUANTITY | 0.99+
Europe | LOCATION | 0.99+
three | QUANTITY | 0.99+
100x | QUANTITY | 0.99+
yesterday | DATE | 0.99+
today | DATE | 0.99+
one | QUANTITY | 0.99+
three years | QUANTITY | 0.98+
Public Sector Summit | EVENT | 0.98+
E. coli | OTHER | 0.97+
Dr. | PERSON | 0.97+
five | QUANTITY | 0.96+
Ebola | EVENT | 0.96+
Amazon Web Services Public Sector Summit | EVENT | 0.96+
The Medicinal National University of Singapore | ORGANIZATION | 0.96+
outbreak | EVENT | 0.95+
theCUBE | ORGANIZATION | 0.95+
Singapore Genomics Institute | ORGANIZATION | 0.94+
10,000, 20,000 strains | QUANTITY | 0.94+
AWS Public Sector Summit 2018 | EVENT | 0.94+
Amazons Web Services Public Sector Summit | EVENT | 0.94+
outbreaks | EVENT | 0.93+
first | QUANTITY | 0.92+
five cases | QUANTITY | 0.91+
hundreds of infections | QUANTITY | 0.91+
Moore | PERSON | 0.91+
last four years | DATE | 0.87+
Group B Streptococcus | OTHER | 0.84+
200, 300 strains | QUANTITY | 0.83+
next two years | DATE | 0.81+
single institute | QUANTITY | 0.81+
Streptococcus | OTHER | 0.76+
Genome institute of Singapore | ORGANIZATION | 0.76+
10 years ago | DATE | 0.75+
Group B | OTHER | 0.67+
people | QUANTITY | 0.5+
Scientist | PERSON | 0.48+
Infectious Disease | ORGANIZATION | 0.45+
bunch | QUANTITY | 0.38+
theCUBE | TITLE | 0.37+

James Lowey, TGEN | Dell Technologies World 2018


 

>> Narrator: Live from Las Vegas, it's theCUBE, covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners. >> Welcome back to theCUBE. We are live in Las Vegas, day two of Dell Technologies World. I am Lisa Martin, with Stu Miniman, my cohost. And we're excited to welcome to theCUBE for the first time the CIO of TGen, Translational Genomics, James Lowey. James, welcome to theCUBE. >> Ah, thank you so much, it's great being here. >> So, genomics, a really interesting topic that we want to get into and understand. How are you making IT and digital and workforce transformation real in it? But first, give our viewers an overview of TGen. It started out about 16 years ago as a very collaborative effort within Arizona and really grew. Talk to us about that. >> Yeah, absolutely. So, TGen is a nonprofit biomedical research institute based in Phoenix, Arizona. As you mentioned, we've been around about 16 years. The inception of the institute was really built around bringing biomedical technology into the state of Arizona. And we're fortunate enough to have a really visionary and gifted leader in Dr. Jeffrey Trent, who is one of the original people to sequence the human genome completely for the first time. So I don't know if you get any better street cred than that when it comes to genomics. >> And as you mentioned before we went live, give our viewers a sense of what it took to sequence the human genome in terms of time and money, and now, 15 years later, how fast it can be done. >> Yeah, so, you know, we've moved from a point where it cost billions of dollars and took many years to complete the first sequence to today, where it takes a little over a day and about $3 thousand. So it's really the democratization of the technology that is driving clinical application, which, in turn, is going to benefit all of us.
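The arithmetic behind that democratization point is worth making concrete (the per-genome figures are Lowey's from the interview; the cohort sizes are illustrative assumptions): at roughly $3,000 and about a day per genome, population-scale studies become a budgeting exercise rather than a moonshot.

```python
COST_PER_GENOME_USD = 3_000  # "about $3 thousand", per the interview
DAYS_PER_GENOME = 1          # "a little over a day", rounded down here

# Illustrative cohort sizes (assumptions, not TGen figures).
for patients in (100, 1_000, 10_000):
    cost = patients * COST_PER_GENOME_USD
    print(f"{patients:>6} genomes -> ${cost:,} "
          f"({patients * DAYS_PER_GENOME:,} sequencer-days on one machine)")
```

The sequencer-days column also shows why labs run many instruments in parallel: the per-genome price fell far faster than the per-genome wall-clock time.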
Yeah, James, genomics is one of those areas where we talk about the opportunity of data, but there's also the challenge of data, because you've got, I have to imagine, orders of magnitude more data than your typical company does. So talk to us a little bit about the role of data inside your organization. >> Well, data is our lifeblood. I mean, we've been generating terascale, then petascale, for many years now. And the fact is, every time you sequence a patient you're generating about 4 terabytes of data for that one patient. So if you're doing 100 patients, do the math; or you're doing a thousand patients. We're talking just an immense volume of data. And really, data is what drives us, because the information that's encoded in our genome is nothing but data, right? It's turning our analog selves into a digital format that we can then interrogate to come up with better treatments to help patients. >> Can you bring this inside? When you talk about the infrastructure that enables that, what I was teasing out with the last question: it's not just about storing data, you need to be able to access the data, you need to be able to share data. So as the CIO, what's your purview? Give us a little bit of a thumbnail sketch of your organization-- >> Oh yeah, yeah, no, that's great. You know, we've been a longtime Isilon customer. The scale-out storage is what has really enabled us to be successful. Our partnership with Dell EMC has spanned many years, and we're fortunate enough to have enough visibility within the organization to get early access to technologies. And really, that's important, because the science moves faster than the IT. So having things like scale-out, super fast flash, new Intel processors: all these things are what really enable us to do our job and to be successful. >> You've been with TGen for a long time now, and the CIO for about three years.
Talk to us about the transformation of the technology, and how you've evolved it to facilitate not just digital transformation and IT transformation but, I imagine, security transformation, since human genetic data is of paramount importance. >> You know, that's a really good point. Security is always on my mind, for obvious reasons, because I would say there's nothing more personally identifiable than your genome. The laws around these things still have not been totally codified. So we're sitting at a point today where we're still uncertain as to how exactly to best protect this very, very important data. To that end, we tend to fail into the closed state of doing things; everything's encrypted. We are big believers in identity management, making sure that the right people have access to the right data at the right time. We've utilized SecureWorks, for instance, for perimeter, logging, and to get their expertise. 'Cause one of the things I've learned in my tenure as CIO is that it's really all about the people, and they're what drive your success. And so I'm fortunate to have a team that's amazing. These folks are some of the best people in their field, and they do a great job at helping us protect the data and get access to the data, as well as thinking about what the next iteration is going to look like. >> When you look at security and data protection as a whole, and you think about everybody getting those home kits and things like that, how has that evolved over the last few years? I'm curious if that impacts your business. >> Well, I think it does impact our business insofar as it creates awareness. And I think it's really fantastic when I attend a cocktail party or something and people come up and ask, "You know, should I get the 23andMe, the Ancestry?" They're really engaged and interested and want to learn about these things.
And I think that's going to spur questions to be asked when they go in to be treated by a physician. Which is really important. I think, I'm a believer that we should own our own data, especially our genomic data, because what's more personal than that? And so we have a lot of challenges ahead, I think, in IT in particular, in protecting, storing, and providing that data to patients. >> Just a quick followup, I'm sure you secure stuff. What's the cocktail answer for that? If, you know, should I get that? Can I trust this company? Is my insurance company and everybody else going to get that? What do you advise the average consumer? >> I would say read the terms of use agreement very carefully. >> So the theme of the event, James, make it real. You know, few things are more real than our own data, our own genomes, what does that theme mean to you from an application perspective? How are you making digital transformation real? And things like the alliance with City of Hope to impact disease study and cures? What is that reality component to you? >> Yeah, you know, I really like the "make it real" theme, and I think it's something that we are doing every day. I think it just speaks to, you know, taking technology, applying it for meaningful use, to actually make a difference, and to do something that has real impact. And I think that at TGen, I've been empowered to build systems that can do that, that can help our scientists and ultimately help patients. You mentioned City of Hope. Our alignment with them is amazing. They have just hired a Chief Digital Officer as they go through a digital transformation of their own. And you know, we're on board in striving to help them go through this process because, as you might be aware, everything's about the data. And that's where we have to focus.
Do you have any hero metrics or things that you point out that say, this is why we're successful. This is why we've made the right decision. This is why we should be doing this in the future. >> Well, I think we're especially fortunate that we can measure our success in people's lives. So, meeting a kid who's in full remission from brain cancer, who was treated using drugs that were derived from being sequenced and run through our labs and then our computational infrastructure, and having them say thank you, I think is pretty much a metric that I don't know how you can beat. >> Talk about making it real. That's where it's really impactful. I'd love to understand your thoughts as you continue to evolve your transformation as a company. We've heard a lot about emerging technologies and what Dell EMC, Dell Technologies, is doing to enable organizations and customers to be able to realize what's possible with artificial intelligence, machine learning, IoT. What are your thoughts about weaving in those emerging technologies to make what TGen delivers even more impactful? >> Well, you just said three of my favorite things that I'm spending a lot of time thinking about. You know, artificial intelligence is absolutely required to interrogate the vast amounts of data that are being created. I mean, this is all unstructured data, so you have to have systems that can store and present that data in such a way that you're going to be able to do something meaningful. IoT is another area where we're spending a lot of time and energy in what we believe is like quantitative medicine. So basically taking measurements all the time to see about changes and then using that to hopefully gain insight into treatment of diseases. You know, machine learning and some of these technologies are also absolutely going to be critical, especially when we start building out drug databases and being able to match the patient with the drug.
>> Yeah, James, bring us inside your organization a little bit. What kind of skill sets do you have to have to architect, operate, a theme of this show, they've got Andy McAfee, who's from MIT, we've spoken to, it's about people and machines. You can't have one without the other. You need to be able to marry those two. How does an organization like yours get ready for that and move forward? >> Yeah, it's a really good point. I think the technology enables the people, and you have to have the right people to help make the decisions about what technologies you get and apply. And I think that the skill sets that we look for are generally people who have a broad view of the world. You know, people who are particular experts, at least on the IT side, are of limited use, because we need people to be able to switch gears quickly and to think about problems holistically. So I'd say most of the IT folks are working across several different disciplines and are really good at that. On the scientific side it's a little different. We're looking for data scientists all the time. So if anybody's watching and wants to come work for a great place, TGen, look us up. Because that's really where we're headed. You know, we have a lot of biologists, we have a lot of molecular biologists, we have people who do statistics, but it's not quite the same as data science. So that's kind of the new area that we're really focused on. >> All right, so James, one of the things I always love to ask when I get a CIO here is, when you're talking to your peers in the industry, how do you all see the role of the CIO changing? What are some of the biggest challenges that you're facing? >> So, yeah, it's a great question. I think the role's changing towards being empowered in the business. And I think that has to be part of the transformation: you have to be aligned completely with what your objectives are. And we're fortunate, you know, we are.
And I feel very lucky to have a boss and a boss's boss who both understand the importance and the value that we bring to the organization. I also see that in the industry, especially in healthcare, a need for folks who are focused beyond just the EMR and daily IT things, to really start looking beyond maybe where you're comfortable. I know that I stretch my boundaries, and I think that in order to be successful as a CIO, that's what you're going to have to do. I think you're going to have to push the envelope. You're going to have to look for new technologies and new ways to make a difference. >> So last question, the big impact that TGen has made on the state of Arizona. I read on LinkedIn that you like building high-performance teams. What are some of the impacts that this has made for Arizona, but also maybe as an example for other states to look to, to be inspired to set up something similar? >> That's really a great question. I think, you know, Arizona made an investment, and in a way, it's easy to measure. So if you come down to the TGen building and realize that that building was the first building that is now surrounded by buildings, including a full-on cancer center, that's all in downtown Phoenix. And it's almost "if you build it, they will come," but it's not just the infrastructure, it really is about the people and identifying the right folks to come in and help build that, to invest in them and to provide basically the opportunity for success. You know, Arizona has really been fortunate, I think, in being able to build out this amazing infrastructure around biotechnology. And you know, we're just getting going. I mean, we've only been doing this for about 16 years and I look forward to the next 16.
>> Well thanks so much, James, for stopping by and talking about how you're applying technologies, not just from Dell EMC but others as well, to make transformation real, to make it real across IT, digital, workforce, security, and doing something that really, literally, has the opportunity to save lives. Thanks so much. >> Well thank you very much, it's been a pleasure. >> We want to thank you for watching theCUBE. I'm Lisa Martin, with my cohost Stu Miniman. We are live, day two of Dell Technologies World. We'll be back after a lunch break. We'll see you then.

Published Date : May 1 2018

Zachary Bosin and Anna Simpson | Veritas Vision 2017


 

>> Announcer: Live from Las Vegas, it's theCube. Covering Veritas Vision 2017. Brought to you by Veritas. >> Welcome back to Las Vegas everybody, this is theCube, the leader in live tech coverage. This is day one of two day coverage of Veritas Vision #VtasVision. My name is Dave Vellante, and I'm here with my co-host Stu Miniman. Zach Bosin is here. He's the director of information governance solutions at Veritas. And Anna Simpson is a distinguished systems engineer at Veritas. Which means, Anna, that you know where all the skeletons are buried and how to put the pieces back together again. Welcome to theCube, thanks for coming on. >> Thank you. >> Thank you. >> Let's start with, we've heard a little bit today about information governance, Zach, we'll start with you. It's like every half a decade or so, there's a new thing. And GDPR is now the new thing. What's the state of information governance today? How would you describe it? >> I think the primary problem that organizations are still trying to fight off is exponential data growth. We release research every year called the Data Genomics Index, and what came back this past year is that data growth has continued to accelerate, as a matter of fact, 49% year over year. So this problem isn't going anywhere, and now it's actually being magnified by the fact that data is being stored not only in the data center on premises, but across the multi-cloud. So information governance, digital compliance, is all about trying to understand that data, control that data, put the appropriate policies against it. And that's really what we try to do in helping customers. >> I always wonder how you even measure data. I guess you could measure capacity that leaves the factory. There's so much data that's created that's not even persistent. We don't even know, I think, how fast data is growing. And I wonder if you guys agree, or have any data suggesting this, but it feels like the curve is reshaping.
I remember when we were talking to McAfee and Brynjolfsson, it feels like the curve is just going even more exponential. What's your sense? >> That's typically what we see. And then you have IoT data coming online, faster and faster, and it really is a vertical shot up. And all different types and new file types. One of the other really interesting insights is that unknown file types jumped 30-40%. Things that we don't even recognize with our file analysis tools today are jumping off the charts. >> It used to be that PST was the little nag, it looks trivial compared to what we face today, Anna. What's your role as a distinguished systems engineer? How do you spend your time? And what are you seeing out there? >> I definitely spend my time dealing with customers around the world. Speaking to them about information governance. Particularly around risk mitigation these days. In terms of the issues we see in information governance, data privacy is a big one. I'm sure you've been hearing about GDPR quite a bit today already. That's definitely a hot topic and something our customers are concerned about. >> Are they ringing you up saying, "Hey, get in here. I need to talk to you about GDPR?" Or is it more you going in saying, "You ready for GDPR?" How does that conversation go? >> It's definitely a combination between the two. I think there is definitely a lot of denial out there. A lot of people don't understand that it will apply to them. It obviously applies if they are storing or processing data which belongs to an EU resident and contains their personal data. I think organizations are either in that denial phase, or otherwise they're probably all too aware, so they've probably started a project, done some assessment, and then they're buried in panic mode, having to remediate all these issues before May next year. >> What's the bell curve look like? Let's make it simple. One is, "we got this nailed." That's got to be tiny.
The fat middle, which is "we get it, we know it's coming, we got to allocate some budget, let's go." Versus kind of clueless. What's the bell curve look like? >> I would say that there's 2% of companies, maybe, that think they have it nailed. >> Definitely in the single digits, low single digits. >> I think maybe another 30% at least understand the implications and are trying to at least put a plan in place. And the rest, 66% or so, still aren't very aware of what GDPR means for their business. >> Dave: Wow. >> Can you take us inside? What's Veritas's role in helping customers get ready for GDPR? We talked to one of Veritas's consulting partners today and it's a big issue, it crosses five to ten different budget areas. So what's the piece that Veritas leads, and what's the part that you need to pull in other partners for? >> Sure thing. So in terms of our approach, we have what we refer to as a wheel, which sort of attacks different parts of the GDPR, so various articles step you through the processes you need to be compliant. Things like locating personal data, being able to search that data, minimizing what you have, because GDPR is really dictating you can no longer data hoard, because you can only keep data which has business value. Further downstream it's obviously protecting the data that has business value, and then monitoring that over time. From a Veritas approach perspective, we're tying those articles obviously to some of our products, some of our solutions. There's also definitely a services component around that as well. When you think about e-discovery or regulatory requirements, when the regulators come in, generally they're not necessarily going to be questioning the tools, they're going to be questioning how you're using those tools to be compliant. It is sort of a combination between tools and services. And then we're also partnering with other consulting companies on that process piece, as well.
Zach, at the keynote this morning, there was a lot of discussion about there's dark data out there, and we need to shine a light on it. I have to imagine that's a big piece of this. Why don't you bring us up to speed. What are some of the new products that were announced that help with this whole GDPR problem? >> And to that point, 52% of data is dark, 33% is rot, 15% is mission critical. Today we announced 23 new connectors for the Veritas information map. This is our immersive visual data mapping tool that really highlights where your stale, orphaned, and non-business-critical data is across the entire enterprise. New connectors with Microsoft Azure, Google Cloud storage, Oracle databases, so forth and so on, there's quite a number that we're adding into the fold. That really gives organizations better visibility into where risk may be hiding, and allows you to shine that light and interrogate that data in ways you couldn't do previously because you didn't have those types of insights. >> Also we heard about Risk Analyzer? >> Yes, that's right. We just recently announced the Veritas Risk Analyzer, this is a free online tool, where anyone can go to Veritas.com/riskanalyzer, take a folder of their data, and try out our brand new integrated classification engine. We've got preset policies for GDPR, so you drop in your files, and we'll run the classification in record speed, and it will come back with where PII is, how risky that folder was, tons of great insights. >> So it's identifying the PII, and how much there is, and how siloed it is? Are you measuring that? What are you actually measuring there? >> We're actually giving you a risk score. When we're analyzing risk, you might find one individual piece of PII, or you might find much more dense PII. So depending on the number of files, and the types of files, we'll actually give you a different risk tolerance.
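As an editorial aside, the density-based scoring Zach describes can be sketched as a toy function. Everything below is an assumption made for illustration: the regex patterns, the scoring formula, and the sample files are invented, and none of it reflects Veritas's actual classification engine.

```python
import re

# Toy density-based risk score: the larger the share of files in a
# folder that contain PII-looking strings, the higher the score.
# The patterns and formula are illustrative inventions only.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def risk_score(files):
    """Score 0-100 from the fraction of files containing PII."""
    if not files:
        return 0.0
    flagged = sum(
        1 for text in files.values()
        if any(p.search(text) for p in PII_PATTERNS.values())
    )
    return round(100 * flagged / len(files), 1)

folder = {
    "notes.txt":   "Meeting moved to Tuesday.",
    "contact.txt": "Reach Anna at anna@example.com",
    "hr.csv":      "ssn,123-45-6789",
}
print(risk_score(folder))  # 2 of 3 files flagged -> 66.7
```

A production classifier would cover many more PII types, weight them by sensitivity, and score density within each file rather than using a simple flagged-file ratio.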
What we're doing with the Risk Analyzer is giving you a preview, or just a snapshot, of the types of capabilities that Veritas can bring to that discussion. >> Who do you typically talk to? Is it the GC, is it the head of compliance, chief risk officer, all of the above? >> Yeah, it's definitely all of the above-- >> Some person who has a combination of those responsibilities, right? >> Yeah, exactly. It's usually, if we're talking GDPR specifically, it's usually information security, compliance, legal, and particularly in organizations now, we're definitely seeing more data privacy officers. And they're the ones that truly understand what these issues are; GDPR or other personal data privacy regulations. >> Let's say I'm the head of compliance, security, risk, information governance, I wear that hat. Say I'm new to the job, and I call you guys in and say, "I need help." Where do I start? Obviously you're going to start with some kind of assessment. Maybe you have a partner to help you do that, I can run my little risk analyzer, sort of leech in machine, and that's good but that's just scratching the surface. I know I have a problem. Where do we start? What are the critical elements? And how long is it going to take me to get where I need to be? >> I think visibility is obviously the first step, which Zach already spoke to. You really have to be able to understand what you have, to then be able to make some educated decisions about that. Generally that's where we see the gap in most organizations today. And that's particularly around unstructured data. Because if it's structured, generally you have some sort of search tools that you can quickly identify what is within there. >> To add on to that, you actually have 24 hours. We can bring back one hundred million items using the information map, so you get a really clean snapshot in just one day to start to understand where some of that risk may be hiding. >> Let's unpack that a little bit.
You're surveying all my data stores, and you see that because you've got the backup data, is that right? >> The backup data is one portion of it. The rest is really coming from these 23 new connectors into those different data stores and extracting and sweeping out that metadata, which allows us to make more impactful decisions about where we think personal data may be, and then you can take further downstream actions using the rest of our tool kit. >> And what about distributed data on laptops, mobile devices, IoT devices, is that part of the scope, or is that coming down the road, or is it a problem to be solved? >> It's a little out of scope for what we do. On the laptop/desktop side of things, we do have our e-discovery platform, formerly known as Clearwell, which does have the ability to go out and search those types of devices, and then you could be doing some downstream review of that data, or potentially moving it elsewhere. It's definitely a place we don't really play right now. I don't know if you had other comments? >> You got to start somewhere. Start within your enterprise. This has always been a challenge. We were talking off camera about FRCP and email archiving. I always thought the backup ... The backup company was in a good spot. They analyzed that data. But then there's the but. Even laptops and mobile devices are backed up, kind of. Do you see the risk and exposures in PII really at the corporate level, or are attorneys going to go after the processes around distributed data, and devices, and the like? >> I think anything is probably fair game at this point given that GDPR isn't being enforced yet. We'll have to see how that plays out. I think the biggest gap right now, or the biggest pain point for organizations, is on unstructured data. It kind of becomes a dumping ground and people come and go from organizations, and you just have no visibility into the data that's being stored there.
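Anna's "dumping ground" point is at bottom a visibility problem. As a minimal sketch of the idea, assuming nothing about how any vendor actually implements it, a toy sweep over one local path might flag files untouched for a year:

```python
import os
import time

# Toy "stale data" sweep: walk a directory tree and flag files whose
# modification time is older than a cutoff. Real data-mapping tools
# sweep metadata across many stores via connectors; this sketch
# covers a single local path only.
STALE_AFTER_DAYS = 365

def stale_files(root, now=None):
    """Return paths under `root` not modified in STALE_AFTER_DAYS days."""
    now = time.time() if now is None else now
    cutoff = now - STALE_AFTER_DAYS * 86400
    hits = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                hits.append(path)
    return sorted(hits)
```

A list like this is where downstream retention or remediation decisions (archive, delete, classify further) would start.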
And generally people like to store things on corporate networks because it gets backed up, because it doesn't get deleted, and it's usually things that probably should not be stored there. >> If I think back to the 2006, 2007 time frame with the Federal Rules of Civil Procedure, which basically said that electronic information is now admissible. And it was a high profile case, I don't want to name the name because I'll get it wrong, but they couldn't produce the data in court, the judge penalized them, but then they came back and said, "We found some more data. We found some more data. We found some more data." Just an embarrassment. It was a one-hundred-million-dollar fine. That hit the press. So what organizations did, and I'm sure Anna you could fill in the gaps, they basically said, "Listen, it's an impossible problem, so we're going to go after email archiving. We're going to put the finger in the dike there, and try to figure the rest of this stuff out later." What happened is plaintiffs' attorneys would go after their processes and procedures, and attack those. And if you didn't have those in place, you were really in big trouble. So what people did is try to put those in place. With GDPR, I'm not sure that's going to fly. It's almost binary. If somebody says, "I want you to delete my data," and you can't prove it, I guess, process-wise, you're in trouble, in theory. We'll see how it holds up and what the fines look like, but it sounds like it's substantially more onerous, from what we understand. Is that right? >> Yes, I would 100% agree. From an e-discovery standpoint, there's proportionality and what's reasonable relative to the cost of the discovery and things like that. I actually don't think that that is going to come into play with GDPR because the fines are so substantial. I don't know what would be considered unreasonable to go out and locate data. >> Zach, you have to help us end this on an up note.
(group laughs) >> Dave: Wait, I wanted to keep going into the abyss. (group laughs) We've talked about the exponential growth of data, and big data was supposed to be that bit-flip that turned "Oh my God, I need to store all of this" into "I need to be able to harness it and take advantage of it." Is GDPR an opportunity for customers, to not only get their arms around information, but extract new value from it? >> Absolutely. It's all about good data hygiene. It's about good information governance. It's about understanding where your most valuable assets are, focusing on those assets, and getting the most value you can from them. Get rid of the junk, you don't need that. It's just going to get you into trouble, and getting rid of it is what Veritas can help you do. >> So a lot of unknowns. I guess the message is, get your house in order, call some experts. I'd call a lot of experts, obviously Veritas. We had PWC on earlier today, and a number of folks in your ecosystem I'm sure can help. Guys, thanks very much for coming on theCube and scaring the crap out of us. (group laughs) >> Thanks a lot. >> Alright, keep it right there buddy, we'll be back for our wrap, right after this short break. (light electronic music)

Published Date : Sep 20 2017

AI for Good Panel - Precision Medicine - SXSW 2017 - #IntelAI - #theCUBE


 

>> Welcome to the Intel AI Lounge. Today, we're very excited to share with you the Precision Medicine panel discussion. I'll be moderating the session. My name is Kay Erin. I'm the general manager of Health and Life Sciences at Intel. And I'm excited to share with you these three panelists that we have here. First is John Madison. He is the chief medical information officer at Kaiser Permanente. We're very excited to have you here. Thank you, John. >> Thank you. >> We also have Naveen Rao. He is the VP and general manager for Artificial Intelligence Solutions at Intel. He's also the former CEO of Nervana, which was acquired by Intel. And we also have Bob Rogers, who's the chief data scientist at our AI solutions group. So, why don't we get started with our questions. I'm going to ask each of the panelists to talk, introduce themselves, as well as talk about how they got started with AI. So why don't we start with John? >> Sure, so can you hear me okay in the back? Can you hear? Okay, cool. So, I am a recovering evolutionary biologist and a recovering physician and a recovering geek. And I implemented the health record system for the first and largest region of Kaiser Permanente. And it's pretty obvious that most of the useful data in a health record lies in free text. So I started up a natural language processing team to be able to mine free text about a dozen years ago. So we can do things with that that you can't otherwise get out of health information. I'll give you an example. I read an article online from the New England Journal of Medicine about four years ago that said over half of all people who have had their spleen taken out were not properly vaccinated for a common form of pneumonia, and when your spleen's missing, you must have that vaccine or you die a very sudden death with sepsis. In fact, our medical director in Northern California's father died of that exact same scenario.
So, when I read the article, I went to my structured data analytics team and to my natural language processing team and said please show me everybody who has had their spleen taken out and hasn't been appropriately vaccinated, and we ran through about 20 million records in about three hours with the NLP team, and it took about three weeks with the structured data analytics team. That sounds counterintuitive but it actually happened that way. And it's not a competition for time only. It's a competition for quality and sensitivity and specificity. So we were able to identify all of our members who had their spleen taken out, who should've had a pneumococcal vaccine. We vaccinated them and there are a number of people alive today who otherwise would've died absent that capability. So people don't really commonly associate natural language processing with machine learning, but in fact, natural language processing relies heavily on and is the first really, highly successful example of machine learning. So we've done dozens of similar projects, mining free text data in millions of records very efficiently, very effectively. But it really helped advance the quality of care and reduce the cost of care. It's a natural step forward to go into the world of personalized medicine with the arrival of a 100-dollar genome, which is actually what it costs today to do a full genome sequence. Microbiomics, that is the ecosystem of bacteria that are in every organ of the body actually. And we know now that there is a profound influence of what's in our gut on how we metabolize drugs, what diseases we get. You can tell in a five-year-old whether or not they were born by a vaginal delivery or a C-section delivery by virtue of the bacteria in the gut five years later.
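As an aside, once the facts are extracted from the chart, the splenectomy check John describes reduces to a simple set query. The record layout and code values below are invented for illustration; the hard part at Kaiser was the free-text NLP extraction, which this sketch does not attempt:

```python
# Minimal sketch of the cohort query: members with a splenectomy on
# record but no pneumococcal vaccination. Field names and values are
# hypothetical, chosen only to make the logic concrete.
records = [
    {"id": 1, "procedures": {"splenectomy"}, "vaccines": {"pneumococcal"}},
    {"id": 2, "procedures": {"splenectomy"}, "vaccines": set()},
    {"id": 3, "procedures": {"appendectomy"}, "vaccines": set()},
]

def needs_vaccine(records):
    """IDs of members with a splenectomy but no pneumococcal vaccine."""
    return [
        r["id"] for r in records
        if "splenectomy" in r["procedures"]
        and "pneumococcal" not in r["vaccines"]
    ]

print(needs_vaccine(records))  # [2]
```

The same query runs unchanged whether the facts came from structured fields or from NLP over clinical notes, which is exactly why the extraction step carries most of the value.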
So if you look at the complexity of the data that exists in the genome, in the microbiome, in the health record with free text, and you look at all the other sources of data, like the streaming data from my wearable monitor (I'm part of a research study on Precision Medicine out of Stanford), there is a vast amount of disparate data, not to mention all the imaging, that really can collectively produce much more useful information to advance our understanding of science, and to advance our understanding of every individual. And then we can do the mash-up of a much broader range of science in health care with a much deeper sense of data from an individual, and to do that with structured questions and structured data is very yesterday. The only way we're going to be able to disambiguate those data and be able to operate on those data in concert and generate really useful answers from the broad array of data types and the massive quantity of data, is to let loose machine learning on all of those data substrates. So my team is moving down that pathway and we're very excited about the future prospects for doing that. >> Yeah, great. I think that's actually some of the things I'm very excited about in the future with some of the technologies we're developing. My background, I started actually being fascinated with computation in biological forms when I was nine. Reading and watching sci-fi, I was kind of a big dork, which I pretty much still am. I haven't really changed a whole lot. Just basically seeing that machines really aren't all that different from biological entities, right? We are biological machines, and kind of understanding how a computer works and how we engineer those things and trying to pull together concepts that learn from biology into that has always been a fascination of mine. As an undergrad, I was in the EE, CS world. Even then, I did some research projects around that.
I worked in the industry for about 10 years designing chips, microprocessors, various kinds of ASICs, and then actually went back to school, quit my job, got a Ph.D. in neuroscience, computational neuroscience, to specifically understand what's the state of the art. What do we really understand about the brain? And are there concepts that we can take and bring back? The inspiration's always been: we watch birds fly around, we want to figure out how to make something that flies, we extract those principles, and then build a plane. We don't necessarily want to build a bird. And so Nervana really was the combination of all those experiences, bringing it together. Trying to push computation in a new direction. Now, as part of Intel, we can really add a lot of fuel to that fire. I'm super excited to be part of Intel in that the technologies that we were developing can really proliferate and be applied to health care, can be applied to the Internet, can be applied to every facet of our lives. And some of the examples that John mentioned are extremely exciting right now and these are things we can do today. And the generality of these solutions is just really going to hit every part of health care. I mean, from a personal viewpoint, my whole family are MDs. I'm sort of the black sheep of the family. I don't have an MD. And it's always been kind of funny to me that knowledge is concentrated in a few individuals. Like you have a rare tumor or something like that, you need the guy who knows how to read this MRI. Why? Why is it like that? Can't we encapsulate that knowledge into a computer or into an algorithm, and democratize it? And the reason we couldn't do it is we just didn't know how. And now we're really getting to a point where we know how to do that. And so I want that capability to go to everybody. It'll bring the cost of healthcare down. It'll make all of us healthier. That affects everything about our society. So that's really what's exciting about it to me.
>> That's great. So, as you heard, I'm Bob Rogers. I'm chief data scientist for analytics and artificial intelligence solutions at Intel. My mission is to put powerful analytics in the hands of every decision maker and when I think about Precision Medicine, decision makers are not just doctors and surgeons and nurses, but they're also case managers and care coordinators and probably most of all, patients. So the mission is really to put powerful analytics and AI capabilities in the hands of everyone in health care. It's a very complex world and we need tools to help us navigate it. So my background, I started with a Ph.D. in physics and I was computer modeling stuff, falling into super massive black holes. And there's a lot of applications for that in the real world. No, I'm kidding. (laughter) >> John: There will be, I'm sure. Yeah, one of these days. Soon as we have time travel. Okay so, I actually, about 1991, I was working on my post doctoral research, and I heard about neural networks, these things that could compute the way the brain computes. And so, I started doing some research on that. I wrote some papers and actually, it was an interesting story. The problem that we solved that got me really excited about neural networks, which have become deep learning, my office mate would come in. He was this young guy who was about to go off to grad school. He'd come in every morning. "I hate my project." Finally, after two weeks, what's your project? What's the problem? It turns out he had to circle these little fuzzy spots on these images from a telescope. So they were looking for the interesting things in a sky survey, and he had to circle them and write down their coordinates all summer. Anyone want to volunteer to do that? No? Yeah, he was very unhappy. So we took the first two weeks of data that he created doing his work by hand, and we trained an artificial neural network to do his summer project and finished it in about eight hours of computing. 
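The workflow Bob describes, train on the first two weeks of hand labels, then let the model finish the rest of the job, is the classic supervised learning loop. A minimal sketch with a perceptron on synthetic data (the "sky survey" features and labels here are invented for illustration, not the actual telescope data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the sky survey: each cutout is summarized by two
# features (say, brightness and blur); label 1 means "interesting fuzzy spot".
X_labeled = rng.normal(0, 1, size=(200, 2))
y_labeled = (X_labeled[:, 0] + X_labeled[:, 1] > 0).astype(int)  # hand labels

# A minimal perceptron trained on the hand-labeled fortnight of data.
w = np.zeros(2)
b = 0.0
for _ in range(20):                      # a few passes over the labels
    for x, y in zip(X_labeled, y_labeled):
        pred = 1 if x @ w + b > 0 else 0
        w += (y - pred) * x              # classic perceptron update
        b += (y - pred)

# The trained model then labels the rest of the summer's data in one shot.
X_rest = rng.normal(0, 1, size=(1000, 2))
y_pred = (X_rest @ w + b > 0).astype(int)
accuracy = np.mean(y_pred == (X_rest[:, 0] + X_rest[:, 1] > 0))
print(f"accuracy on unlabeled data: {accuracy:.2f}")
```

The 1991-era network in the story would have been a small multilayer perceptron rather than this single-layer toy, but the division of labor is the same: humans label a small slice, the model labels the rest.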
(crowd laughs) And so he was like yeah, this is amazing. I'm so happy. And we wrote a paper. I was the first author of course, because I was the senior guy at age 24. And he was second author. His first paper ever. He was very, very excited. So we have to fast forward about 20 years. His name popped up on the Internet. And so it caught my attention. He had just won the Nobel Prize in physics. (laughter) So that's where artificial intelligence will get you. (laughter) So thanks, Naveen. Fast forwarding, I also developed some time series forecasting capabilities that allowed me to create a hedge fund that I ran for 12 years. After that, I got into health care, which really is the center of my passion. Applying it to health care, figuring out how to get all the data from all those siloed sources, put it into the cloud in a secure way, and analyze it so you can actually understand those cases that John was just talking about. How do you know that that person had had a splenectomy and that they needed to get that pneumovax? You need to be able to search all the data, so we used AI, natural language processing, machine learning, to do that, and then two years ago, I was lucky enough to join Intel and, in the intervening time, people like Naveen actually thawed the AI winter and we're really in a spring of amazing opportunities with AI, not just in health care but everywhere, but of course, the health care applications are incredibly life-saving and empowering, so, excited to be here on this stage with you guys. >> I just want to cue off of your comment about the role of physics in AI and health care. So the field of microbiomics that I referred to earlier, bacteria in our gut. There's more bacteria in our gut than there are cells in our body. There's 100 times more DNA in that bacteria than there is in the human genome. And we're now discovering a couple hundred species of bacteria a year that have never been identified under a microscope just by their DNA.
So it turns out the person who really catapulted the study and the science of microbiomics forward was an astrophysicist who did his Ph.D. in Stephen Hawking's lab on the collision of black holes and then subsequently, put the other team in a virtual reality, and he developed the first supercomputing center, and so how did he get an interest in microbiomics? He has the capacity to do high performance computing and the kind of advanced analytics that are required to look at 100 times the volume of the 3.2 billion base pairs of the human genome that are represented in the bacteria in our gut, and that has unleashed the whole science of microbiomics, which is going to really turn a lot of our assumptions of health and health care upside down. >> That's great, I mean, that's really transformational. So a lot of data. So I just wanted to let the audience know that we want to make this an interactive session, so I'll be asking for questions in a little bit, but I will start off with one question so that you can think about it. So I wanted to ask you, it looks like you've been thinking a lot about AI over the years. And I wanted to understand, even though AI's just really starting in health care, what are some of the new trends or the changes that you've seen in the last few years that'll impact how AI's being used going forward? >> So I'll start off. There was a paper published by a guy by the name of Tegmark at Harvard last summer that, for the first time, explained why neural networks are efficient beyond any mathematical model we predict. And the title of the paper's fun. It's called Deep Learning Versus Cheap Learning. So there were two sort of punchlines of the paper. One is that the reason that mathematics doesn't explain the efficiency of neural networks is because there's a higher order of mathematics called physics. And the physics of the underlying data structures determined how efficient you could mine those data using machine learning tools.
Much more so than any mathematical modeling. And so the second thing revealed in that paper is that the substrate of the data that you're operating on and the natural physics of those data have inherent levels of complexity that determine whether or not a 12-layer neural net will get you where you want to go really fast, because when you do the modeling, for those math geeks in the audience, it's a factorial. So if there's 12 layers, there's 12 factorial permutations of different ways you could sequence the learning through those data. When you have 140 layers of a neural net, it's a much, much, much bigger number of permutations, and so you end up being hardware-bound. And so, what Max Tegmark basically said is you can determine whether to do deep learning or cheap learning based upon the underlying physics of the data substrates you're operating on, and have a good insight into how to optimize your hardware and software approach to that problem. >> So another way to put that is that neural networks represent the world in the way the world is sort of built. >> Exactly. >> It's kind of hierarchical. It's funny because, sort of in retrospect, like oh yeah, that kind of makes sense. But when you're thinking about it mathematically, we're like well, anything... A neural net can represent any mathematical function; therefore, it's fully general. And that's the way we used to look at it, right? So now we're saying, well actually decomposing the world into different types of features that are layered upon each other is actually a much more efficient, compact representation of the world, right? I think this is actually, precisely the point of kind of what you're getting at. What's really exciting now is that what we were doing before was sort of building these bespoke solutions for different kinds of data. NLP, natural language processing.
There's a whole field, 25-plus years of people devoted to figuring out features, figuring out what structures make sense in this particular context. Those didn't carry over at all to computer vision. Didn't carry over at all to time series analysis. Now, with neural networks, we've seen it at Nervana, and now part of Intel, solving customers' problems. We apply a very similar set of techniques across all these different types of data domains and solve them. All data in the real world seems to be hierarchical. You can decompose it into this hierarchy. And it works really well. Our brains are actually general structures. As a neuroscientist, you can look at different parts of your brain and there are differences. Something that takes in visual information, versus auditory information, is slightly different, but they're much more similar than they are different. So there is something invariant, something very common between all of these different modalities, and we're starting to learn that. And this is extremely exciting to me, trying to understand the biological machine that is a computer, right? We're figuring it out, right? >> One of the really fun things that Ray Kurzweil likes to talk about, and it falls in the genre of biomimicry, is how we actually replicate biologic evolution in our technical solutions. If you look at it, we're beginning to understand more and more how real neural nets work in our cerebral cortex. And it's sort of a pyramid structure, so that a first pass over a broad base of analytics gets constrained at the next pass, which gets constrained at the next pass, which is how information is processed in the brain.
So we're discovering increasingly that what we've been evolving towards, in terms of architectures of neural nets, is approximating the architecture of the human cortex, and the more we understand the human cortex, the more insight we get into how to optimize neural nets. So when you think about it, with millions of years of evolution of how the cortex is structured, it shouldn't be a surprise that the optimization protocols, if you will, in our genetic code are profoundly efficient in how they operate. So there's a real role for looking at biologic evolutionary solutions, vis-a-vis technical solutions, and there's a friend of mine who worked with George Church at Harvard and actually published a book on biomimicry, and they wrote the book completely in DNA, so if all of you have your home DNA decoder, you can actually read the book on your DNA reader, just kidding. >> There's actually a start-up I just saw in the-- >> Read-Write DNA, yeah. >> Actually it's a... He writes something. What was it? (response from crowd member) Yeah, they're basically encoding information in DNA as a storage medium. (laughter) The company, right? >> Yeah, that same friend of mine who coauthored that biomimicry book in DNA also did the estimate of the density of information storage. So a cubic centimeter of DNA can store an exabyte of data. I mean that's mind-blowing. >> Naveen: Highly done soon. >> Yeah that's amazing. Also you hit upon a really important point there, that one of the things that's changed is... Well, there are two major things that have changed in my perception from, let's say, five to 10 years ago, when we were using machine learning. You could use data to train models and make predictions to understand complex phenomena. But they had limited utility, and the challenge was that if I'm trying to build on these things, I had to do a lot of work up front. It was called feature engineering.
I had to do a lot of work to figure out the key attributes of that data. What are the 10 or 20 or 100 pieces of information that I should pull out of the data to feed to the model, so that the model can turn it into a predictive machine? And so, what's really exciting about the new generation of machine learning technology, and particularly deep learning, is that it can actually learn those features from example data without you having to do any preprogramming. That's why Naveen is saying you can take the same sort of overall approach and apply it to a bunch of different problems. Because you're not having to fine-tune those features. So at the end of the day, the two things that have changed to really enable this evolution are access to more data, and I'd be curious to hear from you where you're seeing data come from, what are the strategies around that. So access to data, and I'm talking millions of examples. So 10,000 examples most times isn't going to cut it. But millions of examples will do it. And then, the other piece is the computing capability to actually take millions of examples and optimize this algorithm in a single lifetime. I mean, back in '91, when I started, we literally would have thousands of examples and it would take overnight to run the thing. So now in the world of millions, and you're putting together all of these combinations, the computing has changed a lot. I know you've made some revolutionary advances in that. But I'm curious about the data. Where are you seeing interesting sources of data for analytics? >> So I do some work in the genomics space, and there are more viable permutations of the human genome than there are people who have ever walked the face of the earth. And the polygenic determination of phenotypic expression, that is, what our genome does to us in our physical experience of health and disease, involves many, many genes, the interaction of many, many genes, and how they are up- and down-regulated.
And the complexity of disambiguating which 27 genes are affecting your diabetes, and how they are up- and down-regulated by different interventions, is going to be different than his. It's going to be different than his. And we already know that there are four or five distinct genetic subtypes of type II diabetes. So physicians still think there's one disease called type II diabetes. There are actually at least four or five genetic variants that have been identified. And so, when you start thinking about disambiguating, particularly when we still don't know what 95 percent of DNA does, what actually is the underlying cause, it will require this massive capability of developing these feature vectors, sometimes intuiting them, if you will, from the data itself. And other times, taking what's known knowledge to develop some of those feature vectors, and be able to really understand the interaction of the genome and the microbiome and the phenotypic data. So the complexity is high, and because the variation complexity is high, you do need these massive numbers. Now I'm going to make a very personal pitch here. So forgive me, but if any of you have any role in policy at all, let me tell you what's happening right now. The Genetic Information Nondiscrimination Act, so-called GINA, written by a friend of mine, passed a number of years ago, says that no one can be discriminated against for health insurance based upon their genomic information. That's cool. That should allow all of you to feel comfortable donating your DNA to science, right? Wrong. You are 100% unprotected from discrimination for life insurance, long-term care and disability. And it's being practiced legally today, and there's legislation in the House, in markup right now, to completely undermine the existing GINA legislation and say that whenever there's another applicable statute like HIPAA, GINA is irrelevant, and none of the fines and penalties are applicable at all.
So we need a ton of data to be able to operate on. We will not be getting a ton of data to operate on until we have the kind of protection we need to tell people, you can trust us. You can give us your data, you will not be subject to discrimination. And that is not the case today. And it's being further undermined. So I want to make a plea to any of you that have any policy influence to go after that, because we need this data to help the understanding of human health and disease, and we're not going to get it when people look behind the curtain and see that discrimination is occurring today based upon genetic information. >> Well, I don't like the idea of being discriminated against based on my DNA. Especially given how little we actually know. There's so much complexity in how these things unfold in our own bodies, that I think anything that's being done is probably childishly immature and oversimplifying. So it's pretty rough. >> I guess the translation here is that we're all unique. It's not just a Disney movie. (laughter) We really are. And I think one of the strengths of these new techniques, kind of going back to the original point, is that they go across different data types. It will actually allow us to learn more about the uniqueness of the individual. It's not going to be just from one data source. We're collecting data from many different modalities. We're collecting behavioral data from wearables. We're collecting things from scans, from blood tests, from the genome, from many different sources. The ability to integrate those into a unified picture, that's the important thing that we're getting toward now. That's what I think is going to be super exciting here. Think about it, right. I can tell you to visualize a coin, right? You can visualize a coin. Not only do you visualize it. You also know what it feels like. You know how heavy it is. You have a mental model of that from many different perspectives.
And if I take away one of those senses, you can still identify the coin, right? If I tell you to put your hand in your pocket, and pick out a coin, you probably can do that with 100% reliability. And that's because we have this generalized capability to build a model of something in the world. And that's what we need to do for individuals is actually take all these different data sources and come up with a model for an individual and you can actually then say what drug works best on this. What treatment works best on this? It's going to get better with time. It's not going to be perfect, because this is what a doctor does, right? A doctor who's very experienced, you're a practicing physician right? Back me up here. That's what you're doing. You basically have some categories. You're taking information from the patient when you talk with them, and you're building a mental model. And you apply what you know can work on that patient, right? >> I don't have clinic hours anymore, but I do take care of many friends and family. (laughter) >> You used to, you used to. >> I practiced for many years before I became a full-time geek. >> I thought you were a recovering geek. >> I am. (laughter) I do more policy now. >> He's off the wagon. >> I just want to take a moment and see if there's anyone from the audience who would like to ask, oh. Go ahead. >> We've got a mic here, hang on one second. >> I have tons and tons of questions. (crosstalk) Yes, so first of all, the microbiome and the genome are really complex. You already hit about that. Yet most of the studies we do are small scale and we have difficulty repeating them from study to study. How are we going to reconcile all that and what are some of the technical hurdles to get to the vision that you want? >> So primarily, it's been the cost of sequencing. Up until a year ago, it's $1000, true cost. Now it's $100, true cost. And so that barrier is going to enable fairly pervasive testing. 
It's not a real competitive market because there's one sequencer that is way ahead of everybody else. So the price is not $100 yet. The cost is below $100. So as soon as there's competition to drive the cost down, and hopefully, as soon as we all have the protection we need against discrimination, as I mentioned earlier, then we will have large enough sample sizes. And so, it is our expectation that we will be able to pool data from local sources. I chair the e-health work group at the Global Alliance for Genomics and Health, which is working on this very issue. And rather than pooling all the data into a single, common repository, the strategy, and we're developing our five-year plan in a month in London, but the goal is to have a federation of essentially credentialed data enclaves. That's a formal method. HHS already does that, so you can get credentialed to search all the data that Medicare has on people that's been deidentified according to HIPAA. So we want to provide the same kind of service, with appropriate consent, at an international scale. And there's a lot of nations that are talking very much about data nationality, so that you can't export data. So this approach of a federated model to get at data from all the countries is important. The other thing is that blockchain technology is going to be very profoundly useful in this context. So David Haussler of UC Santa Cruz is right now working on a protocol using an open blockchain public ledger, where you can publish. So for any typical cancer, you may have a half dozen of what are called somatic variants. Cancer is a genetic disease, so what has mutated to cause it to behave like a cancer? And if we look at those biologically active somatic variants, publish them on a blockchain that's public, there's not enough data there to reidentify the patient.
But if I'm a physician treating a woman with breast cancer, rather than say what's the protocol for treating a 50-year-old woman with this cell type of cancer, I can say show me all the people in the world who have had this cancer at the age of 50, with these exact six somatic variants. Find the 200 people worldwide with that. Ask them for consent through a secondary mechanism to donate everything about their medical record, pool that information for the cohort of 200 that exactly resembles the one sitting in front of me, and find out, of the 200 ways they were treated, what got the best results. And so, that's the kind of future where a distributed, federated architecture will allow us to query and obtain a very, very relevant cohort, so we can basically be treating patients like mine, sitting right in front of me. Same thing applies for establishing research cohorts. There's some very exciting stuff at the convergence of big data analytics, machine learning, and blockchain. >> And this is an area that I'm really excited about and I think we're excited about generally at Intel. They actually have something called the Collaborative Cancer Cloud, which is this kind of federated model. We have three different academic research centers. Each of them has a very sizable and valuable collection of genomic data with phenotypic annotations. So you know, pancreatic cancer, colon cancer, et cetera, and we've actually built a secure computing architecture that can allow a person who's given the right permissions by those organizations to ask a specific question of specific data without ever sharing the data. So the idea is my data's really important to me. It's valuable. I want us to be able to do a study that gets the number from the 20 pancreatic cancer patients in my cohort, up to the 80 that we have in the whole group. But I can't do that if I'm going to just spill my data all over the world. And there are HIPAA and compliance reasons for that.
There are business reasons for that. So what we've built at Intel is this platform that allows you to do different kinds of queries on this genetic data. And reach out to these different sources without sharing it. And then, the work that I'm really involved in right now and that I'm extremely excited about... This also touches on something that both of you said: it's not sufficient to just get the genome sequences. You also have to have the phenotypic data. You have to know what cancer they've had. You have to know that they've been treated with this drug and they've survived for three months, or that they had this side effect. That clinical data also needs to be put together. It's owned by other organizations, right? Other hospitals. So the broader generalization of the Collaborative Cancer Cloud is something we call the data exchange. And it's a misnomer in a sense that we're not actually exchanging data. We're doing analytics on aggregated data sets without sharing it. But it really opens up a world where we can have huge populations and big enough amounts of data to actually train these models and draw the thread in. Of course, that really then hits home for the techniques that Nervana is bringing to the table, and of course-- >> Stanford's one of your academic medical centers? >> Not for that Collaborative Cancer Cloud. >> The reason I mentioned Stanford is because the reason I'm wearing this FitBit is because I'm a research subject in Mike Snyder's study, he's the chair of genetics at Stanford, called iPOP, integrative personal omics profile. So I was fully sequenced five years ago and I get four full microbiomes. My gut, my mouth, my nose, my ears. Every three months, and I've done that for four years now. And about a pint of blood. And so, to your question of the density of data, a lot of the problem with applying these techniques to health care data is that it's basically a sparse matrix and there's a lot of discontinuities in what you can find and operate on.
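The sparsity John mentions is easy to see: in longitudinal health data, most patient-by-measurement cells are simply never observed. A toy illustration with synthetic values (the counts are invented, not iPOP data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Rows are patients, columns are possible measurements; most cells are
# missing because few patients are measured on most things, most of the time.
n_patients, n_signals = 1000, 500
data = np.full((n_patients, n_signals), np.nan)

# Each patient gets values for only a small random subset of signals.
for i in range(n_patients):
    observed = rng.choice(n_signals, size=15, replace=False)
    data[i, observed] = rng.normal(0, 1, size=15)

density = np.count_nonzero(~np.isnan(data)) / data.size
print(f"fraction of cells observed: {density:.3f}")  # 15/500 = 0.030
```

A dense longitudinal study like the one described fills in whole rows every quarter, which is exactly what makes it so much easier to model than routine clinical records.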
So what Mike is doing with the iPOP study is much the same as you described. Creating a highly dense longitudinal set of data that will help us mitigate the sparse matrix problem. (low volume response from audience member) Pardon me. >> What's that? (low volume response) (laughter) >> Right, okay. >> John: Lost the stool sample. That's got to be a new one I've heard now. >> Okay, well, thank you so much. That was a great question. So I'm going to repeat this and ask if there's another question. You want to go ahead? >> Hi, thanks. So I'm a journalist and I report a lot on these neural networks, a system that's better at reading mammograms than human radiologists. Or a system that's better at predicting which patients in the ICU will get sepsis. These sorts of fascinating academic studies that I don't really see being translated very quickly into actual hospitals or clinical practice. Seems like a lot of the problems are regulatory, or liability, or human factors, but how do you get past that and really make this stuff practical? >> I think there's a few things that we can do there, and I think the proof points of the technology are really important to start with in this specific space. In other places, sometimes, you can start with other things. But here, there's a real confidence problem when it comes to health care, and for good reason. We have doctors trained for many, many years. School and then residencies and other kinds of training. Because we are really, really conservative with health care. So we need to make sure that the technology's well beyond just the paper, right? These papers are proof points. They get people interested. They even fuel entire grant cycles sometimes. And that's what we need to happen. It's just an inherent problem; it's going to take a while to get those things to a point where it's like, well, I really do trust what this is saying. And I really think it's okay to now start integrating that into our standard of care.
I think that's where you're seeing it. It's frustrating for all of us, believe me. I mean, like I said, I think personally one of the biggest things, I want to have an impact. Like when I go to my grave, is that we used machine learning to improve health care. We really do feel that way. But it's just not something we can do very quickly and as a business person, I don't actually look at those use cases right away because I know the cycle is just going to be longer. >> So to your point, the FDA, for about four years now, has understood that the process that has been given to them by their board of directors, otherwise known as Congress, is broken. And so they've been very actively seeking new models of regulation and what's really forcing their hand is regulation of devices and software because, in many cases, there are black box aspects of that and there's a black box aspect to machine learning. Historically, Intel and others are making inroads into providing some sort of traceability and transparency into what happens in that black box rather than say, overall we get better results but once in a while we kill somebody. Right? So there is progress being made on that front. And there's a concept that I like to use. Everyone knows Ray Kurzweil's book The Singularity Is Near? Well, I like to think that diadarity is near. And the diadarity is where you have human transparency into what goes on in the black box and so maybe Bob, you want to speak a little bit about... You mentioned that, in a prior discussion, that there's some work going on at Intel there. >> Yeah, absolutely. So we're working with a number of groups to really build tools that allow us... In fact Naveen probably can talk in even more detail than I can, but there are tools that allow us to actually interrogate machine learning and deep learning systems to understand, not only how they respond to a wide variety of situations but also where are there biases? 
I mean, one of the things that's shocking is that if you look at the clinical studies that our drug safety rules are based on, 50-year-old white guys are the peak of that distribution, which I don't see any problem with, but some of you out there might not like that if you're taking a drug. So yeah, we want to understand what are the biases in the data, right? And so, there are some new technologies. There are actually some very interesting data-generative technologies. And this is something I'm also curious what Naveen has to say about: that you can generate, from small sets of observed data, much broader sets of varied data that help probe and fill in your training for some of these systems that are very data dependent. So that takes us to a place where we're going to start to see deep learning systems generating data to train other deep learning systems. And they start to sort of go back and forth and you start to have some very nice ways to, at least, expose the weaknesses of these underlying technologies. >> And that feeds back to your question about regulatory oversight of this. And there's the fascinating, but little known, origin of why very few women are in clinical studies. Thalidomide causes birth defects. So rather than say pregnant women can't be enrolled in drug trials, they said any woman who is at risk of getting pregnant cannot be enrolled. So there was actually a scientifically meritorious argument back in the day, when they really didn't know what was going to happen post-thalidomide. So it turns out that the adverse, unintended consequence of that decision was we don't have data on women, and we know in certain drugs, like Xanax, that the metabolism is so much slower that the typical dosing of Xanax for women should be less than half of that for men. And a lot of women have had very serious adverse effects by virtue of the fact that they weren't studied. So the point I want to illustrate with that is that regulatory cycles...
So people have known for a long time that was like a bad way of doing regulations. It should be changed. It's only recently getting changed in any meaningful way. So regulatory cycles and legislative cycles are incredibly slow. The rate of growth in technology is exponential. And so there's an impedance mismatch between the cycle time for regulation and the cycle time for innovation. And what we need to do... I'm working with the FDA. I've done four workshops with them on this very issue. Is that they recognize that they need to completely revitalize their process. They're very interested in doing it. They're not resisting it. People think, oh, they're bad, the FDA, they're resisting. Trust me, there's nobody on the planet who wants to revise these review processes more than the FDA itself. And so they're looking at models and what I recommended is global cloud sourcing, and the FDA could shift from a regulatory role to one of doing two things, assuring the people who do their reviews are competent, and assuring that their conflicts of interest are managed, because if you don't have a conflict of interest in this very interconnected space, you probably don't know enough to be a reviewer. So there has to be a way to manage the conflict of interest, and I think those are some of the key points that the FDA is wrestling with, because there's type one and type two errors. If you underregulate, you end up with another thalidomide and people born without fingers. If you overregulate, you prevent life-saving drugs from coming to market. So striking that balance across all these different technologies is extraordinarily difficult. If it were easy, the FDA would've done it four years ago. It's very complicated. >> Jumping on that question, so all three of you are in some ways entrepreneurs, right? Within your organization or started companies.
And I think it would be good to talk a little bit about the business opportunity here, where there's a huge ecosystem in health care, different segments, biotech, pharma, insurance payers, etc. Where do you see the ripe opportunity, the industry ready to really take this on and make AI the competitive advantage? >> Well, the last question also included why aren't you using the result of the sepsis detection? We do. There were six or seven published ways of doing it. We did our own data, looked at it, we found a way that was superior to all the published methods and we apply that today, so we are actually using that technology to change clinical outcomes. As far as where the opportunities are... So it's interesting. Because if you look at what's going to be here in three years, we're not going to be using those big data analytics models for sepsis that we are deploying today, because we're just going to be getting a tiny aliquot of blood, looking for the DNA or RNA of any potential infection, and we won't have to infer that there's a bacterial infection from all these other ancillary, secondary phenomena. We'll see if the DNA's in the blood. So things are changing so fast that the opportunities people need to look for are the generalizable and sustainable kinds of wins that are going to lead to a revenue cycle that justifies venture capital investing. So there's a lot of interesting opportunities in the space. But I think some of the biggest opportunities relate to what Bob has talked about in bringing many different disparate data sources together and really looking for things that are not comprehensible in the human brain or in traditional analytic models. >> I think we've also got to look a little bit beyond direct care. We're talking about policy and how we set up standards, these kinds of things. That's one area. That's going to drive innovation forward. I completely agree with that. Direct care is one piece.
How do we scale out many of the knowledge kinds of things that are embedded into one person's head and get them out to the world, democratize that. Then there's also development. The underlying technologies of medicine, right? Pharmaceuticals. The traditional way that pharmaceuticals are developed is actually kind of funny, right? A lot of it was started just by chance. Penicillin, a very famous story, right? It's not that different today, unfortunately, right? It's conceptually very similar. Now we've got more science behind it. We talk about domains and interactions, these kinds of things, but fundamentally, the problem is what we in computer science call NP hard. It's too difficult to model. You can't solve it analytically. And this is true for all these kinds of natural sorts of problems, by the way. And so there's a whole field around this, molecular dynamics and modeling these sorts of things, that is actually being driven forward by these AI techniques. Because it turns out, our brain doesn't do magic. It actually doesn't solve these problems. It approximates them very well. And experience allows you to approximate them better and better. Actually, it goes a little bit to what you were saying before. It's like simulations and forming your own networks and training off each other. There are these emerging dynamics. You can simulate steps of physics. And you come up with a system that's much too complicated to ever solve. Three pool balls on a table is one such system. It seems pretty simple. You know how to model that, but it actually turns out you can't predict where a ball's going to be once you inject some energy into that table. So something that simple is already too complex. So neural network techniques actually allow us to start making those tractable. These NP hard problems. And things like molecular dynamics and actually understanding how different medications and genetics will interact with each other is something we're seeing today.
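A toy illustration of why even "three pool balls" defeats exact prediction (my example, not the panel's): the logistic map is about the simplest nonlinear system there is, and two trajectories that start one part in a billion apart become completely different within a few dozen steps.

```python
# Sensitivity to initial conditions in a simple chaotic system.
# Two starting points differ by one part in a billion; after a few
# dozen iterations of the logistic map x -> 4x(1-x) they bear no
# resemblance to each other, so long-range prediction is hopeless.

def logistic_trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 60)
b = logistic_trajectory(0.2 + 1e-9, 60)

divergence = [abs(x - y) for x, y in zip(a, b)]
print(f"gap after 1 step: {divergence[1]:.2e}")
print(f"max gap over 60 steps: {max(divergence):.3f}")
```

The gap roughly doubles every step, which is exactly the "approximate, don't solve" regime where learned models earn their keep.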
And so I think there's a huge opportunity there. We've actually worked with customers in this space. And I'm seeing it. Like Roche is acquiring a few different companies in this space. They really want to drive it forward, using big data to drive drug development. It's kind of counterintuitive. I never would've thought it had I not seen it myself. >> And there's a big related challenge. Because in personalized medicine, there are smaller and smaller cohorts of people who will benefit from a drug that still takes two billion dollars on average to develop. That is unsustainable. So there's an economic imperative of overcoming the cost and the cycle time for drug development. >> I want to take a go at this question a little bit differently, thinking about not so much where are the industry segments that can benefit from AI, but what are the kinds of applications that I think are most impactful. So if this is what a skilled surgeon needs to know at a particular time to care properly for a patient, this is where most, this area here, is where most surgeons are. They are close to the maximum knowledge and ability to assimilate as they can be. So it's possible to build complex AI that can pick up on that one little thing and move them up to here. But it's not a gigantic accelerator, amplifier of their capability. But think about other actors in health care. I mentioned a couple of them earlier. Who do you think the least trained actor in health care is? >> John: Patients. >> Yes, the patients. The patients are really very poorly trained, including me. I'm abysmal at figuring out who to call and where to go. >> Naveen: You know as much as the doctor, right? (laughing) >> Yeah, that's right. >> My doctor friends always hate that. Know your diagnosis, right? >> Yeah, Dr. Google knows.
So the opportunities that I see that are really, really exciting are when you take an AI agent, like sometimes I like to call it a contextually intelligent agent, or a CIA, and apply it to a problem where a patient has a complex future ahead of them that they need help navigating. And you use the AI to help them work through it. Post operative. You've got PT. You've got drugs. You've got to be looking for side effects. An agent can actually help you navigate. It's like your own personal GPS for health care. So it's giving you the information that you need about you for your care. That's my definition of Precision Medicine. And it can include genomics, of course. But it's much bigger. It's that broader picture, and I think that a sort of agent way of thinking about things and filling in the gaps where there's less training and more opportunity, is very exciting. >> Great start up idea right there by the way. >> Oh yes, right. We'll meet you all out back for the next start up. >> I had a conversation with the head of the American Association of Medical Specialties just a couple of days ago. And what she was saying, and I'm aware of this phenomenon, but all of the medical specialists are saying, you're killing us with these stupid board recertification trivia tests that you're giving us. So if you're a cardiologist, you have to remember something that happens in one in 10 million people, right? And they're saying that's irrelevant anymore, because we've got advanced decision support coming. We have these kinds of analytics coming. Precisely what you're saying. So it's human augmentation of decision support that is coming at blazing speed towards health care. So in that context, it's much more important that you have a basic foundation, you know how to think, you know how to learn, and you know where to look. So we're going to be human-augmented learning systems much more so than in the past. And so the whole recertification process is being revised right now.
(inaudible audience member speaking) Speak up, yeah. (person speaking) >> What makes it fathomable is that you can-- (audience member interjects inaudibly) >> Sure. She was saying that our brain is really complex and large and even our brains don't know how our brains work, so... are there ways to-- >> What hope do we have kind of thing? (laughter) >> It's a metaphysical question. >> It circles all the way down, exactly. It's a great quote. I mean basically, you can decompose every system. Every complicated system can be decomposed into simpler, emergent properties. You lose something perhaps with each of those, but you get enough to actually understand most of the behavior. And that's really how we understand the world. And that's what we've learned in the last few years what neural network techniques can allow us to do. And that's why our brain can understand our brain. (laughing) >> Yeah, I'd recommend reading Chris Farley's last book because he addresses that issue in there very elegantly. >> Yeah we're seeing some really interesting technologies emerging right now where neural network systems are actually connecting other neural network systems in networks. You can see some very compelling behavior because one of the things I like to distinguish AI versus traditional analytics is we used to have question-answering systems. I used to query a database and create a report to find out how many widgets I sold. Then I started using regression or machine learning to classify complex situations from this is one of these and that's one of those. And then as we've moved more recently, we've got these AI-like capabilities like being able to recognize that there's a kitty in the photograph. But if you think about it, if I were to show you a photograph that happened to have a cat in it, and I said, what's the answer, you'd look at me like, what are you talking about? I have to know the question. 
So where we're cresting with these connected sets of neural systems, and with AI in general, is that the systems are starting to be able to, from the context, understand what the question is. Why would I be asking about this picture? I'm a marketing guy, and I'm curious about what Legos are in the thing or what kind of cat it is. So it's being able to ask a question, and then take these question-answering systems and actually apply them, so it's this ability to understand context and ask questions that we're starting to see emerge from these more complex hierarchical neural systems. >> There's a person dying to ask a question. >> Sorry. You have hit on several different topics that all coalesce together. You mentioned personalized models. You mentioned AI agents that could help you as you're going through a transitionary period. You mentioned data sources, especially across long time periods. Who today has access to enough data to make meaningful progress on that, not just when you're dealing with an issue, but day-to-day improvement of your life and your health? >> Go ahead, great question. >> That was a great question. And I don't think we have a good answer to it. (laughter) I'm sure John does. Well, I think every large healthcare organization and various healthcare consortiums are working very hard to achieve that goal. The problem remains in creating semantic interoperability. So I spent a lot of my career working on semantic interoperability. And the problem is that if you don't have well-defined, or self-defined data, and if you don't have well-defined and documented metadata, and you start operating on it, it's real easy to reach false conclusions, and I can give you a classic example. It's well known, with hundreds of studies looking at when you give an antibiotic before surgery and how effective it is in preventing a post-op infection. Simple question, right?
So most of the literature done prospectively was done in institutions where they had small sample sizes. So if you pool that, you get a little bit more noise, but you get a more confirming answer. What was done at a very large, not my own, but a very large institution... I won't name them for obvious reasons, but they pooled lots of data from lots of different hospitals, where the data definitions and the metadata were different. Two examples. When did they indicate the antibiotic was given? Was it when it was ordered, dispensed from the pharmacy, delivered to the floor, brought to the bedside, put in the IV, or the IV starts flowing? Different hospitals used a different metric of when it started. When did surgery occur? When they were wheeled into the OR, when they were prepped and draped, when the first incision occurred? All different. And they concluded quite dramatically that it didn't matter when you gave the pre-op antibiotic for whether or not you get a post-op infection. And everybody who was intimate with the prior studies just completely ignored and discounted that study. It was wrong. And it was wrong because of the lack of commonality and the normalization of data definitions and metadata definitions. So because of that, this problem is much more challenging than you would think. If it were so easy as to put all these data together and operate on it, normalize and operate on it, we would've done that a long time ago. It's... Semantic interoperability remains a big problem and we have a lot of heavy lifting ahead of us. I'm working with the Global Alliance for Genomics and Health, for example. There are like 30 different major ontologies for how you represent genetic information. And different institutions are using different ones in different ways, in different versions, over different periods of time. That's a mess. >> Are all those issues applicable when you're talking about a personalized data set versus a population?
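The pooling pitfall in the antibiotic-timing story above can be reproduced in a few lines. This is a hypothetical simulation, with all numbers invented for illustration: the outcome depends on the true lead time, but each hospital timestamps "antibiotic given" at a different point in the workflow, so the pooled data set mixes incompatible definitions and the signal fades.

```python
# Hypothetical illustration: within one hospital the recorded lead time
# tracks the true lead time exactly, so the correlation with outcome is
# strong; pooled across hospitals with different timestamp definitions
# (modeled as fixed offsets), the same correlation is badly attenuated.
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

random.seed(0)
OFFSETS = {"hospital_a": 0.0, "hospital_b": 180.0}  # minutes; when "given" is stamped

records = []
for hospital, offset in OFFSETS.items():
    for _ in range(1000):
        true_lead = random.uniform(0, 60)                    # real lead time before incision
        outcome = -0.01 * true_lead + random.gauss(0, 0.05)  # toy infection-risk score
        records.append((hospital, true_lead + offset, outcome))

within = pearson([r[1] for r in records if r[0] == "hospital_a"],
                 [r[2] for r in records if r[0] == "hospital_a"])
pooled = pearson([r[1] for r in records], [r[2] for r in records])
print(f"within one hospital: r = {within:.2f}")
print(f"pooled across definitions: r = {pooled:.2f}")
```

The offsets add variance to the recorded times that has nothing to do with the outcome, which is exactly why the pooled study "found" no effect.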
>> Well, so N of 1 studies and single-subject research is an emerging field of statistics. So there are some really interesting new models like stepped-wedge analytics for doing that on small sample sizes, recruiting people asynchronously. There's single-subject research statistics. You compare yourself with yourself at a different point in time, in a different context. So there are emerging statistics to do that and as long as you use the same sensor, you won't have a problem. But people are changing their remote sensors and you're getting different data. It's measured in different ways with different sensors at different normalization and different calibration. So yes. It even persists in the N of 1 environment. >> Yeah, you have to get started with a large N that you can apply to the N of 1. I'm actually going to attack your question from a different perspective. So who has the data? The millions of examples to train a deep learning system from scratch. It's a very limited set right now. Technologies such as the Collaborative Cancer Cloud and The Data Exchange are definitely impacting that and creating larger and larger sets of critical mass. And again, notwithstanding the very challenging semantic interoperability questions. But there's another opportunity. Kay asked about what's changed recently. One of the things that's changed in deep learning is that we now have modules that have been trained on massive data sets that are actually very smart at certain kinds of problems. So, for instance, you can go online and find deep learning systems that actually can recognize, better than humans, whether there's a cat, dog, motorcycle, house, in a photograph. >> From Intel, open source. >> Yes, from Intel, open source. So here's what happens next. Because most of that deep learning system is very expressive. That combinatorial mixture of features that Naveen was talking about, when you have all these layers, there's a lot of features there.
They're actually very general to images, not just finding cats, dogs, trees. So what happens is you can do something called transfer learning, where you take a small or modest data set and actually reoptimize it for your specific problem very, very quickly. And so we're starting to see a place where you can... On one end of the spectrum, we're getting access to the computing capabilities and the data to build these incredibly expressive deep learning systems. And over here on the right, we're able to start using those deep learning systems to solve custom versions of problems. Just last weekend or two weekends ago, in 20 minutes, I was able to take one of those general systems and create one that could recognize all different kinds of flowers. Very subtle distinctions, that I would never be able to know on my own. But I happened to be able to get the data set and literally, it took 20 minutes and I have this vision system that I could now use for a specific problem. I think that's incredibly profound and I think we're going to see this spectrum of wherever you are in your ability to get data and to define problems and to put hardware in place to see really neat customizations and a proliferation of applications of this kind of technology. >> So one other trend I think, I'm very hopeful about it... So this is a hard problem clearly, right? I mean, getting data together, formatting it from many different sources, it's one of these things that's probably never going to happen perfectly. But one trend that is extremely hopeful to me is the fact that the cost of gathering data has precipitously dropped. Building that thing is almost free these days. I can write software and put it on 100 million cell phones in an instant. You couldn't do that five years ago, even, right? And so, the amount of information we can gain from a cell phone today has gone up. We have more sensors. We're bringing online more sensors.
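The transfer-learning recipe Bob describes above (keep a general pretrained feature extractor frozen, retrain only a small head on a modest data set) can be sketched in miniature. This is a toy illustration, not any production pipeline: the "pretrained" layer here is just a fixed nonlinear map standing in for real pretrained weights, and the data is synthetic.

```python
# Toy transfer learning: a frozen "pretrained" featurizer plus a small
# trainable head, optimized on a small task-specific data set.
import math, random

random.seed(1)

# Frozen "pretrained" layer: a fixed nonlinear map from 2 inputs to 4 features.
W_FROZEN = [[0.5, 0.3], [-0.4, 0.6], [0.2, -0.7], [0.8, 0.1]]

def features(x):
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W_FROZEN]

def predict(head, bias, x):
    z = sum(h * f for h, f in zip(head, features(x))) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Small labeled data set for the new task: two Gaussian blobs.
data = [([random.gauss(1, 0.5), random.gauss(1, 0.5)], 1) for _ in range(50)] + \
       [([random.gauss(-1, 0.5), random.gauss(-1, 0.5)], 0) for _ in range(50)]

# Train only the head (logistic regression on the frozen features).
head, bias = [0.0] * 4, 0.0
for _ in range(300):
    for x, y in data:
        f = features(x)
        grad = predict(head, bias, x) - y
        head = [h - 0.1 * grad * fi for h, fi in zip(head, f)]
        bias -= 0.1 * grad

correct = sum((predict(head, bias, x) > 0.5) == (y == 1) for x, y in data)
print(f"training accuracy with a frozen backbone: {correct / len(data):.2f}")
```

Only four weights and a bias are learned, which is why this kind of reoptimization can take minutes instead of the weeks a from-scratch training run would.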
People have Apple Watches and they're sending blood data back to the phone, so once we can actually start gathering more data and doing it cheaper and cheaper, it actually doesn't matter where the data is. I can write my own app. I can gather that data and I can start driving the correct inferences, or useful inferences, back to you. So that is a positive trend I think here and personally, I think that's how we're going to solve it, is by gathering from that many different sources cheaply. >> Hi, my name is Pete. I've very much enjoyed the conversation so far, but I was hoping perhaps to bring a little bit more focus into Precision Medicine and ask two questions. Number one, how have you applied the AI technologies, as they're emerging so rapidly, to natural language processing? I'm particularly interested in, if you look at things like Amazon Echo or Siri, or the other voice recognition systems that are based on AI, they've just become incredibly accurate, and I'm interested in specifics about how I might use technology like that in medicine. So where would I find a medical nomenclature and perhaps some reference to a back end that works that way? And the second thing is, what specifically is Intel doing, or making available? You mentioned some open source stuff on cats and dogs and stuff, but I'm the doc, so I'm looking at the medical side of that. What are you guys providing that would allow us who are kind of geeks on the software side, as well as being docs, to experiment a little bit more thoroughly with AI technology? Google has a free AI toolkit. Several other people have come out with free AI toolkits in order to accelerate that. There's special hardware now with graphics and different processors hitting amazing speeds. And so I was wondering, where do I go in Intel to find some of those tools, and perhaps learn a bit about the fantastic work that you guys are already doing at Kaiser?
>> Let me take that first part and then we'll be able to talk about the MD part. So in terms of technology, this is what's extremely exciting now about what Intel is focusing on. We're providing those pieces. So you can actually assemble and build the application. How you build that application specific for MDs and the use cases is up to you or the one who's building out the application. But we're going to power that technology from multiple perspectives. So Intel is already the main force behind the data center, right? Cloud computing, all of this is already Intel. We're making that extremely amenable to AI and setting the standard for AI in the future, and we can do that through a number of different mechanisms. For somebody who wants to develop an application quickly, we have hosted solutions. Intel Nervana is kind of the brand for these kinds of things. Hosted solutions will get you going very quickly. Once you get to a certain level of scale, where costs start making more sense, things can be bought on premise. We're supplying that. We're also supplying software that makes that transition essentially free. Then there's taking those solutions that you develop in the cloud, or develop in the data center, and actually deploying them on device. You want to write something on your smartphone or PC or whatever. We're actually providing those hooks as well, so we want to make it very easy for developers to take these pieces and actually build solutions out of them quickly, so you probably don't even care what hardware it's running on. You're like, here's my data set, this is what I want to do. Train it, make it work. Go fast. Make my developers efficient. That's all you care about, right? And that's what we're doing. We're looking at how we best do that, and we're going to provide those technologies. In the next couple of years, there's going to be a lot of new stuff coming from Intel. >> Do you want to talk about AI Academy as well?
>> Yeah, that's a great segue there. In addition to this, we have an entire set of tutorials and other online resources and things we're going to be bringing into the academic world for people to get going quickly. So that's not just enabling them on our tools, but also just general concepts. What is a neural network? How does it work? How does it train? All of these things are available now and we've made a nice, digestible class format that you can actually go and play with. >> Let me give a couple of quick answers in addition to the great answers already. So you're asking why can't we use medical terminology and do what Alexa does? Well, no, you may not be aware of this, but Andrew Ng, who was the AI guy at Google and was then recruited away, they have a medical chat bot in China today. I don't speak Chinese. I haven't been able to use it yet. There are two similar initiatives in this country that I know of. There's probably a dozen more in stealth mode. But Lumiata and Health Cap are doing chat bots for health care today, using medical terminology. You have the compound problem of semantic normalization within language, compounded by a cross-language problem. I've done a lot of work with an international organization called SNOMED, which translates medical terminology. So you're aware of that. We can talk offline if you want, because I'm pretty deep into the semantic space. >> Go google Intel Nervana and you'll see all the websites there. It's intel.com/ai or nervanasys.com. >> Okay, great. Well this has been fantastic. I want to, first of all, thank all the people here for coming and asking great questions. I also want to thank our fantastic panelists today. (applause) >> Thanks, everyone. >> Thank you. >> And lastly, I just want to share one bit of information. We will have more discussions on AI next Tuesday at 9:30 AM. Diane Bryant, who is our general manager of Data Centers Group, will be here to do a keynote. So I hope you all get to join that.
Thanks for coming. (applause) (light electronic music)

Published Date : Mar 12 2017


Ron Bianchini | Google Next 2017


 

>> Is about what our youth is, and who we are today as a country, as a universe. >> Narrator: Congratulations, Reggie Jackson. You are Cube alumni. (gentle music) Live from Silicon Valley it's The Cube covering Google Cloud Next '17. (upbeat music) >> Hi, welcome back to The Cube's coverage of Google Next 2017 happening in San Francisco. We're shooting live from our 4,500 square foot studio here in Palo Alto, in the heart of SiliconANGLE. Happy to welcome back to the program, I guess we haven't had him for a little while, but one that we know quite well, Ron Bianchini, who's the CEO of Avere. Thanks for joining us. >> Thanks for having me. >> All right, so Ron, for our audience, why don't you give us the update? What's happening with Avere the company itself, and what brings you guys here? No offense, but I think of you guys as an infrastructure company. How does cloud fit into the whole discussion you guys are having, and your customers? >> That's great, great segue. So, we started out as an infrastructure company and really what Avere learned to do, our whole IP, actually let me start this way. We started in 2008. Think about where the world was in 2008. People were trying to figure out how to get flash into the data center. And what we did is we came up with a storage system, a NAS server, that knew about two types of storage. It knew that there was very high performance, nearby, precious flash storage, and that big bulk storage, much cheaper disk at 1/10 the price, was a high latency away. And we were able to take that solution, and we started out in the data center, we went after very high performance applications, but showed how you could do it at very low cost. >> It's great, nine years later, I mean, storage is infinite and free, right? >> That's right. The good news for us is the world is very much in the same place. The cost delta between flash and disk has stayed at 10 to one.
Both have gotten a lot less expensive, but that difference between the two has stayed. It turns out a solution that knows how to use local, high performance flash and store big bulk data a high latency away is an ideal solution for the cloud. And really, what we're doing now is helping customers that are in the data center, in the enterprise data center, we're helping them adopt cloud. And it works two ways. We support the gateway model, where you can keep your compute on-prem and put your big bulk storage in the cloud, and we enable that model without seeing any delta, any change in performance or availability. But we also do the opposite of that, we enable customers to put their compute out in the cloud, and now the big bulk capacity could either stay in the cloud or could be on-prem. So really, I think about us as an enterprise data center play, just like you said, but now we're helping customers take baby steps and slowly adopt the cloud. >> All right, so terms I heard from Google this week, they talked about building the planetary-scale computer. They talked about Google Spanner, which gives us time synchronization across the globe, the things that those of us with storage backgrounds, it's like, boy, these are big, heavy challenges. >> Ron: Big words, right. >> Talking about some of the things, just physics, that we try to figure all that out. So, how do you guys fit into that? I mean, doesn't GFS, Google File System, solve all these issues for us?
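The two-tier model Ron describes, a small amount of fast, precious flash close to the compute with big, cheap, high-latency bulk storage behind it, can be sketched as a simple read-through cache. This is a toy illustration of the idea, not Avere's actual implementation; the class and its behavior are assumptions for the sketch:

```python
from collections import OrderedDict

class TieredStore:
    """Read-through cache: a small flash tier in front of big bulk storage."""

    def __init__(self, flash_capacity, bulk):
        self.flash = OrderedDict()  # fast, precious, limited capacity
        self.bulk = bulk            # cheap, big, a high latency away
        self.capacity = flash_capacity

    def read(self, key):
        if key in self.flash:
            # Flash hit: serve at local-flash speed.
            self.flash.move_to_end(key)
            return self.flash[key]
        # Flash miss: pay the bulk-storage latency once,
        # then promote the hot data into flash.
        value = self.bulk[key]
        self.flash[key] = value
        if len(self.flash) > self.capacity:
            # Evict the least-recently-used block back to bulk-only.
            self.flash.popitem(last=False)
        return value
```

The point of the sketch is that hot working-set data lives in the 10x-more-expensive tier while the bulk of the capacity stays on the cheap tier, which is the same economics whether the bulk tier is on-prem disk or cloud object storage.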
And really, GFS is all about availability and partition tolerance, because they have big, scalable solutions. What it doesn't give you is exact consistency, and that's what NAS solutions do give you. NAS solutions are really high consistency, still partition tolerant because you have big, distributed-scale systems, but you don't get that high availability piece. And it turns out, in the enterprise there are times when you need high availability storage, that's what you get from Google's file system, but then there are also times you need high consistency storage, and that's what you get from an Avere solution. Imagine a bank account where you deposited a million dollars, and then you withdraw a million dollars from two locations, maybe 10 seconds apart. If you don't have a high consistency model, it might be possible to withdraw money from both places. That's what NAS guards against. >> Ron, I want to get your viewpoint, I'm sure you talk to a lot of your customers. What's their mindset on cloud today, and what are the kinds of conversations that you're having with people who stop by your booth at Moscone West? >> I think you said it right, Google is proposing big, scalable, huge features that the customers are trying to get access to, but moving everything from the data center into the cloud all at once to get them is a big, scary step. And so really what we enable people to do is to take baby steps. If you want to move a little bit of your capacity to the cloud, or petabytes of storage to the cloud, like one of our genomics customers does, you could do that. Your compute, and a lot of where they start in storage, stays on-prem, but now they're leveraging the cloud for big, scalable capacity. Then we have other customers that want access to the compute and the performance and the scaling you can get. We allow them to get access to that as well. >> Any commentary on, I think about just the trend itself. There's no doubt how big cloud is and how fast it's growing.
When we look on the data side, Diane Greene threw out a number that only 5% of the world's data sits in the public cloud, and that's going to shift. We know that there are a lot of compute-heavy workloads that really started out in those environments, or are leveraging that. So, there are a lot of reasons why we haven't had the data there. We are starting to see some rapid acceleration. What do you see happening in the environment? >> I think that's right. I think the 5% number just gives us a window into how big this cloud movement is, how much is still left to be accomplished. We talk about cloud, cloud, cloud, as if it's already happened, but we're only at the cusp of what's possible. And that's really what we see as this next big phase of the cloud: it's ingest, it's cloud adoption, it's migrating applications and storage into the cloud.
The next five, seven years is all about how we get there from here. >> All right, and Ron, as people look at your company, what should we be expecting kind of throughout the rest of this year as we look at you growing your future? >> It's all about making it easier to adopt the cloud. You're going to see higher levels of integration with our cloud partners, Google in particular. We do a lot of work with Google. You're going to see big steps as we move forward and make that integration better. >> You're working with the other cloud players, yes, this is a Google show, but we want to talk about the environment. Lots of companies I talk to are like, "Look, yes, Google's a player," but I talk to plenty of companies that say, "Look, 3/4 of my customers are all on Amazon," and that's where a lot of the market is today. So, what's the breadth of the offering that you have for that? >> It is. We support all three of the big cloud players: we support Amazon, Microsoft, and Google. What I will say is the Google team is very much focused on the enterprise, just like Avere is. And that actually helps us a lot. It's really helping us knock down customers and really helping get customers moved into the cloud. >> All right, Ron, I'll give you the final word. Takeaways for the week, anything else you want to share before we wrap? >> You know, it's exactly what you said. The cloud is coming, now it's just a matter of how we get there and watching the big momentum shift. >> I think Eric Schmidt said last year it was kind of, meet you where you are. This year it's, come on, now's the time, we need to go. I think we understand how big cloud is going to be, it's one of the generational shifts that we're all going to be watching, and we're in the thick of it. So, thank you, Ron, for joining us, and we'll be back with lots more coverage here. We've got call-ins and people at the show itself doing dial-ins, pulling people in.
Really broad community at this event, so stay tuned for lots more coverage, and you're watching The Cube. (upbeat music) (upbeat music)

Published Date : Mar 9 2017
