Shruthi Murthy, St. Louis University & Venkat Krishnamachari, MontyCloud | AWS Startup Showcase
(gentle music) >> Hello and welcome to today's session, theCUBE's presentation of the AWS Startup Showcase, powered by theCUBE. I'm John Furrier, your host. This is a session on breaking through with DevOps, data analytics, and cloud management tools, with MontyCloud. Thanks for joining me, I've got two great guests: Venkat Krishnamachari, who's the co-founder and CEO of MontyCloud, and Shruthi Sreenivasa Murthy, solution architect, Research Computing Group, St. Louis University. Thanks for coming on to talk about transforming IT, day one, day two operations. Venkat, great to see you. >> Great to see you again, John. >> So in this session, I really want to get into this cloud powerhouse theme you guys were talking about before on our previous Cube Conversations, and what it means for customers, because there is a real market shift happening here. And I want to get your thoughts on what the solution to the problem is, basically, that you guys are targeting. >> Yeah, John, cloud migration is happening rapidly. It's not an option. It is the current and the immediate future of many IT departments and any type of computing workloads. And applications and services these days are better served by cloud adoption. This rapid acceleration is where we are seeing a lot of challenges, and we've been helping customers with our platform so they can go focus on their business. So happy to talk more about this. >> Yeah, and Shruthi, if you can just explain your relationship with these guys, because you're a cloud architect and you're putting this together; you're a MontyCloud customer, talk about your solution. >> Yeah, I work at St. Louis University as the solutions architect for the office of the Vice President of Research. We can address St. Louis University as SLU, just to keep it easy. SLU is a 200-year-old university with a strong focus on research. 
And our goal at the Research Computing Group is to help researchers by providing the right infrastructure and computing capabilities that help them advance their research. So here at SLU, the research portfolio is quite diverse, right? We do research on vaccines, economics, geospatial intelligence, and many other really interesting areas, and, you know, it involves really large data sets. So one of the Research Computing Group's ambitious plans is to move as many high-end computation applications as possible from on-prem to AWS. And I lead all the cloud initiatives for St. Louis University. >> Yeah, Venkat and I, we've been talking many times on theCUBE, in previous interviews, about, you know, the rapid agility that's happening with serverless and functions and, you know, microservices; you start to see massive acceleration of how fast cloud apps are being built. It's put a lot of pressure on companies to hang on and manage all this. And whether it's a security group trying to lock down something, or it's just that it's so fast, the cloud development scene is really fun, and you're implementing it at a large scale. What's it like these days from a development standpoint? You've got all this greatness in the cloud. What's the DevOps mindset right now? >> SLU is slowly evolving into the AWS Center of Excellence here in St. Louis. Most of the workflows that we are trying to implement are on AWS, with DevOps and, you know, CI/CD pipelines. And basically we want it ready and updated for the researchers, where they can use it and not have to wait on any of the resources. So it has a lot of importance. >> Research as code; it's like the internet, infrastructure as code is DevOps' ethos. 
Venkat, let's get into where this all leads to, because you're seeing a culture shift in companies as they start to realize that if they don't move fast, with the blockers that get in the way of innovation, they really can't get their arms around this growth as an opportunity to operationalize all the new technology. Could you talk about the transformation goals that are going on with your customer base? What's going on in the market? Can you explain and unpack the high-level market around what you guys are doing? >> Sure thing, John. Let's bring up slide one. John, every legacy application, commercial application, even internal IT departments, they're all transforming fast. Speed has never been more important than in the era we are in today. For example, COVID research, you know, analyzing massive data sets to come up with some recommendations: they demand a lot from the IT departments so that researchers and developers can move fast. And IT departments are not only moving current workloads to the cloud, they're also ensuring the cloud is being consumed the right way, so researchers can focus on what they do best. What we've been learning from working closely with customers is that there are three steps, or three major, you know, milestones, that they'd like to achieve. I would start with the outcome, right? The important milestone IT departments are trying to get to is transforming such that they're directly tied to the key business objectives. Everything they do has to be connected to the business objective, which means the time and, you know, budget and everything is aligned towards what they want to deliver. IT departments we talk with have one common goal: they want to be experts in cloud operations. They want to deliver cloud operations excellence so that researchers and developers can move fast. But they're almost always, you know, time poor, right? 
And there are budget gaps, and there are talent and tooling gaps. A lot of that is what's causing the, you know, challenges on their journey. And we have taken a methodical and deliberate position in helping them get there. >> Shruthi, what's your reaction to that? Because, I mean, you want it faster, cheaper, better than before. You don't want to have all the operational management hassles. You mentioned that you guys want to do this turnkey. Is that the use case that you're going after? Just researchers having access to all these resources at their fingertips? What's the mindset there, what's your expectation? >> Well, one of the main expectations is to be able to deliver it to the researchers on demand, as needed, and, you know, moving from traditional on-prem HPC to the cloud would definitely help, because, you know, we are able to give the right resources to the researchers and able to deliver projects in a timely manner, and, you know, with some additional help from the MontyCloud platform, we are able to do it even better. >> Yeah, I like the onboarding thing: you get in easy and you get value quickly; that's the cloud business model. Let's unpack the platform, let's go under the hood. Venkat, if you can take us through some of the moving parts under the platform; you guys have it up at the high level, and the market's obvious for everyone out there watching: cloud ops, speed, stability. But let's go look at the platform. Let's unpack that; do you mind pulling up slide two, and let's go look at what's going on in the platform. >> Sure. Let's talk about what comes out of the platform, right? The outcomes are directly tied to what the customers would like to have, right? Customers would like to fast-track their day one activities. Solution architects such as Shruthi, their role is to try and help get out of the way of the researchers while still delivering cloud solutions, right? 
Our platform acts like a seasoned cloud architect. It's as if you've instantly turned on a cloud solution architect that Shruthi can bring online and say, hey, I want help here to go faster. Our platform then has capabilities that help customers provision a set of governance controls and drive consumption in the right way. One of the key things about driving consumption the right way is to ensure that we prevent a security, cost, or compliance issue from happening in the first place, which means you're shifting a lot of the operational burden to the left and making sure that when provisioning happens, you have guardrails in place; we help with that. The platform solves that problem without writing code. And an important takeaway here, John, is that it was built for architects and administrators who want to move fast without having to write a ton of code. And it is also a platform where they can bring online autonomous bots that can solve problems. For example, when it comes to post-provisioning, everybody is in the business of ensuring security, because it's a shared model. Everybody has to keep an eye on compliance; that is also a shared responsibility, and so is cost optimization. So we thought, wouldn't it be awesome to have architects such as Shruthi turn on a compliance bot on the platform that gives them the peace of mind that somebody else, an autonomous bot, is watching out 24 by 7 and making sure that these day two operations don't throw curveballs at them, right? That's important for agility. So the platform solves that problem with an automation approach. Going forward, on an ongoing basis, the operational burden is what gets IT departments. We've seen that happen repeatedly. You know this, John; maybe you have some thoughts on how IT can face this, and that might be better to hear from you. 
>> No, well, first I want to unpack that platform, because I think one of the advantages I see here, that people are talking about in the industry, is the collision between security postures and rapid cloud development, because DevOps and cloud folks are moving super fast. They want things done at the point of coding and in the CI/CD pipeline, as well as any kind of changes; they want it fast, not weeks. They don't want to have someone blocking it, like a security team, so automation with compliance is beautiful, because now the security teams can provide policies. Those policies can then go right into your platform. And then everyone's got the rules of the road, and anything that comes up gets managed through the policy. So I think this is a big trend that nobody's talking about, because this allows the cloud to go faster. What's your reaction to that? Do you agree? >> No, precisely right. I'll let Shruthi jump on that, yeah. >> Yeah, you know, I just wanted to bring up one of the case studies where we used MontyCloud and their compliance bot. So REDCap, Research Electronic Data Capture, also known as REDCap, is a web application. It's a HIPAA-compliant web application, and one of the flagship projects for the research group at SLU. REDCap was running on traditional on-prem infrastructure, so maintaining the servers and updating the application to its latest version was definitely a challenge. And also, granting access to the researchers had long lead times because of the rules and security protocols in place. So we wanted to be able to build a secure and reliable environment on the cloud where we could just provision on demand, and in turn ease the job of updating the application to its latest version without disturbing the production environment. Because this is a really important application, most of the doctors and researchers at St. Louis University, the School of Medicine, and St. Louis University Hospital use it. 
So given this challenge, we wanted to bring in MontyCloud's cloud ops and, you know, security expertise to simplify the provisioning. And that's when we implemented this compliance bot. Once it is implemented, it's pretty easy to understand, you know, what is compliant and what is noncompliant with the HIPAA standards, where it needs remediation efforts, and what we need to do. And again, that can also be automated. It's nice and simple, and you don't need a lot of cloud expertise to go through the compliance bot and come up with your remediation plan. >> What's the change in the outcome in terms of the speed, the turnaround time, the before and after? So before, you're dealing with obviously provisioning stuff and lead time, but just on the compliance closed loop, just to ask a question: I mean, there's a lot of manual work and maybe some workflows in there, but nothing as cool as an instant bot that solves yes or no decisions. And after MontyCloud, what are some of the times? Can you share any data there, just an order of magnitude? >> Yeah, definitely. So the provisioning was never simpler; I mean, we are able to provision with just one or two clicks, and then we have better governance guardrails, like Venkat says. And, you know, to give you specific data, the compliance bot does more than 160 checks, and it's all automated, so when it comes to security, we have definitely been able to save a lot of effort on that. And I can tell you that our researchers are able to be 40% more productive with the infrastructure, and our research computing group is able to save time on, you know, the security measures and the remediation efforts, because we get customized alerts and notifications and you just need to go in and act on them. >> So people are happier, right? People are getting along at the office, or virtually; you know, no one is yelling at each other on Slack, hey, where's this? 
Because that's really the harmony here then, okay? Joking aside, this is a real cultural issue between the speed of innovation and what could be viewed as a blocker, or just the time that, say, security teams or other teams might need to get back to you to make sure things are compliant. So that could slow things down; that tension is real, and there are some disconnects within companies. >> Yeah, John, that's spot on, and that means we have to do a better job, not only solving the traditional problems and making them simple, but also supporting the modern work culture of integrations. You know, it's not uncommon, like you called out, for researchers and architects to talk in a Slack channel often. You say, hey, I need this resource, or, I want to reconfigure this. How do we make that collaboration better? How do you make the platform intelligent, so that the platform can take some of the burden off of people, so that the platform can monitor, react, and notify in a Slack channel, or Shruthi, as the administrator, can say: hey, next time this happens, automatically go create a ticket for me; if it happens next time in this environment, automatically go run a playbook that remediates it. That gives a lot of time back, and puts peace of mind into the process and the operating model that you have inherited, where you're trying to deliver excellence and now have more help, particularly because it is a very dynamic footprint. >> Yeah, I think this whole guardrail thing is a really big deal. I think it's like a feature, but it's a super important outcome, because if you can have policies that map into these bots that can check rules really fast, then developers will have the freedom to drive as fast as they want, and literally go hard, and then shift left and do the coding and do all their stuff on the hygiene side from day one on security; it's really a big deal. Can we go back to this slide again for the other project? There's another project on that slide. 
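The automation pattern Venkat describes here, where an administrator says "next time this happens, automatically create a ticket" or "automatically go run a playbook that remediates it," is at heart an event-to-action rule table. Below is a minimal sketch of that idea; the event names, actions, and Slack channel are hypothetical illustrations, not MontyCloud's actual API.

```python
# Toy event-to-playbook dispatcher: administrators register an
# automatic action per event type; a recurring event triggers its
# registered playbook, and unknown events fall back to a notification.
# All names here are invented for illustration.

playbooks = {}

def on_event(event_type, action):
    """Register an automatic action for future occurrences of an event."""
    playbooks[event_type] = action

def handle(event):
    """Run the registered playbook for this event, or fall back to notifying."""
    action = playbooks.get(event["type"])
    if action is None:
        return f"notify #cloud-ops: unhandled event {event['type']}"
    return action(event)

on_event("unencrypted-bucket", lambda e: f"ticket created for {e['resource']}")
on_event("open-security-group", lambda e: f"playbook: closed ingress on {e['resource']}")

print(handle({"type": "open-security-group", "resource": "web-sg"}))
# playbook: closed ingress on web-sg
print(handle({"type": "cost-spike", "resource": "r5-fleet"}))
# notify #cloud-ops: unhandled event cost-spike
```

A production version would hang these handlers off event sources such as EventBridge or Config rule evaluations rather than an in-memory dict, but the shape of the rule table is the same.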
You talked about RED... was it REDCap, was that one? >> Yeah. >> Yeah, so REDCap; what's the other project? >> So SCAER, the Sinfield Center for Applied Economic Research at SLU, also known as SCAER. They're pretty data intensive, and they're into some really sophisticated research. The Center gets daily dumps of sensitive, de-identified geo data from various sources, and it's a terabyte or so every day; it becomes petabytes. And, you know, we don't get the data in workable formats for the researchers to analyze. So the first process is to convert this data into a workable format and keep it analysis ready, and doing this at a large scale has many challenges. We also had to make this data available to a group of users and some external collaborators, which adds, you know, more challenges again, because we also have to do this without compromising on the security. So to handle these large data sizes, we had to deploy compute-heavy instances, such as, you know, multiple R5.12xlarge instances, and optimizing the cost and the resources deployed on the cloud, again, was a huge challenge. So that's when we took MontyCloud's help in automating the whole process of ingesting the data into the infrastructure and then converting it into a workable format. And this was all automated. And after automating most of the efforts, we were able to bring down the data processing time from two weeks or more to three days, which really helped the researchers. So MontyCloud's platform also helped us with automating the, you know, resource optimization process, and that in turn helped bring the costs down. So it's been pretty helpful. >> That's impressive; weeks to days. I mean, this is the theme, Venkat: speed, speed, speed. Hybrid, hybrid. A lot of stuff happening. I mean, this is the new normal. This is going to make companies more productive if they can get the apps built faster. 
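The SCAER pipeline Shruthi outlines, daily dumps of raw geo data converted automatically into an analysis-ready format, reduces to an ingest-and-normalize step that runs without human touch. Here is a toy sketch of that step; the field names and the tiny inline CSV are invented stand-ins for the real terabyte-scale dumps, which would of course be streamed rather than held in memory.

```python
import csv, io

# Toy sketch of the ingest-and-convert step in an automated pipeline:
# a raw daily dump (here a small CSV string standing in for a
# multi-gigabyte file) is normalized into typed, analysis-ready
# records. Field names are invented; real de-identified geo dumps
# would need far more cleaning and validation.

RAW_DUMP = """device,lat,lon,ts
a1,38.6270,-90.1994,1618300800
a2,38.6353,-90.2000,1618300860
"""

def to_workable(raw_text):
    """Parse a raw dump into typed, analysis-ready dicts."""
    rows = csv.DictReader(io.StringIO(raw_text))
    return [
        {"device": r["device"],
         "point": (float(r["lat"]), float(r["lon"])),
         "ts": int(r["ts"])}
        for r in rows
    ]

records = to_workable(RAW_DUMP)
print(len(records), records[0]["point"])
# 2 (38.627, -90.1994)
```

In the automated version of the pipeline, a step like this would be triggered on every new dump's arrival, so nobody has to babysit the conversion; that hands-off triggering is where the two-weeks-to-three-days savings comes from.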
What do you see as the CEO and founder of the company? You're out there, you know, forging new ground with this great product. What do you see as the blockers from customers? Is it cultural, is it lack of awareness? Why aren't people jumping all over this? >> Oh, people are, right? They go at it in so many different ways, whether it's, you know, the one-person IT team or the massively well-funded IT team. Everybody wants to excel at what they're delivering in cloud operations; the path to that is the challenging part, right? What we are seeing is customers trying to build their own operating model and writing custom code, and then there's a lot of need for provisioning, governance, security, compliance, and monitoring. So they start integrating point tools, and then suddenly the IT department is carrying what they call a tax, right? They have to maintain the technical debt while cloud services move fast. It's not uncommon for one of the developers or one of the projects to suddenly consume a brand-new resource. And as you know, AWS launches a lot more services every month, right? So suddenly you're not keeping up with that service. So we've looked at this from the point of view of: how do we get customers to focus on what they want to do, and automate the things that we can help them with? >> Let me rephrase the question, if you don't mind, because I didn't want to give the impression that you guys aren't... you guys have a great solution. But I think when I see enterprises, you know, they're transforming, right? So it's not so much the cloud innovators like you guys; it's really the mainstream enterprise. So I have to ask you, from a customer standpoint, what are some of the cultural or technical reasons why they're not going faster? 
Because, for everyone, maybe it's the pandemic forcing projects to be doubled down on, or some are going to be cut. This common theme of making things available faster, cheaper, stronger, and more secure is what cloud does. What are some of the enterprise challenges that they have? >> Yeah, you know, it might be money, right? There are some cultural challenges; like Andy Jassy says, sometimes it's leadership, right? You want top-down leadership that takes a deterministic step towards transformation, then adequately funds the team with the right skills and the tools; a lot of that plays into it. And there's inertia, typically, in an existing process. And when you go to cloud, you can do 10X better; people see that, but it doesn't always percolate down to how you get there. So those challenges are compounded, and digital transformation leaders have to, you know, make that deliberate bet there, be more KPI-driven. One of the things we are seeing in companies that do well is that the leadership decides that here are our top business objectives and KPIs, and we want the software and the services and the cloud division to support those objectives. When they take that approach, transformation happens. But that is a lot easier said than done. >> Well, you're making it really easy with your solution, and we've done multiple interviews. I've got to say, you're really onto something with this provisioning and the compliance bots. That's really strong, and it only gets stronger from there, with the trend of security being built in. Shruthi, I've got to ask you, since you're the customer: what's it like working with MontyCloud? It sounds so awesome. You're a customer, you're using it. What's your review, what's your take on them? >> Yeah, they are doing a pretty good job in helping us automate most of our workflows. 
And when it comes to keeping a tab on the resources, the utilization of the resources, so we can in turn keep a tab on the cost, you know, their compliance bots and their cost optimization tab are pretty helpful. >> Yeah, well, you're knocking projects down from weeks to days; looking good, I mean, looking real strong. Venkat, this is the track record you want to see with successful projects. Take a minute to explain what else is going on with MontyCloud, other use cases that you see that are really primed for MontyCloud's platform. >> Yeah, John, a quick minute there. Autonomous cloud operations is the goal. It's never done, right? There's always some hands-on work to do. But if you set a goal such that customers have a solution that automates most of the routine operations, then they can focus on the business. So we are going to relentlessly focus on the fact that autonomous operations will help digital transformation happen faster, and we can create a lot more value for customers if they can deliver on their KPIs and objectives. So our investments in the platform are going more towards that. Today we already have a fully automated compliance bot, a security bot, a cost optimization recommendation engine, and a provisioning and governance engine. Where we're going is we are enhancing all of this and providing customers a lot more fluidity in how they can use our platform: click to perform your routine operations, click to set up rules-based automatic escalation or remediation, cut down the number of hops a particular process takes, and foster collaboration. All of this is what our platform is enhancing more and more. We intend to learn more from our customers and deliver better for them as we move forward. >> That's a good business model: make things easier, reduce the steps it takes to do something, and save money. And you're doing all those things with the cloud. Awesome stuff. 
It's really great to hear your success stories and the work you're doing over there. Great to see researchers getting resources and doing their jobs faster. And tons of data; you've got petabytes coming in. It's pretty impressive. Thanks for sharing your story. >> Sounds good. And, you know, one quick call out: customers can go to MontyCloud.com today, and within 10 minutes they can get an account. They get very actionable and valuable recommendations on where they can save costs and which security and compliance issues they can fix. There's a ton of out-of-the-box reports. One click to find out whether you have some data that is not encrypted, or if any of your servers are open to the world. A lot of value that customers can get in under 10 minutes. And we believe in that model: give the value to customers. They know what to do with that, right? So customers can go sign up for a free trial at MontyCloud.com today and get the value. >> Congratulations on your success, and great innovation. A startup showcase here with theCUBE's coverage of the AWS Startup Showcase: breaking through with DevOps, data analytics, and cloud management, with MontyCloud. I'm John Furrier, thanks for watching. (gentle music)
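Venkat's closing examples, one click to find data that is not encrypted or servers open to the world, are exactly the kind of rules an automated compliance pass evaluates. The sketch below is a toy version of such a pass; the two rules and the hand-written resource records are invented for illustration, whereas a real bot like the one described above runs its 160-plus checks against live AWS configuration.

```python
# Toy sketch of an automated compliance pass: each rule inspects a
# resource record and returns True when the resource is compliant.
# Records here are hand-written dicts standing in for live AWS config.

RULES = {
    "storage-encrypted": lambda r: r.get("type") != "bucket" or r.get("encrypted", False),
    "not-open-to-world": lambda r: "0.0.0.0/0" not in r.get("ingress", []),
}

def compliance_report(resources):
    """Return {resource_id: [failed rule names]} for noncompliant resources."""
    report = {}
    for res in resources:
        failed = [name for name, check in RULES.items() if not check(res)]
        if failed:
            report[res["id"]] = failed
    return report

resources = [
    {"id": "redcap-data", "type": "bucket", "encrypted": True},
    {"id": "raw-uploads", "type": "bucket", "encrypted": False},
    {"id": "web-sg", "type": "security_group", "ingress": ["0.0.0.0/0"]},
]

print(compliance_report(resources))
# {'raw-uploads': ['storage-encrypted'], 'web-sg': ['not-open-to-world']}
```

Because each rule is a pure yes-or-no check, the report maps directly onto the remediation plan Shruthi mentions: every entry is either fixable by an automated playbook or handed to a human with the failing rule named.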
DockerCon2021 Keynote
>>Individuals create. Developers translate ideas to code, to create great applications, and great applications touch everyone. At Docker, we know that collaboration is key to your innovation: sharing ideas, working together, launching the most secure applications. Docker is with you wherever your team innovates, whether it be robots or autonomous cars, research to save lives during a pandemic, revolutionizing how to buy and sell goods online, or even going into the unknown frontiers of space. Docker is launching innovation everywhere. Join us on the journey to build, share, run the future. >>Hello and welcome to DockerCon 2021. We're incredibly excited to have more than 80,000 of you join us today from all over the world. As it was last year, this year DockerCon is 100% virtual and 100% free, so as to enable as many community members as possible to join us. Now, 100% virtual is also an acknowledgement of the continuing global pandemic, in particular the ongoing tragedies in India and Brazil. The Docker community is a global one, and on behalf of all DockerCon attendees, we are donating $10,000 to UNICEF to support efforts to fight the virus in those countries. Now, even in those regions of the world where the pandemic is being brought under control, virtual first is the new normal. It's been a challenging transition. This includes our team here at Docker. And we know from talking with many of you that you and your developer teams are challenged by this as well. So to help application development teams better collaborate and ship faster, we've been working on some powerful new features, and we thought it would be fun to start off with a demo of those. How about it? Want to have a look? All right, then, no further delay. I'd like to introduce Youi Cal and Ben Gotch. Over to you and Ben. >>Morning, Ben, thanks for jumping on real quick. 
The one about updates and the docs landing page Smith, the doc combat and more prominence. >>Yeah. I've got something working on my local machine. I haven't committed anything yet. I was thinking we could try, um, that new Docker dev environments feature. >>Yeah, that's cool. So if you hit the share button, what I should do is it will take all of your code and the dependencies and the image you're basing it on and wrap that up as one image for me. And I can then just monitor all my machines that have been one click, like, and then have it side by side, along with the changes I've been looking at as well, because I was also having a bit of a look and then I can really see how it differs to what I'm doing. Maybe I can combine it to do the best of both worlds. >>Sounds good. Uh, let me get that over to you, >>Wilson. Yeah. If you pay with the image name, I'll get that started up. >>All right. Sen send it over >>Cheesy. Okay, great. Let's have a quick look at what you he was doing then. So I've been messing around similar to do with the batter. I've got movie at the top here and I think it looks pretty cool. Let's just grab that image from you. Pick out that started on a dev environment. What this is doing. It's just going to grab the image down, which you can take all of the code, the dependencies only get brunches working on and I'll get that opened up in my idea. Ready to use. It's a here close. We can see our environment as my Molly image, just coming down there and I've got my new idea. >>We'll load this up and it'll just connect to my dev environment. There we go. It's connected to the container. So we're working all in the container here and now give it a moment. What we'll do is we'll see what changes you've been making as well on the code. So it's like she's been working on a landing page as well, and it looks like she's been changing the banner as well. So let's get this running. Let's see what she's actually doing and how it looks. 
We'll set up our checklist and then we'll see how that works. >>Great. So that's now rolling. So let's just have a look at what Youi was doing, what changes she had made. Compare those to mine. I've jumped back into my dev container UI, and you can see that I've got both of those running side by side, with my changes and Youi's changes. Okay. So she's put Molly up there rather than Moby; somebody had the same idea. So I think in a way I can make us both happy. So we just jump back in, and what we'll do is just add Molly and Moby here, and I'll save that. And what we can see is, because I'm just working within the container, rather than having to do a sort of rebuild of everything, the server just reloads my content and that goes straight to the page. So what I can then do is come up to my browser here. Once that's all refreshed, refresh the page once, hopefully, maybe twice, and we should then be able to see that we get Molly and Moby come up. So there we go, got Molly and Moby. So what we'll do now is we'll describe that state, it sends us our image, and then we'll just create one of those to share with Youi. And we'll get a link for that. I guess we'll send that back over to you. >>So I've had a look at what you were doing, and I've actually made a change that I think might work for both of us. I wondered if you could take a look at it if I send it over. >>Sounds good. Let me grab the link. >>Yeah, it's a dev environment link again. So if you just open that back in the Docker dashboard, it should be able to open up the code that I've changed, and then just run it in the same way you normally do. And that shouldn't interrupt what you're already working on, because it'll be able to run side by side with the other branch you've already got. >>Got it. Loading here. Well, that's great. It's Molly and Moby together. I love it. I think we should ship it. >>Awesome. I guess let's ship it and get on with the rest of DockerCon. Wasn't that cool? 
Thank you Joey. Thanks Ben. Everyone, we'll have more of this later in the keynote, so stay tuned. As I said earlier, we've all been challenged by this past year, whether by the COVID pandemic, the complete evaporation of customer demand in many industries, unemployment or business bankruptcies; we've all been touched in some way. And yet, even amidst these tragedies, last year we saw multiple sources of hope and inspiration. For example, in response to COVID we saw global communities, including the tech community, rapidly innovate solutions for analyzing the spread of the virus, sequencing its genes and visualizing infection rates. In fact, individuals and teams collaborating on solutions for COVID have created more than 1,400 publicly shareable images on Docker Hub. As another example, we all witnessed the historic landing and exploration of Mars by the Perseverance rover and its Ingenuity drone. >>Now, what's common in these examples? These innovative and ambitious accomplishments were made possible not by any single individual, but by teams of individuals collaborating together. The power of teams is why we've made development teams central to Docker's mission: to build tools and content development teams love, to help them get their ideas from code to cloud as quickly as possible. One of the frictions we've seen that can slow down development teams is that the path from code to cloud can be a confusing one, riddled with multiple point products, tools, and images that need to be integrated and maintained in an automated pipeline in order for teams to be productive. That's why, a year and a half ago, we refocused Docker on helping development teams make sense of all this. Specifically, our goal is to provide development teams with the trusted content, the sharing capabilities and the pipeline integrations with best-of-breed third-party tools to help teams ship faster; in short, to provide a collaborative application development platform. >>Everything a team needs to build.
Share and run applications. Now, as I noted earlier, it's been a challenging year for everyone on our planet, and it has been similar for us here at Docker. Our team had to adapt to working from home, local lockdowns caused by the pandemic, and other challenges. And despite all this, together with our community and ecosystem partners, we accomplished many exciting milestones. For example, in open source, together with the community and our partners, we open sourced or made major contributions to many projects, including OCI distribution and the Compose plugins. Building on these open source projects, we added powerful new capabilities to the Docker product, both free and subscription. For example, support for WSL 2 and Apple Silicon in Docker Desktop, and vulnerability scanning, audit logs and image management in Docker Hub. >>And finally, delivering an easy-to-use, well-integrated development experience with best-of-breed tools and content is only possible through close collaboration with our ecosystem partners. For example, this last year we had over 100 commercial ISVs join our Docker Verified Publisher program and over 200 open source projects join our Docker Sponsored Open Source program. As a result of these efforts, we've seen some exciting growth in the Docker community. In the 12 months since last year's DockerCon, for example, the number of registered developers grew 80% to over 8 million. These developers created many new images, increasing the total by 56% to almost 11 million. And the images in all these repositories were pulled by more than 13 million monthly active IP addresses, totaling 13 billion pulls a month. Now, while the growth is exciting, at Docker we're even more excited about the stories we hear from you and your development teams about how you're using Docker and its impact on your businesses.
For example, cancer researchers and their bioinformatics development team at the Washington University School of Medicine needed a way to quickly analyze their clinical trial results and then share the models, the data and the analysis with other researchers. They use Docker because it gives them the ease of use, choice of pipeline tools and speed of sharing so critical to their research, and most importantly, to the lives of their patients. Stay tuned for another powerful customer story later in the keynote from Matt Fall, VP of engineering at Oracle Insights. >>So with this last year behind us, what's next for Docker? The challenges of this last year forced changes in how development teams work, and they will be felt for years to come. What we've learned in our discussions with you will have a long-lasting impact on our product roadmap. One of the biggest takeaways from those discussions is that you and your development teams want to be quicker to adapt to changes in your environment so you can ship faster. So what is Docker doing to help with this? First, trusted content. Teams that can focus their energies on what is unique to their businesses, and spend as little time as possible on undifferentiated work, are able to adapt more quickly and ship faster. In order to do so, they need to be able to trust the other components that make up their app. >>Together with our partners, Docker is doubling down on providing development teams with trusted content and the tools they need to use it in their applications. Second, remote collaboration. On a development team, asking a coworker to take a look at your code used to be as easy as swiveling their chair around, but given what's happened in the last year, that's no longer the case. So, as was even hinted at in the demo at the beginning, you'll see us deliver more capabilities for remote collaboration within a development team.
And we're enabling development teams to quickly adapt to any team configuration, all on-prem, hybrid, or all work-from-home, helping them remain productive and focused on shipping. Third, ecosystem integrations. Those development teams that can quickly take advantage of innovations throughout the ecosystem, instead of getting locked into a single monolithic pipeline, will be the ones able to deliver apps which impact their businesses faster. >>So together with our ecosystem partners, we are investing in more integrations with best-of-breed tools and integrated, automated app pipelines. Furthermore, we'll be writing more public APIs and SDKs to enable ecosystem partners and development teams to roll their own integrations. We'll be sharing more details about remote collaboration and ecosystem integrations later in the keynote. I'd like to take a moment to share what Docker and our partners are doing for trusted content. Providing development teams access to content they can trust allows them to focus their coding efforts on what's unique and differentiated. To that end, Docker and our partners are bringing more and more trusted content to Docker Hub. Docker Official Images are 160 images of popular upstream open source projects that serve as foundational building blocks for any application. These include operating systems, programming languages, databases, and more. Furthermore, these are updated, patched, scanned and certified frequently, so that no image is older than 30 days.
Docker Verified Publisher images are published by more than 100 commercial ISVs. The image repos are explicitly designated as verified, so developers searching for components for their app know that the ISV is actively maintaining the image. Docker Sponsored Open Source projects, announced late last year, feature images for more than 200 open source communities.
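To make the building-block idea concrete, a team might base its build on one of those Docker Official Images rather than a hand-rolled base. A minimal sketch, assuming a Python app; the file names are placeholders, though `python` is a real official image:

```Dockerfile
# Hypothetical service built on a Docker Official Image as the trusted base.
# The python:3.9-slim tag family is real; the app files are placeholders.
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

CMD ["python", "app.py"]
```

Because the base image is rebuilt and patched upstream, a periodic rebuild with `docker build --pull` picks up those fixes without any change to this file.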
Docker sponsors these communities by providing free storage and networking resources and offering their community members unrestricted access. Repos for businesses allow businesses to update and share their apps privately within their organizations, using role-based access control and user authentication. And finally, public repos for communities enable community projects to be freely shared with anonymous and authenticated users alike. >>And for all these different types of content, we provide services for both development teams and ISVs, for example, vulnerability scanning and digital signing for enhanced security, search and filtering for discoverability, packaging and updating services, and analytics about how these products are being used. All this trusted content we make available to development teams to directly discover, pull and integrate into their applications. Our goal is to meet development teams where they live. So for those organizations that prefer to manage their internal distribution of trusted content, we've collaborated with leading container registry partners. We announced our partnership with JFrog late last year, and today we're very pleased to announce our partnerships with Amazon and Mirantis, providing an integrated, seamless experience for our joint customers. Lastly, the container images themselves and this end-to-end flow are built on open industry standards, which provides all the teams with flexibility and choice. Trusted content enables development teams to rapidly build, as it lets them focus on their unique differentiated features and use trusted building blocks for the rest. We'll be talking more about trusted content, as well as remote collaboration and ecosystem integrations, later in the keynote. Now, ecosystem partners are not only integral to the Docker experience for development teams, they're also integral to a great DockerCon experience. So please join me in thanking our DockerCon sponsors and checking out their talks throughout the day. I also want to thank some others. First up, the Docker team. Like all of you, this last year has been extremely challenging for us, but the Docker team rose to the challenge and worked together to continue shipping great product. The Docker community of Captains, community leaders, and contributors: with your welcoming of newcomers, enthusiasm for Docker and open exchanges of best practices and ideas, Docker wouldn't be Docker without you. And finally, our development team customers. >>You trust us to help you build the apps your businesses rely on. We don't take that trust for granted. Thank you. In closing, we often hear about the 10x developer, capable of great individual feats that can transform a project. But I wonder if we as an industry have perhaps gotten this wrong by putting so much emphasis on the individual. As discussed at the beginning, great accomplishments, like innovative responses to COVID-19, like landing on Mars, are more often the results of individuals collaborating together as a team, which is why our mission here at Docker is to deliver tools and content developers love, to help their team succeed and become 10x teams. Thanks again for joining us. We look forward to having a great DockerCon with you today, as well as a great year ahead of us. Thanks, and be well. >>Hi, I'm Dana Lawson, VP of engineering here at GitHub, and my job is to enable this rich, interconnected community of builders and makers to build even more, and hopefully have a great time doing it. In order to enable the best platform for developers, which I know is something we are all passionate about, we need to partner across the ecosystem to ensure that developers can have a great experience across GitHub and all the tools that they want to use, no matter what they are. My team works to build the tools and relationships to make that possible.
I am so excited to join Scott on this virtual stage to talk about increasing developer velocity. So let's dive in. Now, I know this may be hard for some of you to believe, but as a former sysadmin some 21 years ago, working on Sun SPARC workstations, we've come such a long way from the random scripts and disparate systems that we stitched together, to this whole inclusive developer workflow experience. >>As a sysadmin back then, you were just one piece of the siloed experience. But I didn't want to just push code to production, so I created scripts that did it for me. I taught myself how to code. I was the model lazy sysadmin that got dangerous, and having pushed a little too far, I realized that working in production and building features is really a team sport, and that all of us have the opportunity to be customer obsessed today. As developers, we can go beyond the traditional DevOps mindset. We can really focus on adding value to the customer experience by ensuring that our work contributes to increasing uptime and meeting SLAs, all while being agile and productive. We get there when we move from a pass-the-baton system to an interconnected developer workflow that increases velocity in every part of the cycle. We get to work better and smarter. >>And honestly, in a way that is so much more enjoyable, because we automate away all the mundane, manual and boring tasks. So we get to focus on what really matters: shipping the things that humans get to use and love. Docker has been a big part of enabling this transformation. 10, 20 years ago, we had Tomcat containers, which are not Docker containers, and for y'all hearing this for the first time, go Google it. But that was the way we built our applications: we had to segment them on the server and give them resources. Today we have Docker containers, these little mini OSes in Docker images, and you can run them multiple times in an orchestrated manner with the power of Actions and Docker combined.
It's just so incredible what you can do. And by the way, I'm showing you Actions and Docker, which I hope you use, because both are great and free for open source. >>But the key takeaway is really the workflow and the automation, which you certainly can do with other tools. Okay, I'm going to show you just how easy this is, because believe me, if this is something I can learn and do, anybody out there can. In this demo, I'll show you the basic components needed to create and use a packaged Docker container action. And like I said, you won't believe how awesome the combination of Docker and Actions is, because you can enable your workflow to do whatever you're trying to do. This is a super basic example, so small you could do it in like 10 seconds, like I am here, creating an action to do a simple task, like pushing a message to your logs. And the cool thing is you can run it on any event; on this one, like I said, we're going to use push. >>You could even order a pizza every time you roll into production if you wanted, but at GitHub, that'd be a lot of pizzas. And the funny thing is, somebody out there has actually tried this and written that action. If you haven't used Docker and Actions together, check out the docs on either GitHub or Docker to get you started. And a huge shout out to all those doc writers out there; I built this demo today using those instructions. And if I can do it, I know you can too. But enough yapping, let's get started. To save some time, and since a lot of us are Docker and GitHub nerds, I've already created a repo with a Dockerfile, so we're going to skip that step. Next, I'm going to create the action's YAML file. If you know YAML, you know Actions: the metadata defines my important log stuff to capture, and the input and my timeout parameter to pass as inputs to the Docker container. GitHub builds an image from your Dockerfile and runs the commands in a new container.
The cool thing is, is you can use any Docker image in any language for your actions. It doesn't matter if it's go or whatever in today's I'm going to use a shell script and an input variable to print my important log stuff to file. And like I said, you know me, I love me some. So let's see this action in a workflow. When an action is in a private repo, like the one I demonstrating today, the action can only be used in workflows in the same repository, but public actions can be used by workflows in any repository. So unfortunately you won't get access to the super awesome action, but don't worry in the Guild marketplace, there are over 8,000 actions available, especially the most important one, that pizza action. So go try it out. Now you can do this in a couple of ways, whether you're doing it in your preferred ID or for today's demo, I'm just going to use the gooey. I'm going to navigate to my actions tab as I've done here. And I'm going to in my workflow, select new work, hello, probably load some workflows to Claire to get you started, but I'm using the one I've copied. Like I said, the lazy developer I am in. I'm going to replace it with my action. >>That's it. So now we're going to go and we're going to start our commitment new file. Now, if we go over to our actions tab, we can see the workflow in progress in my repository. I just click the actions tab. And because they wrote the actions on push, we can watch the visualization under jobs and click the job to see the important stuff we're logging in the input stamp in the printed log. And we'll just wait for this to run. Hello, Mona and boom. Just like that. It runs automatically within our action. We told it to go run as soon as the files updated because we're doing it on push merge. That's right. Folks in just a few minutes, I built an action that writes an entry to a log file every time I push. So I don't have to do it manually. 
In essence, with automation, you can be kind to your future self and save time and effort to focus on what really matters. >>Imagine what I could do with even a little more time; probably order all y'all pizzas. That is the power of the interconnected workflow, and it's amazing, and I hope you all go try it out. But why do we care about all of that? Just like in the demo, I took a manual task, which both takes time and is easy to forget, and automated it. So I don't have to think about it, and it's executed every time, consistently. That means less time for me to worry about my human errors and mistakes, and more time to focus on actually building the cool stuff that people want. Obviously automation drives developer productivity, but what is even more important to me is developer happiness. Tools like VS Code, Actions, Docker, Heroku, and many others reduce manual work, which allows us to focus on building things that are awesome >>and to get into that wonderful state that we call flow. According to research by UC Irvine and Humboldt University in Germany, it takes an average of 23 minutes to enter an optimal creative state, what we call flow, or to re-enter it after a distraction, like your dog at your office door. So staying in flow is critical to developer productivity, and as a developer, it just feels good to be cranking away at something with deep focus. I certainly know that I love that feeling. The intuitive collaboration and automation features we built into GitHub help developers stay in flow, allowing you and your team to do so much more. To bring the benefits of automation into perspective, in our annual Octoverse report, Dr. Nicole Forsgren, one of my buddies here at GitHub, took a look at developer productivity over this past year. You know what we found? >>We found that public GitHub repositories that use automation on pull requests merge those pull requests 1.2 times faster.
And the number of merged pull requests increased by 1.3 times; that is 34% more pull requests merged. In other words, automation can dramatically increase both the speed and the quantity of work completed in any role. Just like in open source development, you'll work more efficiently and with greater impact when you invest the bulk of your time in the work that adds the most value, and eliminate or outsource the rest, because you don't need to do it; let the machines do it. By leveraging automation in their workflows, teams minimize manual work, reclaim that time for innovation, and maintain that state of flow in development and collaboration. More importantly, their work is more enjoyable, because they're not wasting time doing the things that the machines or robots can do for them. >>And remember what I said at the beginning: many of us want to be efficient, heck, even lazy. So why would I spend my time doing something I can automate? Now, you can read more about the research behind this at octoverse.github.com, which also includes a lot of other cool info about the open source ecosystem and how it's evolving. Speaking of the open source ecosystem, we at GitHub are so honored to be the home of more than 65 million developers who build software together from everywhere across the globe. Today, we're seeing software development taking shape as the world's largest team sport, where development teams collaborate, build and ship products. It's no longer a solo effort like it was for me. You don't have to take my word for it; check out this globe. This globe shows real data: every speck of light you see here represents a contribution to an open source project somewhere on earth. >>These arcs reach across continents, cultures, and other divides. It's distributed collaboration at its finest. 20 years ago, we had no concept of DevOps, SecOps, or all the new ops that are going to be happening.
But today's development and ops teams are connected like never before, and this is only going to continue to evolve at a rapid pace, especially as we continue to empower the next hundred million developers. Automation helps us focus on what's important and greatly accelerates innovation. Just this past year, we saw some of the most groundbreaking technological advancements and achievements, I'll say, ever, including critical COVID-19 vaccine trials, as well as the first powered flight on Mars just this past month. These breakthroughs were only possible because of the interconnected, collaborative open source communities on GitHub and the amazing tools and workflows that empower us all to create and innovate. Let's continue building, integrating, and automating, so we collectively can give developers the experience they deserve, all of the automation and beautiful UIs that we can muster, so they can continue to build the things that truly do change the world. Thank you again for having me today, DockerCon. It has been a pleasure to be here with all you nerds. >>Hello, I'm Justin Cormack. Lovely to see you here. Talking to developers, their world is getting much more complex. Developers are being asked to do everything: security, ops, on-call, data analysis, all of it being put on their plates. Software's eating the world, of course, and this all makes sense in that view, but they need help. One team told us it shifted all their .NET apps to run on Linux from Windows, but their developers found the complexity of Dockerfiles based on Linux shell scripts really difficult. You've asked us to help make these things easier for your teams. You want to collaborate more in a virtual world, but you've asked us to make this simpler and more lightweight. You, the developers, have asked for a paved road experience. You want things to just work, with simple options, to be there. But it's not just the paved road; you also want to be able to go off-road and do interesting and different things.
>>Use different components, experiment, innovate as well. We'll always offer you both those choices. At different times, different developers want different things, and it may shift from one to the other, paved road or off-road. Sometimes you want reliability and dependability, in the zone for day-to-day work. But sometimes you have to do something new, incorporate new things in your pipeline, build applications for new places. Then you need those off-road capabilities too, so you can really get under the hood and go and build something weird and wonderful and amazing that gives you new options. Docker is an independent choice. We don't own the roads, and we're not pushing you into any technology choices because we own them. We're really supporting and driving open standards, such as OCI, and working in open source with the CNCF. We want to help you get your applications from your laptops to the clouds, and beyond, even into space. >>Let's talk about the key focus areas that frame what Docker is doing going forward. These are simplicity, sharing, flexibility, trusted content and secure supply chain. Compared to building with the underlying kernel primitives like namespaces and cgroups, the original Docker CLI and Docker Engine were a magical experience for everyone. They really took those innovations and put them in a world where anyone could use them, but that's not enough. We need to continue to innovate, and everyone is trying to get more done, faster, all the time, and there's a lot more we can do. We're here to take complexity away from deeply complicated underlying things and give developers tools that are just amazing and magical. One of the areas where we haven't made things magical enough, and that we're really planning around now, is that Docker images are the key parts of your application, but how do I do something with an image? Where do I attach volumes with this image? What's the API?
Where's the SDK for this image? How do I find an example or docs? In an API-driven world, every bit of software should have an API and an API description, and our vision is that every container should have this API description and the ability for you to understand how to use it. And it's all a seamless thing from your code to the cloud, local and remote; you can use containers in this amazing and exciting way. >>One thing I really noticed in the last year is that companies that started off remote-first have constant collaboration. They have Zoom calls open all day, terminals shared, always working together. Other teams are really trying to learn how to do this style, because they didn't start like that. We used to walk around to other people's desks or share services on the local office network, and it's very difficult to do that anymore. You want sharing to be really simple, lightweight, and informal: let me try your container, or maybe let's collaborate on this together. Fast collaboration enables fast iteration, fast working together. And you want to share more: you want to share whole development environments, not just an image. We all work by seeing something someone else on our team is doing and saying, how can I do that too? We want to make that sharing really, really easy. Ben's going to talk about this more in just a minute. >>We know how excited you are by Apple Silicon and Graviton; not excited because there's a new architecture, but excited because it's faster, cooler, cheaper, better, and offers new possibilities. M1 support was the most asked-for thing ever on our public roadmap, and we listened and shipped it. We see really exciting possibilities in you shipping Arm applications all the way from desktop to production.
We know that you all use different clouds and have different places to deploy to. We work with AWS and Azure and Google and more, and we want to help you ship on-prem as well. And we know that you use a huge number of languages, and that containers help you build applications that use different languages for different parts of the application, or for different applications, so you can choose the best tool. You have JavaScript that runs everywhere, Go and Rust, Python for data and ML; perhaps you're getting excited about WebAssembly after hearing about it at KubeCon. There are all sorts of things. >>So we need to make all that easier. We've been running a whole month of Python on the blog, and we're doing a month of JavaScript, because we get lots of specific questions about how best to put each language into production. That detail is important for you. GPUs have been difficult to use; we've added GPU support in Desktop for Windows, but we know there's a lot more to do to make the multi-architecture, multi-hardware, multi-accelerator world work better, and also securely. So there's a lot more work to do to support you in all these things you want to do. >>We all start out building our own applications, but it turns out we're really using existing images as components. In a survey earlier this year, almost half of container image usage was public images rather than private images, and this is growing rapidly. Almost all software has open source components, and maybe 85% of the average application is open source code. And what you're doing is taking whole container images as modules in your application. This was always the model with Docker Compose, and it's a model that you're already using. You trust Docker Official Images; we know that they make up 25% of pulls on Docker Hub, and Docker Hub provides you the widest choice and the best support for that trusted content.
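The "whole images as modules" model can be sketched in a few lines of Compose; the service names and tags below are illustrative, while `postgres` and `redis` are real Docker Official Images:

```yaml
# Illustrative docker-compose.yml: whole public images used as components
# alongside your own code. Service names and version tags are examples only.
services:
  web:
    build: .            # your own application code, in any language
    ports:
      - "8000:8000"
    depends_on:
      - db
      - cache
  db:
    image: postgres:13  # a Docker Official Image used as a drop-in module
  cache:
    image: redis:6      # another official image, pulled rather than built
```

Only the `web` service is built locally; the database and cache arrive as trusted, prebuilt building blocks, which is the pattern the survey numbers above describe.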
We're talking to people about how to make this more helpful. We know, for example, that some popular base images are just going out of support, but the image doesn't yet tell you that; we're working with Canonical to improve messaging from specific images about lifecycle and support.
>>We know that you need more images, regularly updated, free of vulnerabilities, and easy to use and discover; Donnie and Marie are going to talk about that more. This last year, the SolarWinds attack has been in the news a lot: the software you're using and trusting could be compromised and might be all over your organization. We need to reduce the risk of using vital open source components. We're seeing more software supply chain attacks, with the supply chain being targeted because it's often an easier place to attack than production software. We need to be able to use this external code safely. Everyone needs to start from trusted sources like official images, scan for known vulnerabilities using Docker scan, which we built in partnership with Snyk and launched at DockerCon last year, and keep updating base images and dependencies. We're going to help you have the control and understanding about your images that you need to do this. >>And there's more: we're also working on the Notary v2 project in the CNCF to revamp container signing, so you can tell where your software comes from. We're working on tooling to make updates easier, and to help you understand and manage all the components you're using. Security is a growing concern for all of us; it's really important, and we're going to help you work with security. We can't achieve all our dreams, whether that's space travel or amazing developer products, without deep partnerships with our community and the clouds. The cloud providers are where most of you ship your applications to production, and simple routes that take your work and deploy it easily,
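The scan-and-update habit described here can be sketched with the Docker CLI; `docker scan` is the Snyk-backed command mentioned in the talk, though exact flags vary by version, and the image names below are examples only:

```shell
# Illustrative vulnerability-scanning workflow (requires Docker with the
# Snyk-backed scan plugin; image names are examples, not recommendations).

# Scan a base image for known vulnerabilities
docker scan python:3.9-slim

# Scan your own image and pass the Dockerfile so the scanner can suggest
# base-image upgrades
docker scan --file Dockerfile myorg/myapp:latest

# Rebuild regularly with --pull to pick up freshly patched base images
docker build --pull -t myorg/myapp:latest .
```

Running the scan in CI, and rebuilding on a schedule rather than only on code changes, is one way to keep base images and dependencies from aging past their patches.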
Reliably and securely, are really important: getting into production simply, easily and securely. And we've done a bunch of work on that, but we know there's more to do. >>The CNCF and the open source cloud native community are an amazing ecosystem of creators and lovely people, creating an amazing, strong community and supporting a huge amount of innovation. It has its roots in the container ecosystem, and its dreams go beyond that. Much of the innovation has focused on the operator experience so far, but developer experience is a growing concern in that community as well, and we're really excited to work on that. We also use Kubernetes ourselves, and we know you do; we know that you want it to be easier to use in your environment. We just shifted Docker Hub to run fully on Kubernetes, and we're also using many of the other projects, Argo for instance. We're spending a lot of time working with Microsoft and Amazon right now on getting Notary v2 ready to ship in the next few months. That's a really detailed piece of collaboration we've been working on for a long time, and it's really important for our community and for the security of containers, getting trusted content to you. Working together makes us stronger. Our community is made up of all of you, and it's always amazing to be reminded of that: a huge open source community that we're proud to work with, and an amazing amount of innovation that you're all creating, that we're happy to work on with you and share with you as well. Thank you very much, and thank you for being here. >>Really excited to talk to you today and share more about what Docker is doing to help make you faster, make your team faster, and turn your application delivery into something that makes you a 10x team. What we're hearing from you, the developers using Docker every day, fits across three common themes that come up consistently, over and over. We hear that your time is super important.
It's critical, and you want to move faster. You want your tools to get out of your way and instead enable you to accelerate and focus on the things you want to be doing. Part of that is that finding great content, great application components that you can incorporate into your apps to move faster, is really hard. It's hard to discover, and it's hard to find high-quality content that you can trust, that you know passes your tests and your configuration needs. >>It's hard to create good content as well, and you're looking for more safety, more guardrails to guide you along the way so that you can focus on creating value for your company. Secondly, you're telling us that it's really hard to collaborate effectively with your team, and you want your tools to become more and more seamless, helping you stay in sync both with yourself across all of your development environments and with your teammates, so that you can collaborate more effectively, review each other's work, maintain things, and keep them in sync. And finally, you want your applications to run consistently in every single environment, whether that's your local development environment, a cloud-based development environment, your CI pipeline, or the cloud for production. You want your microservices to provide that consistent experience everywhere you go, so that you have similar tools and similar environments, and you don't need to worry about things getting in your way; instead, things make it easy for you to focus on what you want to do. What Docker is doing to help solve all of these problems for you and your colleagues is creating a collaborative app dev platform. >>And this collaborative application development platform consists of multiple different pieces.
I'm not going to walk through all of them today, but the overall view is that we're providing all the tooling you need, from the development environment to the container images to the collaboration services to the pipelines and integrations, to let you focus on making your applications amazing and changing the world. If we zoom in on one of those aspects, collaboration, what we hear from developers regularly is that they're challenged in synchronizing their own setups across environments. They want to be able to duplicate the setup of their teammates so they can easily get up and running with the same applications, the same tooling, the same versions of the same libraries, the same frameworks. And they want to know if their applications are good before they're ready to share them in an official space. >>They want to collaborate on things before they're done, rather than feeling like they have to officially publish something before they can effectively share it with others to work on. To solve this, we're thrilled today to announce Docker Dev Environments. Docker Dev Environments transform how your team collaborates. They make creating and sharing standardized development environments as simple as a docker pull. They make it easy to review your colleagues' work without affecting your own, and they increase the reproducibility of your own work and decrease production issues in doing so, because you've got consistent environments all the way through. Now I'm going to pass it off to our principal product manager, Ben Gotch, to walk you through more detail on Docker Dev Environments. >>Hi, I'm Ben. I work as a principal program manager at Docker.
One of the areas Docker has been looking at is what's hard today for developers in sharing the changes you make in the inner loop, where the inner loop is the part of development where you write code, test it, build it, run it, and ultimately get feedback on those changes before you merge them and actually ship them out to production. Most of us have built a flow to get there, but it still leaves a lot of challenges. People need to jump between branches to look at each other's work, dependencies can be different when you're doing that, and doing this in this new hybrid world of work isn't any easier, either. The ability to just say to someone, "Hey, come and check this out" has become much harder. People can't come and sit down at your desk or take your laptop away for 10 minutes to just grab and look at what you're doing. >>A lot of the reason development is hard when you're remote is that looking at changes and what's going on requires more than just code. It requires all the dependencies and everything you've got set up, that complete context of your development environment, to understand what you're doing, and solving that in a remote-first world is hard. We wanted to look at how we could make this better, and to do it in a way that lets you keep working the way you do today. We didn't want you to have to use a browser. We didn't want you to have to use a new IDE. And we wanted to do this in a way that was application-centric, letting you work with the rest of the application you're already using Compose for, with all the services and all the dependencies you need as part of that. With that, we're excited to talk more about Docker dev environments. Dev environments are a new part of the Docker experience that makes it easier for you to get your whole inner loop working inside a container, and then to share and collaborate on more than just the code.
>>We want to enable you to share your whole modern development environment, your whole setup from Docker, with your team, on any operating system. We'll be launching a limited beta of dev environments in the coming month, and at GA, dev environments will be IDE-agnostic and support Compose. This means you'll be able to use and extend your existing Compose files to create your own development environment in whatever IDE you work in. Dev environments are designed to be local-first: they work with Docker Desktop and your existing IDE, and they let you share that whole inner loop, that whole development context, with all of your teammates in just one click. This means that if you want to get feedback on a work-in-progress change or a PR, it's as simple as opening another IDE instance and looking at what your team is working on. Because we're using Compose, you can just extend the existing Compose file you're already working with to create this whole application and have it all working in the context of the rest of the services. >>So you're actually working with the whole environment, rather than one isolated service that doesn't make sense on its own. And with that, let's jump into a quick demo. You can see here two dev environments up and running. The first one here is a single-container dev environment. If I want to go into it, I can use the VS Code button here; if I open that one, I can get straight into my application and start making changes inside that dev container. I've got all my dependencies in here, so I can just run it straight away. The second application I have here is one that's opened up with Compose, and I can see that I've also got my backend, my frontend, and my database, so I've got all my services running here. If I want, I can open one or more of these in a dev environment, meaning that the container, the dev environment, has the context of the whole application.
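As a rough sketch of the kind of Compose file a dev environment builds on, here is a minimal multi-service setup; the service names and images are hypothetical, not the ones in the demo:

```shell
# Hypothetical docker-compose.yml like the multi-service app in the demo.
# Service names and images are illustrative only.
cat > docker-compose.yml <<'EOF'
services:
  frontend:
    image: node:14-alpine
    ports:
      - "3000:3000"
  backend:
    image: python:3.9-slim
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
EOF

# A dev environment extends a file like this, so opening one service
# still carries the context of the whole application.
grep -c 'image:' docker-compose.yml   # prints 3 (one per service)
# With a Docker daemon available: docker compose up -d
```

Because the whole stack is declared in one file, sharing the dev environment means sharing the backend, frontend, and database context together, not just one container.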
>>So I can get back in and connect to all the other services that I need to test this application properly, all of them as one unit. And then when I've made my changes and I'm ready to share, I can hit my share button, type in the repo I want to share it to, and then give that image to someone else, who can pick it up and just start working with that code and all my dependencies, as simply as pulling an image. Looking ahead, we're going to be expanding dev environments to cover more of your dependencies for the whole developer workspace. We want to look at backing up and letting you share your volumes, to make data science and database setups more repeatable, and, going forward, to bring all of this under a single workspace for your team containing your images, your dev environments, your volumes, and more. We really want to allow you to create a fully portable Linux development environment, so everyone you're working with can use it on any operating system. As I said, our MVP is coming next month, and that will be for VS Code, using their dev container primitives, with support for other IDEs to follow. To find out more about what's happening and what's coming up next, and to get a deeper dive into the experience, you can check out the talk I'm doing later on today. >>Thank you, Ben. That's an amazing story about how Docker is helping to make developer teams more collaborative. Now I'd like to talk more about applications. While the dev environment is like the workbench around what you're building, the application itself has all the different components, libraries, frameworks, and other code that make up the application. And we hear developers saying all the time things like: how do they know if their images are good? How do they know if they're secure? How do they know if they're minimal? How do they make great images and great Dockerfiles, and how do they keep their images secure?
And up to date? Every one of those questions ties into: how do I create more trust? How do I know that I'm building high-quality applications? To enable you to do this even more effectively than today, we are pleased to announce the Docker Verified Publisher program. This broadens trusted content by extending beyond Docker Official Images, to give you more and more trusted building blocks that you can incorporate into your applications. It gives you confidence that you're getting what you expect, because Docker verifies every single one of these publishers to make sure they are who they say they are. This improves our secure supply chain story. And finally, it simplifies your discovery of the best building blocks by making it easy for you to find things that you know you can trust, so that you can incorporate them into your applications and move on. On the right, you can see some examples of the publishers that are involved in Docker Official Images and our Docker Verified Publisher program. Now I'm pleased to introduce you to Marina Kubicki, our senior product manager, who will walk you through more about what we're doing to create a better experience for you around trust. >>Thank you, Dani. Mario Andretti, who is a famous Italian sports car driver, once said that if everything feels under control, you're just not driving fast enough. Mario Andretti was not a software developer, but as software developers we know that no matter how fast we need to go in order to drive the innovation we're working on, we can never allow our applications to spin out of control. At Docker, as we continue talking to the development community, what we're realizing is that in order to reach that speed, developers are looking for the building blocks and the tools that will enable them to drive at the speed they need to go, and to have trust in those building blocks
and in those tools, so that they can maintain control over their applications. So as we think about some of the things we can do to address those concerns, we're realizing that we can pursue them in a number of different venues: creating reliable content, creating partnerships that expand the options for that reliable content, and creating integrations with security tools. >>Talking about reliable content, the first thing that comes to mind is Docker Official Images, a program we launched several years ago. This is a set of curated, actively maintained, open source images that includes operating systems, databases, and programming languages, and it has become immensely popular for creating the base layers of different images and applications. What we're realizing is that many developers, instead of creating something from scratch, basically start with one of the Official Images as their base and then build on top of that. This program has become so popular that it now makes up a quarter of all Docker pulls, which ends up being several billion pulls every single month. >>As we look beyond what we can do on the open source side of the spectrum, we are very excited to announce that we're launching the Docker Verified Publisher program, which continues providing trust around content, but now works with some of the industry leaders in multiple verticals across the entire technology spectrum, in order to provide you with more options for the images you can use to build your applications.
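As a minimal sketch of that base-layer pattern, here is what building on an Official Image rather than from scratch looks like; the app file and image tag are illustrative assumptions, not from the talk:

```shell
# Illustrative Dockerfile that builds on a Docker Official Image
# instead of starting from scratch. File and image names are hypothetical.
cat > Dockerfile <<'EOF'
FROM python:3.9-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
EOF

grep '^FROM' Dockerfile   # the curated base layer this image starts from
# With a Docker daemon available: docker build -t myorg/myapp .
```

The `FROM` line is where the trust story starts: everything above it in the final image comes from the curated, actively maintained Official Image.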
And it still comes back to trust: when you are searching for content in Docker Hub and you see the verified publisher badge, you know that this content comes from one of our partners, and you're not running the risk of pulling a malicious image from an impostor source. >>As we look beyond what we can do for providing reliable content, we're also looking at some of the tools and infrastructure we can build to create security around the content you're creating. So at last year's DockerCon, we announced our partnership with Snyk, and later in the year we launched our Docker Desktop and Docker Hub vulnerability scans, which give you the option of running scans at multiple points in your dev cycle. In addition to providing you with information on the vulnerabilities in your code, they also provide you with guidance on how to remediate those vulnerabilities. But looking beyond the vulnerability scans, we're also looking at other things we can do to further ensure the integrity and security of your images. With that, later this year we're looking to launch scoped personal access tokens, and instead of talking about them, I will simply show you what they look like. >>So you can see here, this is my page in Docker Hub, where I've created four tokens: read-write-delete, read-write, read-only, and public-repo read-only. Earlier today I went in and logged in with my read-only token. As you see, when I go to pull an image, it allows me to pull the image, no problem: success. And for the next step, I'm going to try to push an image to the same repo.
What you'll see is that it gives me an error message saying that access is denied, because additional authentication is required. So these are the things we're looking to add to our roadmap as we continue thinking about what we can do to provide additional content building blocks and tools to build trust, so that our Docker developers can code faster than Mario Andretti could ever imagine. Thank you. >>Thank you, Marina. It's amazing what you can do to improve the trusted content so that you can accelerate your development, move more quickly, move more collaboratively, and build upon the great work of others. Finally, we hear over and over that as developers work on their applications, they're looking for environments that are consistent, that are the same as production, and they want their applications to really run anywhere: any environment, any architecture, any cloud. One great example is the recent announcement of Apple Silicon. We heard from developers, in an uproar, that they needed Docker to be available for that architecture before they could adopt it and be successful. And we listened. Based on that, we are pleased to share with you Docker Desktop on Apple Silicon. This enables you to run your apps consistently anywhere, whether that's developing on your team's latest dev hardware, deploying in ARM-based cloud environments with a consistent architecture across your development and production, or using multi-architecture support, which enables your whole team to collaborate on an application using private repositories on Docker Hub. I'm thrilled to introduce you to Hughie Cower, senior director for product management, who will walk you through more of what we're doing to create a great developer experience. >>Hi, I'm the senior director of product management at Docker, and I'd like to jump straight into a demo.
This is the Mac mini with the Apple Silicon processor, and I want to show you how you can now do an end-to-end ARM workflow, from my M1 Mac mini to a Raspberry Pi. As you can see, we have VS Code and Docker Desktop installed on the Mac mini. I have a small example here: a Raspberry Pi 3 with an LED strip, and I want to turn those LEDs into a moving rainbow. This Dockerfile here builds the application. We build the image with the docker buildx command to make the image compatible with all Raspberry Pis using ARM64, and this build runs with the native power of the M1 chip. I also add the push option to easily share the image with my team so they can give it a try too. >>Docker creates the local image with the application and uploads it to Docker Hub. After we've built and pushed the image, we can go to Docker Hub and see the new image there. You can also explore a variety of images that are compatible with ARM processors. Now let's go to the Raspberry Pi. I have Docker already installed, and it's running 64-bit Ubuntu. With the docker run command, I can run the application, and let's see what happens. You can see Docker downloading the image automatically from Docker Hub, and when it's running, if it works right, there are some nice colors. And with that, we have an end-to-end workflow for ARM. We're continuing to invest in providing you a great developer experience that's easy to install and easy to get started with, as you saw in the demo. If you're interested in the new Mac mini, or in developing for ARM platforms in general, we've got you covered, with the same experience you've come to expect from Docker and over 95,000 ARM images on Hub, including many Docker Official Images. We think you'll find what you're looking for. Thank you again to the community that helped us test the tech previews.
We're so delighted to hear when folks say that the new Docker Desktop for Apple Silicon just works for them. But that's not all we've been working on. As Dani mentioned, consistency of developer experience across environments is really important. We're introducing Compose V2, which makes Compose a first-class citizen in the Docker CLI: you no longer need to install a separate Compose binary in order to use Compose. Deploying to production is simpler than ever with the new Compose integration that enables you to deploy directly to Amazon ECS or Azure ACI with the same methods you use to run your application locally. And if you're interested in running slightly different services when you're debugging versus testing, or just in general development, you can manage all of that in one place with the new Compose service profiles. To hear more about what's new in Docker Desktop, please join me in the 3:15 breakout session this afternoon. >>And now I'd love to tell you a bit more about Buildx and convince you to try it, if you haven't already. It's our next-gen build command, and it's no longer experimental. As shown in the demo, with Buildx you'll be able to do multi-architecture builds and share those builds with your team and the community on Docker Hub. With Buildx, you can speed up your build processes with remote caches, or build all the targets in your Compose file in parallel with Buildx bake. And there's so much more. If you're using Docker Desktop or Docker CE, you can use Buildx. Check out Tõnis's talk this afternoon at 3:45 to learn more about Buildx. And with that, I hope everyone has a great DockerCon, and back over to you, Dani. >>Thank you, Hughie. It's amazing to hear about what we're doing to create a better developer experience and make sure that Docker works everywhere you need to work.
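As a rough sketch of the Buildx multi-architecture flow shown in the demo, the commands look roughly like this; the builder and image names are hypothetical, and a running Docker daemon with Buildx is assumed:

```shell
# Sketch of a multi-arch build with Buildx. Builder and image names
# are made up; requires Docker with Buildx and a Docker Hub login.

# Create and select a builder that can target multiple platforms.
docker buildx create --name multiarch --use

# Build for x86_64 and ARM64 in one shot and push the multi-arch
# image, so a laptop and a Raspberry Pi can pull the same tag.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myuser/leds:latest \
  --push .
```

On the Raspberry Pi side, a plain `docker run myuser/leds:latest` then pulls the ARM64 variant of the tag automatically, which is what makes the end-to-end demo above work.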
Finally, I'd like to wrap up by showing you everything that we've announced today and everything we've done recently to make your lives better and give you more and more for the single price of your Docker subscription. We've announced the Docker Verified Publisher program. We've announced scoped personal access tokens, to make it easier for you to have a secure CI pipeline. We've announced Docker Dev Environments, to improve your collaboration with your team. We shared Docker Desktop on Apple Silicon, to make sure that Docker runs everywhere you need it to run. And we've announced Docker Compose version 2, finally making it a first-class citizen amongst all the other great Docker tools. And we've done so much more recently as well, from audit logs to advanced image management to Compose service profiles, to improve where and how you can run Docker more easily. >>Finally, as we look forward, where we're headed in the upcoming year is continuing to invest in these themes of helping you build, share, and run modern apps more effectively. We're going to be doing more to help you create a secure supply chain, which only grows more important as time goes on. We're going to be optimizing your update experience, to make sure you can easily understand the current state of your application and all its components, and keep them all current without worrying about breaking everything as you do so. We're going to make it easier for you to synchronize your work using cloud sync features. We're going to improve collaboration through dev environments and beyond. And we're going to make it easy for you to run your microservices in your environments without worrying about things like architecture differences between those environments. Thank you so much. I'm thrilled about what we're able to do to help make your lives better.
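As a rough sketch of how the scoped personal access tokens shown in Marina's demo behave from the CLI: the feature was announced as upcoming, so details may change, and the username, repo, and token variable here are all made up:

```shell
# Hypothetical session with a read-only scoped token (roadmap feature,
# so behavior is illustrative). Requires a Docker daemon and a Hub account.
echo "$READ_ONLY_TOKEN" | docker login -u myuser --password-stdin

docker pull myuser/myrepo:latest   # allowed: the read scope covers pulls
docker push myuser/myrepo:latest   # denied: this token grants no write scope
```

Using a read-only token in a CI pipeline means that even if the token leaks, it cannot be used to push a compromised image, which is the supply-chain benefit being described.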
And now you're going to be hearing from one of our customers about what they're doing to launch their business with Docker. >>I'm Matt Falk, the head of engineering at Orbital Insight, and today I want to talk to you a little bit about data from space. So, who am I? Like many of you, I'm a software developer, a software developer at about seven companies so far, and now a head of engineering. So I spend most of my time in meetings, but occasionally I'll still spend time on design discussions and code reviews, and in my free time I still like to dabble in things like Project Euler. So who is Orbital Insight, and what do we do? Orbital Insight is a large data supplier and analytics provider: we take geospatial data from anywhere on the planet, from any overhead sensor, and translate it into insights for the end customer. Specifically, we have a suite of high-performance artificial intelligence and machine learning analytics that run on this geospatial data. >>And we build them specifically to determine natural and human surface-level activity anywhere on the planet. What that really means is that we take any type of data associated with a latitude and longitude, and we identify patterns so that we can detect anomalies. Everything that we do is all about identifying those patterns to detect anomalies. So, more specifically, what types of problems do we solve? Supply chain intelligence is one of the use cases that we like to talk about a lot. It's one of our main primary verticals that we go after right now, and as Scott mentioned earlier, this had a huge impact last year when COVID hit. Supply chain intelligence is all about identifying movement patterns to and from operating facilities to identify changes in those supply chains. How do we do this? For us, we can do things like tracking the movement of trucks.
>>So: identifying trucks moving from one location to another, in aggregate. We can do the same thing with foot traffic, looking at aggregate groups of people moving from one location to another and analyzing their patterns of life. We can look at two different locations to determine how people are moving between them, going back and forth. All of this is extremely valuable for detecting how a supply chain operates and then identifying the changes to that supply chain. As I said, last year with COVID everything changed; supply chains in particular changed incredibly, and it was hugely important for customers to know where their goods or products were coming from and where they were going, where there were disruptions in their supply chain, and how that was affecting their overall supply and demand. So using our platform and our suite of tools, you can start to get a much better picture of where your suppliers or your distributors are coming from or going to. >>So what does our team look like? My team is currently about 50 engineers, spread across four teams, and the teams are structured like this. The first team we have is infrastructure engineering, and this team largely deals with deploying our Docker containers using Kubernetes. So this team is all about taking Docker images built by other teams, sometimes building the images themselves, and putting them into our production system, our platform. Our platform engineering team produces these microservices: they produce microservice Docker images, and they develop and test with them locally. Their entire environments are dockerized. They produce these images and hand them over to infrastructure engineering to be deployed. Similarly, our product engineering team does the same thing: they develop and test with Docker locally, and they also produce a suite of Docker images that the infrastructure team can then deploy.
And lastly, we have our R&D team, and this team specifically produces machine learning algorithms using NVIDIA Docker. Collectively, we've actually built 381 Docker repositories and had 14 million Docker pulls over the lifetime of the company; just a few stats about us. But what I'm really getting at here is that you can see Docker images becoming almost a form of communication between these teams. One of the paradigms in software engineering that you're probably familiar with is encapsulation: for a lot of software engineering problems, it's really helpful to break the problem down, isolate the different pieces of it, and start building interfaces between the code. This allows you to scale different pieces of the platform or your code in different ways, scaling up certain pieces and keeping others at a smaller level, so that you can meet customer demands. And for us, one of the things we can largely do now is use Docker images as that interface. So instead of having an entire platform where all teams are talking to each other and everything's kind of mishmashed into a monolithic application, we can now say this team only talks to that team by passing over a particular Docker image that defines the interface of what needs to be built, and that really allows us to scale our development and be much more efficient. >>Also, I'd like to say we are hiring. We have a number of open roles, about 30 in our engineering team, that we're looking to fill by the end of this year. So if any of this sounds interesting to you, please reach out after the presentation. >>So what does our platform actually do? Our platform allows you to answer any geospatial question, and we do this with three different inputs. First off: where do you want to look? We do this with what we call an AOI, or area of interest. You can think of this as a polygon drawn on the map.
So we have a curated data set of almost 4 million AOIs, which you can search and use for your analysis, but you're also free to build your own. The second question is: what do you want to look for? We answer this with the more interesting part of our platform, our machine learning and AI capabilities. We have a suite of algorithms that automatically allow you to identify trucks, buildings, hundreds of different types of aircraft, different types of land use, how many people are moving from one location to another, and the different locations that people in a particular area are moving to or coming from. All of these different analytics are available at the click of a button, and they determine what you're looking for. >>Lastly, you determine when you want to find what you're looking for. Do you want to look at the next three hours? The last week? Every month for the past two? Whatever the time cadence is, you decide, you hit go, and out pops a time series. That time series tells you, for where you wanted to look and what you wanted to look for, how many of the thing you're looking for, or what percentage of it, appears in that area. Again, we do all of this to work towards patterns. We use all this data to produce a time series; from there, we can look at it, determine the patterns, and then specifically identify the anomalies. As I mentioned with supply chains, this is extremely valuable for identifying where things change. So we can answer these questions looking at a particular operating facility: what the level of activity is at that facility, where people are coming from and going to after visiting it, and when and where that changes. Here you can just see a picture of our platform.
It's actually showing all the devices in Manhattan over a period of time, in more of a heat map view, so you can actually see the hotspots in the area. >>So really, and this is the heart of the talk: what happened in 2020? For me, like many of you, 2020 was a difficult year. COVID hit, and that changed a lot of what we were doing, not just from an engineering perspective but from an entire company perspective. For us, the motivation became making sure that we were lowering our costs and increasing innovation simultaneously. Now, those two things often compete with each other: a lot of times, increasing innovation is going to increase your costs. The challenge last year was how to do both at once. So here are a few stats from our team. In Q1 of last year, we were spending almost $600,000 per month on compute costs. Prior to COVID, that wasn't hugely a concern for us; it was a lot of money, but it wasn't as critical as it became last year, when we really needed to be much more efficient. >>The second one is flexibility. We were deployed in a single cloud environment, and while we were built to be cloud-ready, we wanted to be more flexible. We wanted to be in more cloud environments so that we could reach more customers, and eventually get onto classified networks as well, extending our customer base. From a custom analytics perspective, this is where we get into our traction: over the entire last year, we computed 54,000 custom analytics for different users. We wanted to make sure that this number kept steadily increasing despite us trying to lower our costs; we didn't want lowering costs to come at the sacrifice of our user base. Lastly, one particular percentage here that I'll say definitely needed to be improved: 75% of our projects never fail. So this is where we start to get into the stability of our platform.
>>Now I'm not saying that 25% of our projects fail. The way we measure this is: if you have a particular project or computation that runs every day and any one of those runs fails, we count that as a failure, because from an end-user perspective, that's an issue. So this is something that we knew we needed to improve on; we needed to grow and make our platform more stable, and that is something we really focused on last year. So where are we now? Now, coming out of the COVID valley, we are starting to soar again. Back in April of last year, we actually paused all development for about four weeks and had the entire engineering team focused on reducing our compute costs in the cloud. We got it down to 200K over the period of a few months. >>And for the next 12 months, we hit that number every month. This is huge for us. This is extremely important; like I said, in the COVID time period, costs and operating efficiency were everything. So for us to do that, that was a huge accomplishment last year and something we'll keep going forward. One thing I would actually like to really highlight here is what allowed us to do that. So first off, being in the cloud, being able to migrate things like that, that was one thing. We were able to use different cloud services in a more efficient way. We had very detailed tracking of how we were spending things. We increased our data retention policies. We optimized our processing. However, one additional piece was switching to new technologies; in particular, we migrated to GitLab CI/CD. And this is something that, because we use Docker, was extremely, extremely easy. We didn't have to go build new code, containers, or repositories, or change our code in order to do this. We were simply able to migrate the containers over and start using the new CI/CD.
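A migration like the one described, pointing a new CI system at existing Docker images, often needs little more than a job definition of this shape. The following `.gitlab-ci.yml` is a hypothetical minimal sketch; the image name and test command are placeholders, not the company's actual configuration:

```yaml
# Minimal illustrative GitLab CI pipeline reusing an existing Docker
# image as the job environment -- the reason such migrations are cheap.
stages:
  - test

unit-tests:
  stage: test
  # The same container image the team already builds and ships.
  image: registry.example.com/analytics/base:latest
  script:
    - pytest tests/
```

Because the jobs run inside the team's existing containers, the application code and dependencies need no changes; only this pipeline file is new.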
In fact, we were able to do that migration with three engineers in just two weeks. From a cloud environment and flexibility standpoint, we're now operating in two different clouds. We were able, over the last nine months, to get operating in the second cloud environment. And again, this is something that Docker helped with incredibly. We didn't have to go and build all new interfaces to all the different services or tools in the next cloud provider. All we had to do was build a base cloud infrastructure that abstracts away all the different details of the cloud provider. >>And then our Docker containers just worked. We can move them to another environment, they're up and running, and our platform is ready to go. From a traction perspective, we're about a third of the way through the year, and at this point we've already exceeded the amount of custom analytics we produced last year. And this is thanks to a ton more algorithms, that whole suite of new analytics that we've been able to build over the past 12 months, and we'll continue to build going forward. So this is a really, really great outcome for us, because we were able to show that our costs are staying down while our analytics and our customer traction keep growing. Honestly, from a stability perspective, we improved from 75% to 86%. Not quite yet 99%, or three nines or four nines, but we are getting there. And this is actually thanks to really containerizing and modularizing different pieces of our platform so that we could scale up in different areas. This allowed us to increase that stability: this piece of the code works over here and talks through an interface to the rest of the system, we can scale this piece up separately from the rest of the system, and that allows us to much more easily identify issues in the system, fix those, and then correct the system overall.
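The "base cloud infrastructure that abstracts away the cloud provider" idea can be sketched as a small provider-neutral interface that application code programs against, so the same container runs on any backend. The class and method names here are hypothetical, not the company's actual code:

```python
# Provider-neutral storage interface: application code never imports
# a cloud SDK directly, so containers run unchanged on any backend.
# All names are illustrative assumptions.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Blob storage as seen by the analytics containers."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend; real deployments would wrap S3, GCS, etc."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def run_analytic(store: ObjectStore) -> bytes:
    # The analytic only sees the interface, never the provider.
    store.put("aoi/manhattan/trucks.csv", b"day,count\n1,50\n")
    return store.get("aoi/manhattan/trucks.csv")

print(run_analytic(InMemoryStore()))
```

Swapping cloud providers then means registering a different `ObjectStore` implementation behind the same interface, while the Docker images themselves stay untouched.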
So basically this is a summary of where we were last year, where we are now, and how much more successful we are now because of the issues that we went through last year, largely brought on by COVID. >>This is just a screenshot of our solution actually working on supply chain. In particular, it is showing traceability of a distribution warehouse in Salt Lake City. It's right in the center of the screen here; you can see the nice kind of orange-red center. That's the distribution warehouse, and all the lines and dots outside of that are showing where people and trucks are moving from that location. So this is really helpful for supply chain companies, because they can start to identify where their suppliers are coming from or where their distributors are going to. So with that, I want to say thanks again for following along, and enjoy the rest of DockerCon.
An Absolute Requirement for Precision Medicine Humanized Organ Study
>>Hello everybody. I am Toshihiko Nishimura from Stanford University. Today, with super-aging societies and global transportation, infections are a major point of concern. In addition, this year we have the COVID-19 pandemic. As you can see here, while new COVID-19 patients worldwide are still increasing, the case count per day in the United States is beginning to decrease. This pandemic has pushed our daily life toward digital transformation: even today, sessions like this are conducted online, and doctor and nurse care is increasingly moving to telemedicine. Likewise, the drug development process is in need of a major paradigm shift; vaccine and drug development, especially for COVID-19, should be safe, effective, and faster. >>In the Anesthesia department, which is the biggest department in the school of medicine, we have the Stanford lab for drug and device development and regulatory science, so-called DDDRS. The chairman is Ron Paul, and the lab's leaders are long mysel and stable shaper. In drug development, we have three major pains: exceedingly long duration, often 20 years; a huge budget; and a very low success rate. As a general overview of drug development, there are the discovery, preclinical, and clinical stages, as you see here. In each stage, what are the programs in DDDRS? Omics programs, single-cell programs, big data, machine learning, deep learning, AI, mathematics and statistics programs, the humanized animal program, the SNS program, and the engineering program. And we have an annual symposium. In today's talk, I would like to explain the limitations of mouse science and the significance of humanized mouse science. Out of our separate programs, I focus on the humanized program. I believe this program is a potent game changer for drug development. The mouse: when we think of animal experiments, many people immediately think of the mouse.
We have more than 30 kinds of inbred wild types, such as C57 black, KK, BALB/c white, and so on. Using QA/QC-defined wild-type mice, 18 of them, each given only one intervention, and using mouse genomics analyzed with computational genetics, we succeeded in fishing out one single gene in a week. >>We have another category of gene-manipulated mice: transgenic, knockout, and knock-in models. So far, over 40,000 kinds are registered as of today. The preclinical requirements from the FDA and PMDA are based on two kinds of animal models showing safety and efficacy: combinations of two animals, such as mouse and swine, or mouse and non-human primate, and so on. Mice are very popular. Why? Because mice are small enough, easy to handle, cost effective, and we have a big database for them. However, this comes with a low success rate. Why?
We started humanized mice program. What kind of human animals? We created one humanized, immune mice. The other is human eyes, DBA, mice. What is the definition of a humanized mice? They should have human gene or human cells or human tissues or human organs. Well, let me share one preclinical stages. Example of a humanized mouse that is polio receptor mice. This problem led by who was my mentor? Polio virus. Well, polio virus vaccine usually required no human primate to test in 13 years, collaboration with the FDA w H O polio eradication program. Finally FDA well as w H O R Purdue due to the place no human primate test to transgenic PVL. This is three. Our principle led by loss around the botch >>To move before this humanized mouse program, we need two other bonds donut outside your science, as well as the CPN mouse science >>human hormone, like GM CSF, Whoah, GCSF producing or human cytokine. those producing emoji mice are required in the long run. Two maintain human cells in their body under generation here, South the generation here, Dr. already created more than 100 kinds based on Z. The 100 kinds of Noe mice, we succeeded to create the human immune mice led the blood. The cell quite about the cell platelets are beautifully constituted in an mice, human and rebar MAs also succeeded to create using deparent human base. We have AGN diva, humanized mouse, American African human nine-thirty by mice co-case kitchen, humanized mice. These are Hennessy humanized, the immune and rebar model. On the other hand, we created disease rebar human either must to one example, congenital Liba disease, our guidance Schindel on patient model. >>The other model, we have infectious DDS and Waddell council Modell and GVH Modell. And so on creature stage or phase can a human itemize apply. Our objective is any stage. Any phase would be to, to propose. We propose experiment, pose a compound, which showed a huge discrepancy between. 
FIAU showed this huge discrepancy. FIAU is a nucleoside analog and a potent anti-hepatitis B candidate. In the preclinical stage, it didn't show any toxicity in mice, rat, dog, or non-human primate. On the other hand, going into the clinical stage, in phase two, out of 15 subjects, five people died and the other 10 showed very severe conditions. 
>>The reason is that neither the traditional mice models nor the other animal models predicted this severe outcome. The humanized liver mouse model, however, demonstrated it within a few days, in both the chemistry data and the pathophysiology data. Phase two and phase three require a huge number of human subjects. For example, in COVID-19 vaccine development by Pfizer and AstraZeneca and Moderna today, the sample sizes are in the tens of thousands. In vaccine development for COVID-19, Sinovac and CanSino in China, Novavax for the US, AstraZeneca and Johnson & Johnson in the United Kingdom and the US, and Osaka University in Japan are already in phase two. Industry can make the discovery, preclinical, and regulatory stages faster. However, the clinical stage is a tedious road, because those phases require a huge number of human subjects, 9,000 to 30,000 or even more. My conclusion: a humanized mouse model shortens the duration of drug development, and humanized mouse models can increase the success rate of drug development. Thank you to Ron Paul and Steven YALI pelt at Stanford and his team, and other colleagues. Thank you for listening.
A Cardiovascular Bio Digital Twin
>> Hello, welcome to the final day of the NTT Research Summit Upgrade 2020. My name is Joe Alexander and I belong to the Medical and Health Informatics lab, so-called MEI lab, and I lead the development of the bio digital twin. I'd like to give you a high level overview of what we mean by bio digital twin, what some of our immediate research targets are, and a description of our overall approach. You will note that my title is not simply bio digital twin, but more specifically a cardiovascular bio digital twin, and you'll soon understand why. What do we mean by digital twin? For our project, we're taking the definition and approach used in commercial aviation, mostly for predictive maintenance of jet engines. A digital twin is an up-to-date virtual representation, an electronic replica if you will, of a thing, which gives you real-time insight into the status of the real-world asset to enable better management and to inform decision-making. It aims to merge the real and the virtual world. It enables one to design, simulate, and verify products digitally, including mechanics and multi-physics. It allows integration of complex systems. It allows for predictive maintenance through direct real-time monitoring of the health and structure of the plane parts, mitigating danger. It enables monitoring of all machines anywhere at all times. This allows feeding back insights to continuously optimize the digital twin of the product, which in turn leads to continuous improvement of the product in the real world. A robust platform is needed for digital twins to live, learn and run. Because we aim to apply these concepts to biological systems for predictive maintenance of health, we use the term bio digital twin. We're aiming for precision medicine and predictive health maintenance. And while ultimately we intend to represent multiple organ systems and the diseases affecting them, we will start with the cardiovascular system.
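The core loop of a digital twin, real-world telemetry continuously updating a virtual replica that feeds predictions back, can be sketched minimally as follows. All names and the alert rule are illustrative assumptions, not NTT's design:

```python
# Minimal digital-twin sync loop: readings from the real asset update
# the virtual state, and the twin feeds a prediction back.
# Field names and the threshold are illustrative assumptions.
class DigitalTwin:
    def __init__(self):
        self.state = {}  # latest mirrored measurements

    def ingest(self, telemetry: dict):
        """Merge a real-world reading into the virtual replica."""
        self.state.update(telemetry)

    def predict_alert(self) -> bool:
        """Trivial stand-in for the predictive-maintenance logic."""
        return self.state.get("heart_rate", 0) > 120

twin = DigitalTwin()
twin.ingest({"heart_rate": 80, "systolic_bp": 118})
twin.ingest({"heart_rate": 130})   # new reading arrives
print(twin.predict_alert())        # True
```

A real system would replace the dictionary with a physiological model and the threshold with model-based prediction, but the ingest-update-predict cycle is the same shape.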
When we revisit concepts from the last slide, there's the one-to-one mapping as you can see on this slide. A cardiovascular bio digital twin is an up-to-date virtual representation as well, but of a cardiovascular system, which gives you real-time insight into the status of the cardiovascular system of a real-world patient to enable better care management and to inform clinical decision-making. It does so by merging the real and virtual world. It enables one to design, simulate, and verify drug and device treatments digitally, including cardiovascular mechanics and multi-physics. It allows integration of complex organ systems. It allows for predictive maintenance of health care through direct real-time monitoring of the health and functional integration, or excuse me, functional integrity of body parts, mitigating danger. It enables monitoring of all patients anywhere at all times. This allows feedback to continuously optimize the digital twins of subjects, which in turn leads to continuous improvements to the health of subjects in the real world. Also a robust platform is needed for digital twins to live, learn, and run. One platform under evaluation for us is called embodied bio-sciences. And it is a cloud-based platform leveraging AWS distributed computing, database, and queuing solutions. There are many cardiovascular diseases that might be targeted by a cardiovascular bio digital twin. We have chosen to focus on the two most common forms of heart failure, and those are ischemic heart failure and hypertensive heart failure. Ischemic heart failure is usually due to coronary artery disease and hypertensive heart failure usually is secondary to high blood pressure. By targeting heart failure, number one, it forces us to automatically incorporate biological mechanisms common to many other cardiovascular diseases. And two, heart failure is an area of significant unmet medical need, especially given the world's aging population.
The prevalence of heart failure is estimated to be one to one and a half, I'm sorry, one to 5% in the general population. Heart failure is a common cause of hospitalization. The risk of heart failure increases with age. About a third to a half of the total number of patients diagnosed with heart failure have a normal ejection fraction. Ischemic heart failure occurs in the setting of an insult to the coronary arteries causing atherosclerosis. The key physiologic mechanisms of ischemic heart failure are increased myocardial oxygen demand in the face of a limited myocardial oxygen supply. And hypertensive heart failure is usually characterized by complex myocardial alterations resulting from the response to stress imposed on the left ventricle by a chronic increase in blood pressure. In order to achieve precision medicine, or optimized and individualized therapies for heart failure, we will develop three computational platforms over a five-year period: a neuro-hormonal regulation platform, a mechanical adaptation platform, and an energetics platform. The neuro-hormonal platform is critical for characterizing a fundamental feature of chronic heart failure, which is neuro-humoral activation and alterations in regulatory control by the autonomic nervous system. We will also develop a mechanical adaptation and remodeling platform. Progressive changes in the mechanical structure of the heart, such as thickening or thinning of its muscular walls in response to changes in workload, are directly related to future deterioration in cardiac performance and heart failure. And we'll develop an energetics platform, which includes the model of the coronary circulation, that is, the blood vessels that supply the heart organ itself, and will thus provide a mechanism for characterizing the imbalances between the oxygen and metabolic requirements of cardiac tissues and their lack of availability due to neuro-hormonal activation and heart failure progression.
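Platforms of this kind are often built from lumped-parameter components. As a hedged illustration, not the lab's actual equations, a two-element Windkessel model treats arterial pressure P as a balance between pulsatile inflow Q(t) and runoff through a peripheral resistance R, buffered by arterial compliance C: C dP/dt = Q(t) - P/R. A forward-Euler integration with textbook-scale placeholder parameters:

```python
# Two-element Windkessel: C * dP/dt = Q(t) - P/R, integrated with
# forward Euler. All parameter values are illustrative assumptions.
import math

R, C = 1.0, 1.5      # peripheral resistance, arterial compliance
dt, T = 0.001, 3.0   # time step (s), total duration (s)

def inflow(t):
    # Pulsatile inflow: systolic ejection in the first 0.3 s of each 1 s beat.
    phase = t % 1.0
    return 400 * math.sin(math.pi * phase / 0.3) if phase < 0.3 else 0.0

P = 80.0             # initial arterial pressure (mmHg)
for i in range(int(T / dt)):
    t = i * dt
    P += dt * (inflow(t) - P / R) / C
print(round(P, 1))   # pressure after 3 s of simulated beats
```

Real cardiovascular platforms couple many such compartments (ventricles, coronary circulation, baroreflex control), but each compartment reduces to coupled ODEs of this form.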
We considered the landscape of other organizations pursuing innovative solutions that may be considered cardiovascular bio digital twins, according to a similar definition or conceptualization as ours. Some are companies, like the UT Heart, Siemens Healthineers, and Computational Life. Some are academic institutions, like the Johns Hopkins Institute for Computational Medicine and the Washington University Cardiac Bio Electricity and Arrhythmia Center. And then some are consortia, such as echos, which stands for enhanced cardiac care through extensive sensing, and that's a consortium of academic and industrial partners. These other organizations have different aims of course, but most are focused on cardiac electrophysiology and disorders of cardiac rhythm. Most use both physiologically based and data-driven methods, such as artificial intelligence and deep learning. Most are focused on the heart itself without robust representations of the vascular load, and none implement neuro-hormonal regulation or mechanical adaptation and remodeling, nor aim for the ultimate realization of closed-loop therapeutics. By autonomous closed-loop therapeutics, I mean using the cardiovascular bio digital twin not only to predict cardiovascular events and determine optimal therapeutic interventions for maintenance of health or for disease management, but also to actually deliver those therapeutic interventions. This means not only the need for smart sensors, but also for smart actuators, smart robotics, and various nanotechnology devices. Going back to my earlier comparisons to commercial aviation, autonomous closed-loop therapeutics means not only maintenance of the plane and its parts, but also the actual flying of the plane on autopilot. In the beginning, we'll include the physician pilots in the loop, but the ultimate goal is an autonomous bio digital twin system for the cardiovascular system.
The goal of realizing autonomous closed-loop therapeutics in humans is obviously a longer-term goal. We're expecting to demonstrate that first in animal models. And our initial thinking was that this demonstration would be possible by the year 2030, that is, in 10 years. As of this month, we are planning ways of reaching this target even sooner. Finally, I would also like to add that by setting our aims at such a highly ambitious target, we drive the quality and accuracy of all milestones along the way. Thank you. This concludes my presentation. I appreciate your interest and attention. Please enjoy the remaining sessions, thank you.
Full Keynote Hour - DockerCon 2020
(water running) (upbeat music) (electric buzzing) >> Fuel up! (upbeat music) (audience clapping) (upbeat music) >> Announcer: From around the globe. It's theCUBE with digital coverage of DockerCon live 2020, brought to you by Docker and its ecosystem partners. >> Hello everyone, welcome to DockerCon 2020. I'm John Furrier with theCUBE, I'm in our Palo Alto studios with our quarantine crew. We have a great lineup here for DockerCon 2020. Virtual event, normally it was in person face to face. I'll be with you throughout the day with an amazing lineup of content, over 50 different sessions, cube tracks, keynotes, and we've got two great co-hosts here with Docker, Jenny Burcio and Bret Fisher. We'll be with you all day today, taking you through the program, helping you navigate the sessions. I'm so excited. Jenny, this is a virtual event. We talk about this. Can you believe it? May the internet gods be with us today and hope everyone's having-- >> Yes. >> Easy time getting in. Jenny, Bret, thank you for-- >> Hello. >> Being here. >> Hey. >> Hi everyone, so great to see everyone chatting and telling us where they're from. Welcome to the Docker community. We have a great day planned for you. >> Guys great job getting this all together. I know how hard it is. These virtual events are hard to pull off. I'm blown away by the community at Docker. The amount of sessions that are coming in, the sponsor support has been amazing. Just the overall excitement around the brand and the opportunities given these tough times we're in. It's super exciting, again, may the internet gods be with us throughout the day, but there's plenty of content. Bret's got an amazing all day marathon group of people coming in and chatting. Jenny, this has been an amazing journey and it's a great opportunity. Tell us about the virtual event. Why DockerCon virtual. Obviously everyone's canceling their events, but this is special to you guys. Talk about DockerCon virtual this year. 
>> The Docker community shows up at DockerCon every year, and even though we didn't have the opportunity to do an in person event this year, we didn't want to lose the time that we all come together at DockerCon. The conversations, the amazing content and learning opportunities. So we decided back in December to make DockerCon a virtual event. And of course when we did that, there was no quarantine; we didn't expect, you know, I certainly didn't expect to be delivering it from my living room, but we were just, I mean we were completely blown away. There's nearly 70,000 people across the globe that have registered for DockerCon today. And when you look at DockerCon live events of the past, really, they are just the tip of the iceberg, and we're so thrilled to be able to deliver a more inclusive global event today. And we have so much planned, I think. Bret, you want to tell us some of the things that you have planned? >> Well, I'm sure I'm going to forget something 'cause there's a lot going on. But, we've obviously got interviews all day today on this channel with John and the crew. Jenny has put together an amazing set of all these speakers, and then you have the captain's on deck, which is essentially the YouTube live hangout where we just basically talk shop. It's all engineers, all day long. Captains and special guests. And we're going to be in chat talking to you, answering your questions. Maybe we'll dig into some stuff based on the problems you're having or the questions you have. Maybe there'll be some random demos, but it's basically not scripted, it's an all day long unscripted event. So I'm sure it's going to be a lot of fun hanging out in there. >> Well guys, I want to just say it's been amazing how you structured this so everyone has a chance to ask questions, whether it's informal laid back in the captain's channel or in the sessions, where the speakers will be there with their presentations. 
But Jenny, I want to get your thoughts because we have a site out there that's structured a certain way for the folks watching. If you're on your desktop, there's a main stage hero. There's then tracks and Bret's running the captain's tracks. You can click on that link and jump into his session all day long. He's got an amazing lineup, leaning back, having a good time. And then each of the tracks, you can jump into those sessions. It's on a clock, it'll be available on demand. All that content is available if you're on your desktop. If you're on your mobile, it's the same thing. Look at the calendar, find the session that you want. If you're interested in it, you could watch it live and chat with the participants in real time or watch it on demand. So there's plenty of content to navigate through. We do have it on a clock and we'll be streaming sessions as they happen. So you're in the moment and that's a great time to chat in real time. But there's more, Jenny, getting more out of this event. You guys try to bring together the stimulation of community. How do the participants get more out of the event besides just consuming some of the content all day today? >> Yes, so first set up your profile, put your picture next to your chat handle and then chat. John said we have various setups today to help you get the most out of your experience. Our breakout sessions: the content is prerecorded, so you get quality content, and the speakers are in chat so you can ask questions the whole time. If you're looking for the hallway track, then definitely check out the captain's on deck channel. And then we have some great interviews all day on theCUBE. So set up your profile, join the conversation and be kind, right? This is a community event. Code of conduct is linked on every page at the top, and just have a great day. >> And Bret, you guys have an amazing lineup on the captains, so you have a great YouTube channel that you have your stream on. 
So the folks who were familiar with that can get that either on YouTube or on the site. The chat is integrated in. So you're set up, what do you got going on? Give us the highlights. What are you excited about throughout your day? Take us through your program on the captains. That's going to be probably pretty dynamic in the chat too. >> Yeah, so I'm sure we're going to have lots of stuff going on in chat. So no concerns there about having crickets in the chat. But we're going to be basically starting the day with two of my good Docker captain friends, (murmurs) and Laura Tacho. And we're going to basically start you out at the end of this keynote, at the end of this hour, and we're going to get you going and then you can maybe jump out and go take some sessions. Maybe there's some stuff you want to check out and other sessions that you want to chat and talk with the instructors, the speakers there, and then you're going to come back to us, right? Or go over, check out the interviews. So the idea is you're hopping back and forth and throughout the day we're basically changing out every hour. We're not just changing out the guests basically, but we're also changing out the topics that we can cover because different guests will have different expertise. We're going to have some special guests in from Microsoft, talk about some of the cool stuff going on there, and basically it's captains all day long. And if you've been on my YouTube live show you've watched that, you've seen a lot of the guests we have on there. I'm lucky to just hang out with all these really awesome people around the world, so it's going to be fun. >> Awesome and the content again has been preserved. You guys had a great session on call for paper sessions. Jenny, this is good stuff. What other things can people do to make it interesting? Obviously we're looking for suggestions. Feel free to chirp on Twitter about ideas that can be new. But you guys got some surprises. 
There's some selfies, what else? What's going on? Any secret surprises throughout the day? >> There are secret surprises throughout the day. You'll need to pay attention to the keynotes. Bret will have giveaways. I know our wonderful sponsors have giveaways planned as well in their sessions. Hopefully you feel conflicted about what you're going to attend. So do know that everything is recorded and will be available on demand afterwards so you can catch anything that you miss. Most of them will be available right after the initial stream. >> All right, great stuff, so they've got the Docker selfie. So the Docker selfies, the hashtag is just hashtag DockerCon. If you feel like you want to add some other hashtag, no problem, check out the sessions. You can pop in and out of the captains, that's kind of where the cool kids are going to be hanging out with Bret, and then all the knowledge and learning. Don't miss the keynote, the keynote should be solid. We've got James Governor from RedMonk delivering a keynote. I'll be interviewing him live after his keynote. So stay with us. And again, check out the interactive calendar. All you got to do is look at the calendar and click on the session you want. You'll jump right in. Hop around, give us feedback. We're doing our best. Bret, any final thoughts on what you want to share to the community around what you got going on the virtual event, just random thoughts? >> Yeah, so sorry we can't all be together in the same physical place. But the coolest thing about this being online, is that we actually get to involve everyone, so as long as you have a computer and internet, you can actually attend DockerCon if you've never been to one before. So we're trying to recreate that experience online. Like Jenny said, the code of conduct is important. So, we're all in this together with the chat, so try to be nice in there. These are all real humans that have feelings just like me. So let's try to keep it cool. 
And, over in the captain's channel we'll be taking your questions and maybe playing some music, playing some games, giving away some free stuff, while you're in between sessions learning, oh yeah. >> And I got to say props to your rig. You've got an amazing setup there, Bret. I love your show, what you do. It's really bad ass and kick ass. So great stuff. Jenny, the sponsor and ecosystem response to this event has been phenomenal. The attendance, 67,000. We're seeing a surge of people hitting the site now. So if you're not getting in, just wait, we're going to crank through the queue, but the sponsors and the ecosystem really delivered on the content side and also the support. You want to share a few shout outs on the sponsors who really kind of helped make this happen? >> Yeah, so definitely make sure you check out the sponsor pages, and when you go, each page has the actual content that they will be delivering. So they are delivering great content to you. So you can learn, and a huge thank you to our platinum and gold sponsors. >> Awesome, well I got to say, I'm super impressed. I'm looking forward to the Microsoft and Amazon sessions, which are going to be good. And there's a couple of great customer sessions there. I tweeted this out last night and I want to get you guys' reaction to this, because there's been a lot of talk around the COVID crisis that we're in, but there's also a positive upshot to this: a Cambrian explosion of developers that are going to be building new apps. And I said, you know, apps aren't going to just change the world, they're going to save the world. So a lot of the theme here is the impact that developers are having right now in the current situation. If we get the goodness of compose and all the things going on in Docker and the relationships, this real impact happening with the developer community. And it's pretty evident in the program and some of the talks and some of the examples. 
How containers and microservices are certainly changing the world and helping save the world, your thoughts? >> Like you said, a number of sessions and interviews in the program today really dive into that. And even particularly around COVID, Clemente Biondo is sharing his company's experience, being able to continue operations in Italy when they were completely shut down at the beginning of March. We also have in theCUBE channel several interviews from the National Institute of Health and precision cancer medicine at the end of the day. And you can just really see how containerization and developers are moving industry and really humanity forward because of what they're able to build and create, with advances in technology. >> Yeah, and the first responders these days are developers. Bret, compose is getting a lot of traction on Twitter. I can see some buzz already building up. There's huge traction with compose, just the ease of use and almost a call to arms for integrating into all the system language libraries, I mean, what's going on with compose? I mean, what do the captains say about this? I mean, it seems to be really tracking in terms of demand and interest. >> I think we're over 700,000 compose files on GitHub. So it's definitely beyond just the standard Docker run commands. It's definitely the next tool that people use to run containers. And that's not even counting everything, I mean that's just counting the files that are named docker-compose.yaml. So I'm sure a lot of you out there have created a YAML file to manage your local containers or even on a server with Docker compose. And the nice thing is, Docker is doubling down on that. So we've gotten some news recently from them about what they want to do with opening the spec up, getting more companies involved, because compose has already gathered so much interest from the community. You know, AWS has importers, there's Kubernetes importers for it. 
So there's more stuff coming and we might just see something here in a few minutes. >> All right, well let's get into the keynote guys, jump into the keynote. If you're missing anything, come back to the stream, check out the sessions, check out the calendar. Let's go, let's have a great time. Have some fun, thanks, and enjoy the rest of the day, we'll see you soon. (upbeat music) (upbeat music) >> Okay, what is the name of that Whale? >> Molly. >> And what is the name of this Whale? >> Mobby. >> That's right, dad's got to go, thanks bud. >> Bye. >> Bye. Hi, I'm Scott Johnson, CEO of Docker, and welcome to DockerCon 2020. This year DockerCon is an all virtual event with more than 60,000 members of the Docker Community joining from around the world. And with the global shelter in place policies, we're excited to offer a unifying, inclusive virtual community event in which anyone and everyone can participate from their home. As a company, Docker has been through a lot of changes since our last DockerCon last year. The most important, starting last November, is our refocusing 100% on developers and development teams. As part of that refocusing, one of the big challenges we've been working on, is how to help development teams quickly and efficiently get their app from code to cloud. And wouldn't it be cool, if developers could quickly deploy to the cloud right from their local environment with the commands and workflow they already know. We're excited to give you a sneak preview of what we've been working on. And rather than slides, we thought we'd jump right into the product. And joining me to demonstrate some of these cool new features, is Lanca, one of our engineers here at Docker working on Docker compose. Hello Lanca. >> Hello. >> We're going to show how an application development team collaborates using Docker desktop and Docker hub. And then deploys the app directly from the Docker command line to the cloud in just two commands. 
A development team would use this to quickly share functional changes of their app with the product management team, with beta testers or other development teams. Let's go ahead and take a look at our app. Now, this is a web app, that randomly pulls words from the database, and assembles them into sentences. You can see it's a pretty typical three tier application with each tier implemented in its own container. We have a front end web service, a middle tier, which implements the logic to randomly pull the words from the database and assemble them, and a backend database. And here you can see the database uses the Postgres official image from Docker hub. Now let's first run the app locally using the Docker command line and the Docker engine in Docker desktop. We'll do a docker compose up and you can see that it's pulling the containers from our Docker organization account, Wordsmith, inc. Now that it's up, let's go ahead and look at localhost and we'll confirm that the application is functioning as desired. So there's one sentence, let's pull, and now you can indeed see that we are pulling random words and assembling them into sentences. Now you can also see though that the look and feel is a bit dated. And so Lanca is going to show us how easy it is to make changes and share them with the rest of the team. Lanca, over to you. >> Thank you, so I have the source code of our application on my machine and I have updated it with the latest theme from DockerCon 2020. So before committing the code, I'm going to build the application locally and run it, to verify that indeed the changes are good. So I'm going to build with Docker compose the image for the web service. Now that the image has been built, I'm going to deploy it locally, with docker compose up. We can now check the dashboard in Docker desktop that indeed our containers are up and running, and we can access, we can open in the web browser, the endpoint for the web service. 
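A three tier app like the one described here is typically captured in a single compose file, one service per tier. The following is a minimal sketch only; the service names, image names, and ports are illustrative assumptions, not the actual demo sources:

```yaml
# Hypothetical sketch of a three-tier compose file like the one in the demo.
# Service names, image names, and ports are illustrative assumptions.
version: "3.8"
services:
  web:                # front end web service
    image: wordsmith/web
    ports:
      - "8080:80"
    depends_on:
      - words
  words:              # middle tier: pulls random words, assembles sentences
    image: wordsmith/words
    depends_on:
      - db
  db:                 # backend database, official Postgres image from Docker hub
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example
```

With a file like this in place, docker compose up starts all three tiers locally, and docker compose push publishes the built images to the team's shared Docker hub repository, which is the collaboration flow the demo walks through.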
So as we can see, we have the latest changes in our application. So as you can see, the application has been updated successfully. So now, I'm going to push the image that I have just built to my organization's shared repository on Docker hub. So I can do this with Docker compose push web. Now that the image has been updated in the Docker hub repository, all my teammates can access it and check the changes. >> Excellent, well, thank you Lanca. Now of course, in these times, video conferencing is the new normal, and as great as it is, video conferencing does not allow users to actually test the application. And so, to allow our app to be accessible by others outside our organization, such as beta testers or others, let's go ahead and deploy to the cloud. >> Sure, we can do this by employing a context. A Docker context is a mechanism that we can use to target different platforms for deploying containers. The context will hold information such as the endpoint for the platform, and also how to authenticate to it. So I'm going to list the contexts that I have set locally. As you can see, I'm currently using the default context that is pointing to my local Docker engine. So all the commands that I have issued so far were targeting my local engine. Now, in order to deploy the application on a cloud, I have an account in the Azure Cloud, where I have no resource running currently, and I have created for this account a dedicated context that will hold the information on how to connect to it. So now all I need to do, is to switch to this context, with Docker context use, and the name of my cloud context. So all the commands that I'm going to run, from now on, are going to target the cloud platform. We can also check, in a simpler way, the running containers with docker ps. So as we see, no container is running in my cloud account. Now to deploy the application, all I need to do is to run a docker compose up. 
And this will trigger the deployment of my application. >> Thanks Lanca. Now notice that Lanca did not have to move the compose file from Docker desktop to Azure. Nor did she have to make any changes to the Docker compose file, nor did she change any of the containers that she and I were using in our local environments. So the same compose file, same images, run locally and on Azure without changes. While the app is deploying to Azure, let's highlight some of the features in Docker hub that help teams with remote first collaboration. So first, here's our team's account where it (murmurs) and you can see the updated container sentences web that Lanca just pushed a couple of minutes ago. As far as collaboration, we can add members using their Docker ID or their email, and then we can organize them into different teams depending on their role in the application development process. And then once they're organized into different teams, we can assign them permissions, so that teams can work in parallel without stepping on each other's changes accidentally. For example, we'll give the engineering team full read, write access, whereas the product management team we'll go ahead and just give read only access. So this role based access control is just one of the many features in Docker hub that allows teams to collaboratively and quickly develop applications. Okay Lanca, how's our app doing? >> Our app has been successfully deployed to the cloud. So, we can easily check either the Azure portal to verify the containers running for it or, simpler, we can run a docker ps again to get the list of the containers that have been deployed for it. In the output from the docker ps, we can see an endpoint that we can use to access our application in the web browser. So we can see the application running in the cloud. It's really up to date and now we can take this particular endpoint and share it within our organization such that anybody can have a look at it. 
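The context switching workflow Lanca walks through can be sketched as a handful of commands. This is a sketch under assumptions: the context name myazure is a made-up placeholder, and actually running this requires Docker desktop with a cloud context already created against a real Azure account:

```shell
# Sketch of the demo's context workflow; 'myazure' is a made-up placeholder
# for a previously created cloud context.
docker context ls          # list contexts; 'default' points at the local engine
docker context use myazure # subsequent commands now target the cloud platform
docker ps                  # confirm nothing is running in the cloud account yet
docker compose up          # deploy the same, unchanged compose file to the cloud
docker context use default # switch back to the local engine when finished
```

The key design point Scott calls out holds here: nothing in the compose file changes between the two contexts, only the target endpoint does.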
>> That's cool Lanca. We showed how we can deploy an app to the cloud in minutes and in just two commands, using commands that Docker users already know, thanks so much. In that sneak preview, you saw a team developing an app collaboratively, with a tool chain that includes Docker desktop and Docker hub. And simply by switching Docker context from their local environment to the cloud, deploy that app to the cloud, to Azure, without leaving the command line, using Docker commands they already know. And in doing so, really simplifying for a development team getting their app from code to cloud. And just as important, what you did not see, was a lot of complexity. You did not see cloud specific interfaces, user management or security. You did not see us having to provision and configure compute, networking and storage resources in the cloud. And you did not see infrastructure specific application changes to either the compose file or the Docker images. And by simplifying away that complexity, these new features help application DevOps teams quickly iterate and get their ideas, their apps, from code to cloud, and helping development teams build, share and run great applications, is what Docker is all about. Docker is able to simplify for development teams getting their app from code to cloud quickly as a result of standards, products and ecosystem partners. It starts with open standards for applications and application artifacts, and active open source communities around those standards to ensure portability and choice. Then as you saw in the demo, the Docker experience delivered by Docker desktop and Docker hub simplifies a team's collaborative development of applications, and together with ecosystem partners provides every stage of an application development tool chain. For example, deploying applications to the cloud in two commands. What you saw in the demo, well, that's an extension of our strategic partnership with Microsoft, which we announced yesterday. 
And you can learn more about our partnership from Amanda Silver from Microsoft later today, right here at DockerCon. Another tool chain stage, the capability to scan applications for security and vulnerabilities, as a result of our partnership with Snyk, which we announced last week. You can learn more about that partnership from Peter McKay, CEO of Snyk, again later today, right here at DockerCon. A third example, a development team can automate the build of container images upon a simple git push, as a result of Docker hub integrations with GitHub, GitLab and Bitbucket. As a final example of Docker and the ecosystem helping teams quickly build applications, together with our ISV partners we offer in Docker hub over 500 official and verified publisher images of ready to run Dockerized application components such as databases, load balancers, programming languages, and much more. Of course, none of this happens without people. And I would like to take a moment to thank four groups of people in particular. First, the Docker team, past and present. We've had a challenging 12 months including a restructuring and then a global pandemic, and yet their support for each other, and their passion for the product, this community and our customers has never been stronger. We thank our community. Docker wouldn't be Docker without you, and whether you're one of the 50 Docker captains, the almost 400 meetup organizers, the thousands of contributors and maintainers, every day you show up, you give back, you teach, you support. We thank our users, more than six and a half million developers who have built more than 7 million applications and are then sharing those applications through Docker hub at a rate of more than one and a half billion pulls per week. Those apps are then run on more than 44 million Docker engines. 
And finally, we thank our customers, the over 18,000 Docker subscribers, both individual developers and development teams, from startups to large organizations, 60% of which are outside the United States. And they span every industry vertical, from media to entertainment to manufacturing, healthcare and much more. Thank you. Now looking forward, given these unprecedented times, we would like to offer a challenge. While it would be easy to feel helpless amidst this global pandemic, the challenge is for us as individuals and as a community to instead see and grasp the tremendous opportunities before us to be forces for good. For starters, look no further than the pandemic itself: in the fight against this global disaster, applications and data are playing a critical role, and the Docker Community quickly recognized this and rose to the challenge. There are over 600 COVID-19 related publicly available projects on Docker hub today, from data processing to genome analytics to data visualization. Folding at home, the distributed computing project for simulating protein dynamics, is also available on Docker hub, and it uses spare compute capacity to analyze COVID-19 proteins to aid in the design of new therapies. And right here at DockerCon, you can hear how Clemente Biondo and his company Engineering Ingegneria Informatica are using Docker in the fight with COVID-19 in Italy every day. Now, in addition to fighting the pandemic directly, as a community we also have an opportunity to bridge the disruption the pandemic is wreaking. It's impacting us at work and at home, in every country around the world and every aspect of our lives. For example, many of you have a student at home, whose world is going to be very different when they return to school. As employees, all of us have experienced the stresses of working from home, as well as many of the benefits, and in fact 75% of us say that going forward, we're going to continue to work from home at least occasionally. 
And of course one of the biggest disruptions has been job losses, over 35 million in the United States alone. And we know that's affected many of you. And yet your skills are in such demand and so important now more than ever. And that's why here at DockerCon, we want to try to do our part to help, and we're promoting this hashtag on Twitter, hashtag DockerCon jobs, where job seekers and those offering jobs can reach out to one another and connect. Now, the pandemic's disruption is accelerating the shift of more and more of our time, our priorities, our dollars from offline to online, to hybrid, and even online only ways of living. We need to find new ways to collaborate, new approaches to engage customers, new modes for education and much more. And what is going to fill the needs created by this acceleration from offline to online? New applications. And it's this need, this demand for all these new applications that represents a great opportunity for the Docker community of developers. The world needs us, needs you developers, now more than ever. So let's seize this moment. Let us and our teams go build, share and run great new applications. Thank you for joining today. And let's have a great DockerCon. >> Okay, welcome back to the DockerCon studio headquarters with your hosts, Jenny Burcio and myself, John Furrier, @furrier on Twitter. If you want to tweet me anything, @DockerCon as well, share what you're thinking. Great keynote there from Scott, CEO. Jenny, demo, DockerCon jobs, some highlights there from Scott. Yeah, I love the intro. It's okay, I'm about to do the keynote. The little green room comes on, makes it human. We're all trying to survive-- >> That is the reality of what we are all dealing with right now. 
I had to ask my kids to leave though or they would crash the whole stream, but yes, we have a great community, a large community gathered here today, and we do want to take the opportunity for those that are looking for jobs, or are hiring, to share with the hashtag DockerCon jobs. In addition, we want to support direct health care workers, and Bret Fisher and the captains will be running an all day charity stream on the captain's channel. Go there and you'll get the link to donate to directrelief.org, which is a California based nonprofit delivering aid and supporting health care workers globally in response to the COVID-19 crisis. >> Okay, if you're jumping into the stream, I'm John Furrier with Jenny Burcio, your hosts all day today throughout DockerCon. It's a packed house of great content. You have a main stream, theCUBE, which is the mainstream where we'll be promoting a lot of cube interviews. But check out the 40 plus sessions underneath in the interactive calendar on the dockercon.com site. Check it out, they're going to be live on a clock. So if you want to participate in real time in the chat, jump into your session on the track of your choice and participate with the folks in there chatting. If you miss it, it's going to go right on demand right after; all content will be immediately available. So make sure you check it out. Docker selfie is a hashtag. Take a selfie, share it. Hashtag Docker jobs. If you're looking for a job or have openings, please share with the community and of course give us feedback on what you can do. We got James Governor, the keynote coming up next. He's with RedMonk. Not afraid to share his opinion on open source, on what companies should be doing, and also the evolution of this Cambrian explosion of apps that are going to be coming as we come out of this post pandemic world. A lot of people are thinking about this, the crisis and following through. So stay with us for more and more coverage. 
Jenny, favorite sessions on your mind for people to pay attention to that they should (murmurs)? >> I just want to address a few things that continue to come up in the chat, especially around breakout sessions: after they play live and the speakers are in chat with you, those go on demand, they are recorded, you will be able to access them. Also, if the screen is too small, there is a button to expand to full screen, and different quality levels for the video that you can choose on your end. All the breakout sessions also have closed captioning, so please, if you would like to read along, turn that on so you can stay with the sessions. We have some great sessions kicking off right at 10:00 a.m., getting started with Docker. We have a full track really on the how-to and hands-on that you should check out, devs in action, hear what other people are doing, and then of course our sponsors are delivering great content to you all day long. >> Tons of content. It's all available. It'll always be up, always on, at large scale. Thanks for watching. Now we've got James Governor, the keynote. He's with RedMonk, the analyst firm, and has been tracking open source for many generations. He's been doing amazing work. Watch his great keynote. I'm going to be interviewing him live right after. So stay with us and enjoy the rest of the day. We'll see you back shortly. (upbeat music) >> Hi, I'm James Governor, one of the co-founders of a company called RedMonk. We're an industry research firm focusing on developer-led technology adoption. So that's, I guess, why Docker invited me to DockerCon 2020 to talk about some trends that we're seeing in the world of work and software development. So Monkchips, that's who I am. I spend a lot of time on Twitter. It's a great research tool. It's a great way to find out what's going on and to keep track of, as I say, the people that we value so highly: software developers, engineers and practitioners.
So when I started talking to Docker about this event, and it was pre-corona, should we say, the idea of a crowd wasn't a scary thing, but today you see something like this, it makes you feel uncomfortable. This is not a place that I want to be. I'm pretty sure it's a place you don't want to be. And you know, to that end, I think there's an interesting quote by Ellen Powell, she says, "Work from home is now just work." And we're going to see more and more of that. Organizations aren't feeling the same way they did about work before. Who are all these people? Who are my colleagues? So GitHub says it has 50 million developers on its network. Now, one of the things I think is most interesting, it's not that it has 50 million developers. Perhaps that's a proxy for the number of developers worldwide. But quite frankly, a lot of those accounts, there's all kinds of people there. There are designers, there are data engineers, there are data scientists, there are product managers, there are tech marketers. It's a big, big community and it goes way beyond just software developers itself. Frankly, for me, I'd probably be saying there's more like 20 to 25 million developers worldwide, but GitHub knows a lot about the world of code. So what else do they know? One of the things they know is that the world of code, software and open source, is becoming increasingly global. I get so excited about this stuff. The idea that there are these different software communities around the planet where we're seeing massive expansions in terms of things like open source. A great example is Nigeria. So Nigeria, more than 200 million people, right? The energy there in terms of events, in terms of learning, in terms of teaching, in terms of the desire to code, the desire to launch businesses, the desire to be part of a global software community is just so exciting. And you know, this sort of energy is not just in Nigeria, it's in other countries in Africa, it's happening in Egypt. It's happening around the world.
This energy is something that's super interesting to me. We need to think about that. We've got global problems that we need to solve, and software is going to be a big part of that. At the moment, we can talk about other countries, but what about, frankly, the gender gap, the gender issue: you know, from 1984 onwards, the number of women taking computer science degrees began to, not track, but to crater in comparison to what men were doing. The tech industry is way too male focused, it's male dominated, it's not welcoming, we haven't found ways to have those pathways and frankly to drive inclusion. And the women I know in tech have to deal with a massively disproportionate amount of stress on things like online networks. But talking about online networks and talking about a better way of living, I was really excited by GitHub Satellite recently; there was a fantastic demo by Alison McMillan, and she did a demo of Codespaces. So Codespaces is Microsoft's online IDE, a new platform that they've built. And online IDEs, we're never quite sure, you know, plenty of people still out there just using Emacs. But Visual Studio Code has been a big success. And this idea of moving to an online IDE, that's been around for a while. What they did was just make really tight integration. So you're in your GitHub repo and you're able to create a development environment with effectively one click, getting rid of all of the yak shaving, making it super easy. And what I loved in the demo was, Alison's like, yeah, because this is great: one of my kids is having a nap, I can just start (murmurs) and I don't have to sort out all the rest of it. And to me that was amazing. It was productivity as inclusion. Here was a senior director at GitHub, doing this amazing work, and then making this clear statement about being a parent. And I think that was fantastic.
Because that's what, to me, is important: just working from home, which has been so challenging for so many of us, began to open up new possibilities, and frankly exciting possibilities. So Alison's also got a podcast, Parent Driven Development, which I think is super important. Because this is about men and women all in this together; parenting is a team sport, same as software development. And the idea that we should be thinking about how to be more productive is super important to me. So I want to talk a bit about developer culture and how it led to social media. Because you know, social media, we're in this odd stage now. It's TikTok, it's like exercise, people doing incredible backflips and stuff like that, doing a bunch of dancing. We've had the world of sharing cat GIFs, Facebook. We sort of see social media as, I think, a phenomenon in its own right. Whereas to me, I think it's interesting to look at its progenitors: where did it come from? So here's (murmurs). So 1971, one of the features in the emergency management information system that he built, which is topical, it was for tracking medical information as well, medical emergencies, included a bulletin board system, so that it could keep track of what people were doing on a team and make sure that they were collaborating effectively. Boom! That was the start of something big, obviously. Another date I think is worth looking at is 1983, Radia Perlman, Spanning Tree Protocol. So at DEC, they were very good at distributed systems. And the idea was that you could have a distributed system, and so much of the internetworking that we do today was based on Radia's work. And it showed that basically you could span out a huge network so that everyone could collaborate. That is incredibly exciting in terms of the trends that I'm talking about. So then let's look at 1988, you've got IRC. IRC, what developer has not used IRC, right?
Well, I guess maybe some of the younger ones might not have. But I don't know if we're post-IRC yet, but (murmurs) at a Finnish university really nailed it with IRC as a platform that people could communicate effectively with. And then we go into 1991. So we've had IRC, we've had Finnish universities doing a lot of really fantastic work around collaboration. And I don't think it was necessarily an accident that this is when Linus Torvalds announced Linux. So Linux was a wonderfully packaged idea, in terms of: we're going to take this Unix thing. And when I say packaged, the package was the idea that we could collaborate on software. So it may have just been the work of one person, but clearly what made it important, made it interesting, was finding a social networking pattern for software development so that everybody could work on something at scale. That was really, I think, fundamental and foundational. Now I think it's important, as we're going to talk about Linus, to talk about some things that are not good about software culture, not good about open source culture, not good about hacker culture. And that's where I'm going to talk about codes of conduct. We have not been welcoming to new people. We've got the acronyms, we call people noobs, that's super unhelpful. We've got to find ways to be more welcoming and more self-sustaining in our communities, because otherwise communities will fail. And I'd like to thank everyone that has a code of conduct and has encouraged others to have codes of conduct. We need to have codes of conduct that are enforced, to ensure that we have better diversity at our events, and so that women, underrepresented minorities, all different kinds of people can be well looked after and be in safe and inclusive spaces. And that's at online events, but of course it's also for all of our activities offline.
So Linus, as I say, is not the most charming of characters at all times, but he has done some amazing technology. So we get to 2005, the creation of Git. Not necessarily the distributed version control system that was bound to win, but there were some interesting principles there, and they'd come out of the work that he had done in terms of trying to build and sustain the Linux code base. So it was very much based on experience. He had an itch that he needed to scratch, and there was a community that was building this thing. So what was going to be the option? He came up with Git, foundational to another huge wave of social change. April 2008, GitHub, right? GitHub comes up, they've looked at Git, they've packaged it up, they've found a way to make it consumable so that teams could use it and really begin to take advantage of the power of that distributed version control model. Now, ironically enough, of course, they centralized the service in doing so. So we have a single point of failure on GitHub. But on the other hand, the notion of the pull request, the primitives that they established and made usable by people, that changed everything in terms of software development. I think another one that I'd really like to look at is Slack. So Slack is a huge success, used by all different kinds of businesses. But it began specifically as a pivot from a company that built a game called Glitch. It was a game company, and they still wanted a tool internally that was better than IRC. So they built out something that later became Slack. So Slack, 2014, is established as a company, and basically Slack fit software engineering. The focus on automation, the conversational aspects, the asynchronous aspects. It really pulled things together in a way that was interesting to software developers. And I think we've seen this pattern, frankly, over the last few years: software developers are influencers.
So Slack, first used by the engineering teams, later used by everybody. And arguably you could say the same thing actually happened with Apple: Apple was mainstreamed by developers adopting that platform. Get to 2013, boom again, Solomon Hykes, Docker, right? So Docker was, I mean, containers were not new, they were just super hard to use. People found it difficult technology, it was esoteric. It wasn't something that they could fully understand. Solomon did an incredible job of understanding how containers could fit into modern developer workflows. So if we think about immutable images, if we think about the ability to have everything required in the package where you are, it really tied into what people were trying to do with CI/CD, tied into microservices. And certainly that notion of developer usability, Docker nailed that, and I guess from this conference at least, the rest is history. So I want to talk a little bit about scratching the itch, and particularly what has become, I call it, the developer aesthetic. So let's go into dark mode now. I've talked about developers laying out these foundations and frameworks that go mainstream. Frankly, now my son, he's 14, he (murmurs) at me if I don't have dark mode on in an application. And it's this notion that developers have an aesthetic, and it does get adopted; I mean, it's quite often jokey. One of the things we've seen in the really successful platforms like GitHub, Docker, NPM; let's look at GitHub, let's look at the Octocat. That playfulness, I think, was really interesting. And that changes the world of work, right? So we've got the world of work, which can be buttoned up, which can be somewhat tight. I think both of those companies were really influential in showing that software development, which is a profession, is also something that can be and is fun. And I think about how can we make it more fun? How can we develop better applications together?
That takes me to, if we think about Docker talking about build, share and run, for me the key word is share, because development has to be a team sport. It needs to be sharing, it needs to be kind, and it needs to bring together people to do more effective work. Because that's what it's all about, doing effective work. If you think about Zoom, it's a proxy for collaboration in terms of its value. So we've got all of these airlines, and frankly, add up their share prices, add up their total value: it's currently less than Zoom's. So video conferencing has become so much of how we live now, on a consumer basis, but certainly from a business-to-business perspective. I want to talk about how we live now. I want to think about, like, what will come out of all of this traumatic, and it is incredibly traumatic, time? I'd like to say I'm very privileged, I can work from home. So thank you to all the frontline workers that are out there that are not in that position. But overall, what I'm really thinking about is that there are some things that will come out of this that will benefit us as a culture. Looking at cities like Paris, Milan, London, New York, putting in new cycling infrastructure so that people can social distance and travel outside, because they don't feel comfortable on public transport. I think it's sort of amazing, widening pavements: before, we said we can't do that, and all these cities have done it literally overnight. This sort of change is exciting. And what comes out of that? There are some positive aspects of the current issues that we face. So I've got a community event that I've been working on with Katie from HashiCorp and Carla from Container Solutions, basically about, look, what will the world look like in developer relations? Can we have developer relations without the air miles? 'Cause developer advocates, they do too much travel, and it ends up, you know, burning them out. In developer relations, people don't like to say no.
They may have bosses that say, you know, oh, that conference went great, now we're going to roll it out worldwide to 47 cities. That stuff is terrible. It's terrible from a personal perspective, it's really terrible from an environmental perspective. We need to travel less. Virtual events are crushing it. Microsoft just did Build, right? Normally that'd be just over 10,000 people; they had 245,000-plus registrations, 40,000 of them in the last day, right? Red Hat Summit, 80,000 people; IBM Think, 90,000 people; GitHub crushed it as well. Like, this is a more inclusive way: people can dip in, they can be from all around the world. I mentioned Nigeria and how fantastic it is. Very often Nigerian developers and advocates find it hard to get visas. Why should they be shut out of events? Events are going to start to become remote first, because frankly, look at it: if you're turning in those kinds of numbers, and Microsoft was already doing great online events, but they absolutely nailed it, they're going to have to ask some serious questions about why everybody should get back on a plane again. So if you're going to do remote, you've got to be intentional about it. That's one thing I've learned that's exciting about GitLab. GitLab's culture is amazing. Everything is documented, everything is public, everything is transparent. I think that's really clear if you look at their principles: you can't have implicit collaboration models. Everything needs to be documented and explicit, so that anyone can work anywhere and they can still be part of the team. Remote first is where we're at now. Coinbase, Shopify, even Barclays say they're not going to go back to having everybody in offices in the way they used to. This is a fundamental shift. And I think it's got significant implications for all industries, but definitely for software development.
Here's the thing: the last 20 years were about distributed computing, microservices, the cloud; we've got pretty good at that. The next 20 years will be about distributed work. We can't have everybody living in San Francisco and London and Berlin. The talent is distributed, the talent is elsewhere. So how are we going to build tools? Who is going to scratch that itch, to build the tools that make them more effective? Who's building the next generation of apps? You are. Thanks.
DockerCon 2020 Kickoff
>>From around the globe, it's theCUBE, with digital coverage of DockerCon Live 2020, brought to you by Docker and its ecosystem partners. >>Hello everyone, welcome to DockerCon 2020. I'm John Furrier with theCUBE. I'm in our Palo Alto studios with our quarantine crew. We have a great lineup here for the DockerCon 2020 virtual event. Normally it was in person, face to face. I'll be with you throughout the day with an amazing lineup of content: over 50 different sessions, Cube tracks, keynotes, and we've got two great co-hosts here with Docker, Jenny Burcio and Bret Fisher. We'll be with you all day today, taking you through the program, helping you navigate the sessions. I'm so excited, Jenny. This is a virtual event. We talked about this. Can you believe it? We're, you know, may the internet gods be with us today, and I hope everyone's having an easy time getting in. Jenny, Bret, thank you for being here. Hey. >>Yeah, hi everyone. Uh, so great to see everyone chatting and telling us where they're from. Welcome to the Docker community. We have a great day planned for you. >>Guys, great job getting this all together. I know how hard it is. These virtual events are hard to pull off. I'm blown away by the community at Docker. The amount of sessions that are coming in, the sponsor support, has been amazing. Just the overall excitement around the brand and the opportunities, given these tough times we're in. Um, it's super exciting. Again, may the internet gods be with us throughout the day, but there's plenty of content. Uh, Bret's got an amazing all-day marathon group of people coming in and chatting. Jenny, this has been an amazing journey and it's a great opportunity. Tell us about the virtual event. Why DockerCon virtual? Obviously everyone's cancelling their events, but this is special to you guys. Talk about DockerCon virtual this year. >>Yeah.
You know, the Docker community shows up at DockerCon every year, and even though we didn't have the opportunity to do an in-person event this year, we didn't want to lose the time that we all come together at DockerCon. The conversations, the amazing content and learning opportunities. So we decided back in December to make DockerCon a virtual event. And of course, when we did that, there was no quarantine. Um, we didn't expect, you know, I certainly didn't expect to be delivering it from my living room, but we were just, I mean, we were completely blown away. There's nearly 70,000 people across the globe that have registered for DockerCon today. And when you look at DockerCons of the past, right, live events really are just the tip of the iceberg. And so we're thrilled to be able to deliver a more inclusive global event today. And we have so much planned. Uh, I think, Bret, you want to tell us some of the things that you have planned? >>Well, I'm sure I'm going to forget something, 'cause there's a lot going on. But, uh, we've obviously got interviews all day today on this channel with John and the crew. Um, Jenny has put together an amazing set of speakers all day long in the sessions. And then you have captains on deck, which is essentially the YouTube live hangout where we just basically talk shop. It's all engineers, all day long, captains and special guests. And we're going to be in chat, talking to you, answering your questions. Maybe we'll dig into some stuff based on the problems you're having or the questions you have. Maybe there'll be some random demos, but it's basically, uh, not scripted. It's an all-day-long unscripted event, so I'm sure it's going to be a lot of fun hanging out in there.
>>Well guys, I want to just say it's been amazing how you structured this so everyone has a chance to ask questions, whether it's informal laid back in the captain's channel or in the sessions where the speakers will be there with their, with their presentations. But Jenny, I want to get your thoughts because we have a site out there that's structured a certain way for the folks watching. If you're on your desktop, there's a main stage hero. There's then tracks and Brett's running the captain's tracks. You can click on that link and jump into his session all day long. He's got an amazing set of line of sleet, leaning back, having a good time. And then each of the tracks, you can jump into those sessions. It's on a clock. It'll be available on demand. All that content is available if you're on your desktop, if you're on your mobile, it's the same thing. >>Look at the calendar, find the session that you want. If you're interested in it, you could watch it live and chat with the participants in real time or watch it on demand. So there's plenty of content to navigate through. We do have it on a clock and we'll be streaming sessions as they happen. So you're in the moment and that's a great time to chat in real time. But there's more, Jenny, you're getting more out of this event. We, you guys try to bring together the stimulation of community. How does the participants get more out of the the event besides just consuming some of the content all day today? >>Yeah. So first set up your profile, put your picture next to your chat handle and then chat. We have like, uh, John said we have various setups today to help you get the most out of your experience are breakout sessions. The content is prerecorded so you get quality content and the speakers and chat. So you can ask questions the whole time. Um, if you're looking for the hallway track, then definitely check out the captain's on deck channel. 
Uh, and then we have some great interviews all day on the queue so that up your profile, join the conversation and be kind, right. This is a community event. Code of conduct is linked on every page at the top and just have a great day. >>And Brett, you guys have an amazing lineup on the captain, so you have a great YouTube channel that you have your stream on. So the folks who were familiar with that can get that either on YouTube or on the site. The chat is integrated in, so you're set up, what do you got going on? Give us the highlights. What are you excited about throughout your day? Take us through your program on the captains. That's going to be probably pretty dynamic in the chat too. >>Yeah. Yeah. So, uh, I'm sure we're going to have less, uh, lots of, lots of stuff going on in chat. So no concerns there about, uh, having crickets in the, in the chat. But we're going to, uh, basically starting the day with two of my good Docker captain friends, uh, Nirmal Mehta and Laura taco. And we're going to basically start you out and at the end of this keynote, at the end of this hour, and we're going to get you going. And then you can maybe jump out and go to take some sessions. Maybe there's some cool stuff you want to check out in other sessions that are, you want to chat and talk with the, the instructors, the speakers there, and then you're going to come back to us, right? Or go over, check out the interview. So the idea is you're hopping back and forth and throughout the day we're basically changing out every hour. >>We're not just changing out the, uh, the guests basically, but we're also changing out the topics that we can cover because different guests will have different expertise. We're going to have some special guests in from Microsoft, talk about some of the cool stuff going on there. And basically it's captains all day long. 
And, uh, you know, if you've been on my YouTube live show you, you've watched that, you've seen a lot of the guests we have on there. I'm lucky to just hang out with all these really awesome people around the world, so it's going to be fun. >>Awesome. And the content again has been preserved. You guys had a great session on call for paper sessions. Jenny, this is good stuff. What are the things can people do to make it interesting? Obviously we're looking for suggestions. Feel free to, to chirp on Twitter about ideas that can be new. But you guys got some surprises. There's some selfies. What else? What's going on? Any secret, uh, surprises throughout the day. >>There are secret surprises throughout the day. You'll need to pay attention to the keynotes. Brett will have giveaways. I know our wonderful sponsors have giveaways planned as well in their sessions. Uh, hopefully right you, you feel conflicted about what you're going to attend. So do know that everything is recorded and will be available on demand afterwards so you can catch anything that you miss. Most of them will be available right after they stream the initial time. >>All right, great stuff. So they've got the Docker selfie. So the Docker selfies, the hashtag is just Docker con hashtag Docker con. If you feel like you want to add some of the hashtag no problem, check out the sessions. You can pop in and out of the captains is kind of the cool, cool. Kids are going to be hanging out with Brett and then all they'll knowledge and learning. Don't miss the keynote. The keynote should be solid. We got changed governor from red monk delivering a keynote. I'll be interviewing him live after his keynote. So stay with us and again, check out the interactive calendar. All you gotta do is look at the calendar and click on the session you want. You'll jump right in. Hop around, give us feedback. We're doing our best. 
Um, Brett, any final thoughts on what you want to share to the community around, uh, what you got going on the virtual event? Just random thoughts. >>Yeah. Uh, so sorry, we can't all be together in the same physical place. But the coolest thing about as business online is that we actually get to involve everyone. So as long as you have a computer and internet, you can actually attend DockerCon if you've never been to one before. So we're trying to recreate that experience online. Um, like Jenny said, the code of conduct is important. So, you know, we're all in this together with the chat, so try to try to be nice in there. These are all real humans that, uh, have feelings just like me. So let's, let's try to keep it cool and, uh, over in the Catherine's channel be taking your questions and maybe playing some music, playing some games, giving away some free stuff. Um, while you're, you know, in between sessions learning. Oh yeah. >>And I gotta say props to your rig. You've got an amazing setup there, Brett. I love what your show you do. It's really bad ass and kick ass. So great stuff. Jenny sponsors ecosystem response to this event has been phenomenal. The attendance 67,000. We're seeing a surge of people hitting the site now. So, um, if you're not getting in, just, you know, just wait going, we're going to crank through the queue, but the sponsors on the ecosystem really delivered on the content side and also the sport. You want to share a few shout outs on the sponsors who really kind of helped make this happen. >>Yeah, so definitely make sure you check out the sponsor pages and you go, each page is the actual content that they will be delivering. So they are delivering great content to you. Um, so you can learn and a huge thank you to our platinum and gold authors. >>Awesome. Well I got to say, I'm super impressed. I'm looking forward to the Microsoft Amazon sessions, which are going to be good. 
And there are a couple of great customer sessions there. And you know, I tweeted this out last night, and let me get you guys' reaction to this, because there's been a lot of talk around the COVID crisis that we're in, but there's also a positive upshot to this: a Cambrian explosion of developers that are going to be building new apps. And I said, you know, apps aren't just going to change the world, they're going to save the world. So a lot of the theme here is the impact that developers are having right now in the current situation. You know, with the goodness of Compose and all the things going on in Docker and the relationships, there's real impact happening with the developer community. And it's pretty evident in the program, and some of the talks and some of the examples show how containers and microservices are certainly changing the world and helping save the world. Your thoughts? >> Yeah. So I think we have, like you said, a number of sessions and interviews in the program today that really dive into that, particularly around COVID. Clemente is sharing his company's experience, being able to continue operations in Italy when they were completely shut down at the beginning of March. We also have in the Cube channel several interviews from the National Institutes of Health and on precision cancer medicine. At the end of the day, you can really see how containerization and developers are moving industry, and really humanity, forward because of what they're able to build and create with advances in technology. Yeah. >> And the first responders these days are developers. Brett, Compose is getting a lot of traction on Twitter. I can see some buzz already building up. There's huge traction with Compose, just the ease of use, and almost a call to arms for integrating into all the system language libraries. I mean, what's going on with Compose? What does the captain say about this?
I mean, it seems to be really tracking in terms of demand and interest. >> Yeah, I think we're at over 700,000 Compose files on GitHub. So it's definitely beyond just the standard Docker run commands. It's definitely the next tool that people use to run containers. And that's not even counting everything; I mean, that's just counting the files that are named docker-compose.yml. So I'm sure a lot of you out there have created a YAML file to manage your local containers, or even on a server with Docker Compose. And the nice thing is, Docker is doubling down on that. We've gotten some news recently from them about what they want to do with opening the spec up, getting more companies involved, because Compose has already gathered so much interest from the community. You know, AWS has importers, there are Kubernetes importers for it. So there's more stuff coming, and we might just see something here in a few minutes. >> Well, let's get into the keynote. Guys, jump into the keynote. If you missed anything, come back to the stream, check out the sessions, check out the calendar. Let's go. Let's have a great time. Have some fun. Thanks, and enjoy the rest of the day. We'll see you soon.
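The docker-compose.yml files Brett describes above are easy to picture with a concrete sketch. As a hedged illustration only (the service names and images below are hypothetical, not taken from the talk), a minimal Compose file of the kind counted on GitHub looks something like this:

```shell
# Write a hypothetical minimal Compose file; the services shown
# (an nginx web server plus a redis cache) are illustrative only.
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
EOF

# With Docker installed, the whole stack would come up with:
#   docker compose up -d
```

One file declares every container, image, and port mapping in the stack, which is what makes the format attractive to the importers mentioned in the conversation: the same declarative description can, with the right tooling, be translated for other orchestrators.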
John Barker, Versatile | CUBEConversation, August 2019
>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now here's your host, Dave Vellante. >> Hi, everybody. Welcome to this special Cube conversation. My name is Dave Vellante, and this is our series on partners: how partners and the channel are adding value to help customers create business capabilities in this digital world. I'm here with John Barker, and he is the co-founder and CEO of a company called Versatile, a local New England partner of HPE. This is sponsored by HPE and Versatile. John, welcome to the Cube. Thanks for coming on. >> Well, thanks, David. Appreciate you having me here today. >> So tell us more about Versatile. You've been in business for a couple of decades. Plus, you have a deep background. Tell us about Versatile and your background. >> I'd be happy to do that. Versatile was founded 25 years ago; it's our 25th anniversary, and I probably would be lying if I didn't tell you I thought I'd have been out of this business 25 years later. But it's a great business. It was founded by my partner, Kevin Meaney, and me, and like a lot of those stories, it started on a picnic bench in a basement, actually. So we've grown the company over the years. I think one of the things that's been important to our customers is that we've always had a great base in infrastructure, and we've always had a great deal of engineering support. From a company perspective, we feel the same way we did 25 years ago: making sure you've got a complete infrastructure in place for our customers is very important, so that you can layer on top of it the applications that those customers need to run their business. >> Well, you've seen the waves. I mean, we kind of started in the business around the same time, and we watched that PC era, when everything was PC-centric and it was all about personal productivity. And we saw the Internet wave. And obviously, now, you know, the cloud has been this huge disrupter. And now we've got this digital wave.
What are the big trends that you're seeing in the marketplace with your customers? >> Well, clearly, you know, cloud is not new anywhere. It has certainly been here for several years now, and in a lot of cases there are certainly companies that were born in the cloud, who have gone there 100% right out of the gate. But in a lot of cases, with more traditional businesses, a lot of our customers are taking steps to get there, or to build further on what's there, to take advantage of what can certainly be some cost-saving opportunities and some convenience aspects associated with the cloud. But I think from a customer's perspective, there's a lot of new technology out there, and not that it wasn't true 10 years ago, but there's so much to understand: what makes the most sense for my firm, for my operation? Can I securely move to the cloud? Can I adequately support all of my customers? And I think that's really where a lot of customers are at. They're really looking for guidance. They're trying to understand what all the choices are, how do I move there, and who can help me get there? >> Yeah, and the pendulum swings. I mean, after the dot-com bust, everybody was focused on cutting costs, you know, the post-Y2K situation. It feels like now people are trying to figure out, how do I get competitive advantage? They see IT and data as a differentiator, and they don't want to get disrupted. They don't want to get Uber-ized; that's kind of the bromide. Presumably you see that as well. How are customers looking at the cloud, both public cloud and hybrid cloud? As a differentiator? Are they looking at it to cut costs? Are they looking at it to support new apps and be more agile? What are you seeing?
>> I think it's a lot of those things. You know, throughout our history it was all about putting that kind of base infrastructure together, storing a lot of data in a lot of places, making sure it's secure, and having a proper disaster recovery plan in place. There wasn't a whole lot of thought back then about what all this data we're storing is and how we take advantage of it. That clearly is changing, right? So with the advent of analytics, and we happen to do a lot of work in the health care space, there's really a treasure trove of data out there to kind of help in that space. I think, from a health care perspective, technology will be the savior, despite the fact that most doctors, certainly clinicians, that you would talk to today almost look at this as a burden, and that needs to change, needs to move forward. >> Well, health care is a real challenge. I mean, obviously you have, you know, HIPAA considerations. It's a highly regulated industry. As you point out, docs have never really embraced technology in a big way. But now you've got machine-to-machine intelligence, you've got all kinds of embedded stuff in medical devices, and I think doctors are realizing that machines can actually help us make better diagnoses. It's an industry that's ripe for disruption, and it really hasn't been heavily disrupted yet. But it's coming, isn't it? >> It definitely is coming. And again, certainly at the hospital level, I think they're a little bit ahead of the game in terms of how they manage their resources, the data, the applications, down to the clinician level. You know, much like yourself, I'm sure, if I had an issue related to some kind of ailment or injury, a lot of us are not going to hospitals anymore. We're going to clinics, minute clinics, where we see our doctors in a lot of cases. Those facilities haven't necessarily benefited from technology refreshes over the last several years.
And so they're really ripe to come into the 21st century here, along with things like telemedicine. So you talk about it from the standpoint of a physician who has struggled with just an EHR application, which continues to be somewhat of a burden for a lot of folks. Now they've got compliance issues they need to worry about. They've got to be offering new services to their customers and their patients, like telemedicine, which creates even more issues on the back end from a data perspective, a storage perspective, and compliance. Accessibility and ease of use don't necessarily go together, right? It's a tough balance. And so I think that, you know, from an enforcement perspective, it's only really starting in the health care space, whereas the commercial markets, certainly the financial markets, have had no choice over the last 10 to 12 years but to really harden their facilities, their applications, and their access to data. This is a whole new challenge for the health care space to tackle going forward. >> So Versatile are experts at infrastructure and architecture, and architectures have obviously changed a lot over the past 25 years, right? You used to have an app, and you'd put down infrastructure; it might have been, you know, Unix or VMS or whatever it was. You'd build a hardened system around that, with security, and boom, there was your stovepipe. It worked. It was rock solid. How are architectures, you know, changing today? How would you describe today's architecture? >> Well, we do a lot of work with Hewlett Packard Enterprise. We've been a platinum partner of theirs for close to 20 years, and so we've certainly gotten very engaged with them on their product sets around how they manage data, certainly in the storage space around their Intelligent Data Platform, which makes a great deal of sense for us and for our customers. We do several things in terms of how we manage data. We also do private cloud hosting for medical application use.
Now they're on the network that are providing some level of data back somewhere. How do you manage all that? And I think with info site tools from from HP Enterprise in the storage side, you're starting to get some analysts that they're taking it a much more proactive look at what the infrastructure is doing. Potential issues where you can make intelligent changes to improve performance obviously keep things secure. Those kinds of technologies really are gonna be the I think that a bit of the hope for if you will, whether it's health care, commercial, the amount of one I t cost I t personnel, they're very expensive. Obviously those resource is. And so if you could get intelligent deployments of solution, she's like that, then it can kind of take a huge bird. Enough of the I T department. They could go about working on project worked to a to a man to a woman. All the customers that we work with always feel like they're spending too much time kind of managing their infrastructure on. I do think that we're finally getting to the point where we've got tools that can help us really do that and reduce the amount of effort and somewhat costs that goes into that. ONDA also allow those resource is to start to work in the more strategic projects for the company's right. You know where the activity should be spent trying to either improve patient care and the health care side improved profitability in the commercial space. This is really you know, this is groundbreaking kind of tools that we just haven't seen in this industry. >> Yeah, this is key. I mean, 10 years ago, people were afraid a lot of this automation, I often joke, but it's really not a joke. If your expertise is managing lungs, you probably want to rethink your career. And so but But again, 10 years ago, people were afraid that that the automation was gonna take their jobs. 
We think today they realized, Wow, this train of digital transformation is left the station, and they want to shift their activities from things that air, not adding value to the business to your point, things that are more strategic. So from an infrastructure standpoint, how are you helping customers? You achieve those outcomes? >> I think from our perspective, we take a very consultative approach, right? And often times I think sometimes you can't see the forest through the trees. And a lot of these organizations, right? The too busy in their day to day jobs, trying to manage the day to day efforts to actually take a strategic view of you know what I got here? How do I improve all this? What kinds of technology should I really be? looking at, I think it's almost impossible, right? You know, we had a lot of very high end engineers who a lot of cases, wouldn't be comfortable going to a small or medium business to spend their career there because it would be that only set of infrastructure they would set up and then manage right. It becomes boring for those guys. A lot of cases, a lot of the ways that we've been able to retain our talents because we're looking at noon challenges every day. New companies with new challenges for for, for their corporations, for their health care organizations to kind of understand one of the issues. How do we come up with some solutions? How to implement a phased approach to get them where they need to be? >> You're talking really about your partnership with HP Previously, HP What is it about that partnership that is unique? How do you guys differentiate in the marketplace on why HP? >> Well, I think for us it was an easy decision. You know, HP Enterprise has always been very partner friendly, which is important. We've worked together for about 20 years on dhe, certainly from a technology perspective and I think for our customers there's a bit of leapfrogging that goes all of all of these vendors, right? 
So to some degree of somebody might have the best d'oh gizmo for this year, and someone's gonna have something six months later. But there's consistency there. The strategic kind of view of of how they see the world unraveling and how we how we support I t going forward is really, I think, a notch above some of their competitors. I think hybrid is very important. Everybody you know, I mentioned early there, some certainly some companies that make sense that could really almost go completely club. But in most cases, it's just not possible several several certainly of our customer base. That is not gonna be comfortable ever to some degree putting everything in the cloud, but the ability to take advantage of the cloud and keep their their some of their I p, if you will locally to them make some sense. And so I think, you know, for for hybrid cloud in hybrid storage and compute HPD really got advance HB Well, >> in a lot of that to John, I think, is bringing the cloud operating model to your data wherever it exists, especially in health care. People aren't just gonna throw all the healthcare data into the cloud. I mean, there's so many issues they're not, not the least of which is. There's a lot of data on Prem that you just don't want to move into the clouds. Too expensive is too time consuming. So then to me and I look youto comment on this, a lot of that is around the simplicity of managing that infrastructure and three part kind of years ago said a gold standard on simplicity. And now Nimble comes in with a lot of intelligent automation. Your thoughts on being able to bring that cloud model to on Prem or in a hybrid situation, Is that a sort of valid way to think about? >> Oh, absolutely, I think it is. And I think again I go back to health care a little bit. But every 18 months there's storage requirements double on top of that because of compliancy issues, they have to hang on to the data indefinitely. 
I mean, that's gotta be a frightening aspect for any storage manager who's trying to manage Ah health care organization, a large health care organization. I need to hang on absolutely everything. Email all my files. It's not 10 years, 15 years, it's indefinitely. So that's a a major, a major undertaking in terms of Hattaway. Manage all that, right? So So H P certainly got an array of ways. Thio help with that, whether it's all flash right for the applications that require that kind of speed, this multi multi layers of storage of deployment, backup solutions, right and D r options that obviously a lot to take advantage of cloud where it makes a lot of sense. So there's a multitude of things that they need to think about on. I do believe HP is addressing those quite well. >> How are you changing the way in which you're hiring people today versus you know of 10 15 years ago? What's the skill set profile today? >> It really has changed and, you know, as we talked about earlier, we've been in business for 25 years, and and I think our ability to stay in business for that long has really been our ability to adapt and change on your right. You are hiring practices and who we hire is very different than it was maybe even five years ago. Where I've got to get cloud level architects involved. Expensive but very worthwhile resource is to be able to help customers with all of this. I do think what we get to deliver to our customers, the fact that we've got a multitude hundreds and hundreds of customers and experiences that go along with that that we could bring to the table it just couldn't possibly do in their own. It's quite impossible mission in the largest of the largest organizations. You're not going to expose the kinds of challenges in putting together kinds of solutions that gonna solve customers problems without doing that. So it's been quite a different higher than it has been in the past. >> My last question for you. 
Think of a healthcare use case or any any customer. So they're struggling. They've got, you know, everybody's got budget constraints. The market's moving super fast. You got this cloud thing coming, Adam The edge I ot you know, machine intelligence A. I a same time they they've got an existing business to runner and 80% of their time, and their investment is on keeping the lights on. We hear that all the time. What's your advice to the customer? I'm sure this is a common story. They want to go from point A to point B transformed their business. They don't want to go broke doing it. They might not have. The resource is so what do you D'oh, How would you advise them? >> Well, look, I think and we struggle like a lot of use. A lot of partners in this world in this country, right? Even in this region. And so trying to differentiate yourself. And we like to think that we're better than everybody else and so does the other two or 3000. Probably surrounded here in the 50 mile radius is really do need to find a trusted advisor that can help you through that. I think one of the places that we start there are there's opportunity to get some fairly immediate return on investment. I think that's important because to your point there were challenges, their their budget constraints. How am I gonna do all that? That those two things kind of go in two different directions. But there are many of our customers, really, Whether it's in health care and even the commercial side who may be doing some old things, some old I t. Things that could be replaced, including the cloud in terms of how they may be. They may be using an old disaster recovery of method, right that you're paying a lot of money for lease lines. It's really kind of a cold site, you know. They might go there once a year to try to see if they can recreate all their applications and get the thing up and running. There's clearly a cloud opportunity in there to save them. >> A lot of money >> reinvest that. 
Maybe not sit on idle equipment that obviously costs money is under some kind of maintenance, and you need to obviously resource to sport that. So I think that's a good conversation. When you guys get in with a customer and start to talk about Look, there's probably some areas here. We could save you money. So, yes, we're gonna charge you some money to get there. But the return on that is gonna be gonna be much better than where you want today. >> I love that answer. So look, look for quick hits. Try to demonstrate some some savings and generate some cash. If you will think like a business person, use that as a gain share approach. Maybe go to the CFO and say, Hey, if we can save this money can be reinvested in innovation. Drive more business value than you get that flywheel effect and you can build up credibility in your organization. And that's how you get from Point A to point B. Without going broke, he actually can make money for the organization that >> absolutely it's a very good point because, you know, we talked about earlier. You know, I t has been under constraint for quite a while, right? And so again, back to the ability for those people to think and have enough time to get into shitty strategic conversations all by themselves. It's difficult, if not impossible. So they need. They need help, They need consultants and they need trusted advisors. But obviously you need to prove your worth. I do think if you could start someplace where you can demonstrate Look, we could save you some real money here over the next year. 18 months, Two years is a great place to start. >> John, thanks so much for coming in and sharing your insights and best of luck out there. >> Well, thank you. I appreciate it very much. >> You're welcome. All right. Thank you for watching everybody. This is Dave Volante with the Cube. Will see you next time.
Dr Prakriteswar Santikary, ERT | MIT CDOIQ 2018
>> Live from the MIT campus in Cambridge, Massachusetts, it's the Cube, covering the 12th Annual MIT Chief Data Officer and Information Quality Symposium. Brought to you by SiliconANGLE Media. >> Welcome back to the Cube's coverage of MITCDOIQ here in Cambridge, Massachusetts. I'm your host, Rebecca Knight, along with my co-host, Peter Burris. We're joined by Dr. Santikary, he is the vice-president and chief data officer at ERT. Thanks so much for coming on the show. >> Thanks for inviting me. >> We're going to call you Santi, that's what you go by. So, start by telling our viewers a little bit about ERT. What you do, and what kind of products you deliver to clients. >> I'll be happy to do that. ERT is a clinical trial company; we are a global data and technology company that minimizes risks and uncertainties within clinical trials for our customers. Our customers are top pharma companies, biotechnology companies, medical device companies, and they trust us to run their clinical trials so that they can bring their life-saving drugs to the market on time, every time. So we have a huge responsibility in that regard, because they put their trust in us, so we serve as the custodians of their data and processes, along with the therapeutic experience that we bring to the table, as well as compliance-related expertise. So not only do we provide data and technology expertise, we also provide science expertise and regulatory expertise, and that's one of the reasons they trust us. We have also been around since 1977, over 40 years, so we have this collective wisdom that we have gathered over the years. And we have really earned that trust, because we deal with the safety and efficacy of drugs, and those are the two big components that help the FDA, or any regulatory authority for that matter, approve drugs. So we have a huge responsibility in this regard, as well.
In terms of products, as I said, we are on the safety and efficacy side of the clinical trial process, and as part of that we have multiple product lines. We have respiratory product lines, we have cardiac safety product lines, we have imaging. As you know, imaging is becoming more and more important for every clinical trial, particularly in the oncology space, to measure the growth of the tumor and that kind of thing. So we have a business that focuses exclusively on the imaging side. And then we have the data and analytics side of the house, because we provide real-time information about the trial itself, so that our customers can really measure risks and uncertainties before they become a problem.
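As an aside, that "real-time information" point is, at bottom, a consolidation job over systems that don't share data. A minimal sketch of the idea, with every feed name, field, and patient ID invented for illustration (this is not ERT's actual pipeline):

```python
# Hypothetical sketch: fold per-patient records from separate vendor feeds
# (EDC, lab, imaging, and so on) into one view, keyed by patient ID.
# Later feeds overwrite earlier ones when a field collides.
from collections import defaultdict

def merge_by_patient(*feeds):
    merged = defaultdict(dict)
    for feed in feeds:
        for record in feed:
            merged[record["patient_id"]].update(record)
    return dict(merged)

edc_feed = [{"patient_id": "P001", "site": "Boston", "enrolled": True}]
lab_feed = [{"patient_id": "P001", "hemoglobin": 13.2}]

combined = merge_by_patient(edc_feed, lab_feed)
print(combined["P001"])
```

Once there is one merged view per patient, "where is my data?" stops being the blocker for asking real-time risk questions.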
And in terms of data collection, there are lots of devices in the field, as you know. Wearables, mobile handhelds, so the data volume is a tremendous problem. And the vendors. Each pharmaceutical company uses many vendors to run its trials. CROs, the Clinical Research Organizations. They have EDC systems, they can have labs. You name it. So they outsource all of this to different vendors. Now, how do you coordinate them, and how do you get them to collaborate? And that's where the data plays a big role, because now the data is everywhere across different systems, and those systems don't talk to each other. So how do you really make real-time decisions when you don't know where your data is, and data is the primary ingredient that you use to make decisions? So that's where data and analytics, and bringing that data together in real time, is a very, very critical service that we provide to our customers. >> When you look at medicine, obviously, the whole notion of evidence-based medicine has been around for 15 years now, and it's becoming a seminal feature of how we think about the process of delivering medical services and ultimately paying it forward to everything else, and partly that's because doctors are scientists and they have an affinity for data. But if we think about going forward, it seems to me as though learning more about the genome and genomics is catalyzing additional need for, and additional understanding of, the role that drugs play in the human body, and it almost becomes an information problem. I don't want to say that a drug is software, but a drug is delivering something that, ultimately, is going to get known at a genomic level. So does that catalyze additional need for data? Is that changing the way we think about clinical trials? Especially when we think about, as you said, it's getting more complex, because we have to make sure that a drug has the desired effect with men and women, with people from here, people from there.
Are we going to push the data envelope even harder over the next few years? >> Oh, you bet. And that's where real world evidence is playing a big role. So, instead of patients coming to the clinical trials, the clinical trial is going to the patient. It is becoming more and more patient-centric. >> Interesting. >> And the early part of protocol design, for example, the study design, that is step one. So more and more, real world evidence data is being used to design the protocol, the very first stage of the clinical trial. Another thing that is pushing the envelope is artificial intelligence and other data mining techniques, which can now be used to really mine that data: the EMR data, prescription data, claims data. Those are real evidence data coming from real patients. So now you can use these artificial intelligence and machine learning techniques to mine that data and really design the protocol and the study design, instead of flipping through the EMR data manually. Then patient recruitment, for example: no patients, no trials, right? So gathering patients, and the right set of patients, is one of the big problems. It takes a lot of time to bring those patients in, and even more troublesome is to retain those patients over time. These, too, are big, big things that take a long time, and site selection as well. Which site is going to really be able to bring the right patients for the right trials? >> So, two quick comments on that. One of the things, when you say the patients: when someone has a chronic problem, a chronic disease, when they start to feel better as a consequence of taking the drug, they tend to not take the drug anymore. And that creates this ongoing cycle. But going back to what you're saying, does it also mean that clinical trial processes, because we can gather data more successfully over time... it used to be really segmented. We did the clinical trial and it stopped.
Then the drug went into production and maybe we caught some data. But now, because we can do a better job with data, the clinical trial concept can be sustained a little bit more. That data becomes even more valuable over time and we can add additional volumes of data back in, to improve the process. >> Is that shortening clinical trials? Tell us a little bit about that. >> Yes, as I said, it takes 10 to 15 years if we follow the current process: Phase One, Phase Two, Phase Three, and then post-marketing, that is Phase Four. I'm not taking the pre-clinical side of these trials into the picture. That's about 10 to 15 years, about $3 billion, kind of thing. So when you use these kinds of AI techniques and the real world evidence data and all this, the projection is that it will reduce the cycle by 60 to 70%. >> Wow. >> The whole study, beginning-to-end time. >> So from 15 down to four or five? >> Exactly. So think about it, there are two advantages. One is, obviously, you are creating efficiency within the system, and this drug industry, the drug discovery industry, is ripe for disruption. Because it has been using that same process over and over for a long time. It's like, it is working, so why fix it? But unfortunately, it's not working, because the health care cost has sky-rocketed. So these inefficiencies are going to get solved when we bring real world evidence into the mix. Real-time decision making. Risk analysis before risks become problems. Instead of spending one year recruiting patients, you use AI techniques to get to the right patients in minutes, so think about the efficiency again. And also the home monitoring, or mHealth, type of program, where the patients don't need to come to the clinical sites for check-ups anymore. You can wear wearables that are FDA regulated and approved, and then they're going to do all the work from within the comfort of their home. So think about that.
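A quick arithmetic check on the figures quoted in this exchange (a 15-year cycle cut by 60 to 70%):

```python
# Sanity check on the transcript's numbers: a 60-70% reduction of a
# 15-year cycle lands near the "four or five" years mentioned.
baseline_years = 15
remaining = {r: baseline_years * (1 - r) for r in (0.60, 0.70)}
for reduction, years in remaining.items():
    print(f"{reduction:.0%} reduction leaves {years:.1f} years")
```

So the two quoted claims are mutually consistent: the 70% end of the projection takes the cycle down to about four and a half years.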
And the other thing is very sick, terminally ill patients, for example. They don't have the time, nor do they have the energy, to come to the clinical site for check-ups, because every day is important to them. So this is the paradigm shift that is going on: instead of patients coming to the clinical trials, clinical trials are coming to the patients. And that shift, that's a paradigm shift, and it is happening because of these AI techniques. Blockchain. Precision Medicine is another one. You don't run a big clinical trial anymore. You go micro-trial, you group a small number of patients. You don't run a trial on breast cancer in general anymore, you say, breast cancer for these patients, so it's micro-trials. And that needs -- >> Well that can still be aggregated. >> Exactly. It still needs to be aggregated, but you can get the results quickly, so that you can decide whether you need to keep investing in that trial or not, instead of waiting 10 years, only to find out that your trial is going to fail. So you are wasting not only your time, but also preventing patients from getting the right medicine on time. So you have that responsibility, as a pharmaceutical company, as well. So yes, it is a paradigm shift, and this whole industry is ripe for disruption, and ERT is right at the center. We have not only data and technology experience, but, as I said, deep domain experience within the clinical domain, as well as regulatory and compliance experience. You need all of these to navigate the turbulent waters of clinical research. >> Revolutionary changes taking place. >> It is, and the satisfaction is, you are really helping the patients, you know? >> And helping the doctor. >> Helping the doctors. >> At the end of the day, the drug company does not supply the drug. >> Exactly. >> The doctor is prescribing, based on knowledge that she has about that patient and that drug and how they're going to work together.
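The recruitment point made a little earlier, using AI techniques to get to the right patients in minutes, reduces at its simplest to screening records against a trial's inclusion criteria. A toy sketch, with every field, threshold, and ID invented for illustration:

```python
# Toy eligibility screen: keep patients whose records satisfy every
# inclusion rule of a (made-up) trial protocol.
def eligible(patient, criteria):
    return all(rule(patient) for rule in criteria)

inclusion_criteria = [
    lambda p: 18 <= p["age"] <= 75,            # adult, under upper age limit
    lambda p: p["diagnosis"] == "type2_diabetes",
    lambda p: p["a1c"] >= 7.0,                 # disease not yet controlled
]

patients = [
    {"id": "P001", "age": 54, "diagnosis": "type2_diabetes", "a1c": 8.1},
    {"id": "P002", "age": 81, "diagnosis": "type2_diabetes", "a1c": 9.0},
    {"id": "P003", "age": 47, "diagnosis": "hypertension", "a1c": 5.5},
]

matches = [p["id"] for p in patients if eligible(p, inclusion_criteria)]
print(matches)
```

Real systems mine EMR and claims data at a completely different scale; this only shows the shape of the filter being automated.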
>> And one of the good statistics: in 2017, just last year, 60% of the FDA-approved drugs were supported through our platform. 60 percent. So there were, I think, 60 drugs approved, and I think 30 or 35 of them used our platform to run their clinical trials, so think about the satisfaction that we have. >> A job well done. >> Exactly. >> Well, thank you for coming on the show, Santi, it's been really great having you on. >> Thank you very much. >> Yes. >> Thank you. >> I'm Rebecca Knight. For Peter Burris, we will have more from MITCDOIQ, and the Cube's coverage of it, just after this. (techno music)
SUMMARY :
Brought to you by SiliconANGLE Media. Thanks so much for coming on the show. We're going to call you Santi, that's what you go by. and the therapeutic experience that you bring to the table the missteps that can happen And data is the primary ingredient that you use is that changing the way we think about clinical trials? patients coming to the clinical trials, So more and more the real world evidence data is being used One of the things, when you say the patients, Is that shortening clinical trials? and the real world evidence data and all this, and then, they're going to do all the work is rife for disruption and ERT is right at the center. It is and the satisfaction is, At the end of the day, and how they're going to work together. And out of the good statistics, Well, thank you for coming on the show Santi, and the Cube's coverage of it.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Frick | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Alan | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Adrian | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Paul | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Adrian Swinscoe | PERSON | 0.99+ |
Jeff Brewer | PERSON | 0.99+ |
MAN Energy Solutions | ORGANIZATION | 0.99+ |
2017 | DATE | 0.99+ |
Tony | PERSON | 0.99+ |
Shelly | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Volkswagen | ORGANIZATION | 0.99+ |
Tony Fergusson | PERSON | 0.99+ |
Pega | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Paul Greenberg | PERSON | 0.99+ |
James Hutton | PERSON | 0.99+ |
Shelly Kramer | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Rob Walker | PERSON | 0.99+ |
Dylan | PERSON | 0.99+ |
10 | QUANTITY | 0.99+ |
June 2019 | DATE | 0.99+ |
Corey Quinn | PERSON | 0.99+ |
Don | PERSON | 0.99+ |
Santikary | PERSON | 0.99+ |
Croom | PERSON | 0.99+ |
china | LOCATION | 0.99+ |
Tony Ferguson | PERSON | 0.99+ |
30 | QUANTITY | 0.99+ |
60 drugs | QUANTITY | 0.99+ |
roland cleo | PERSON | 0.99+ |
UK | LOCATION | 0.99+ |
Don Schuerman | PERSON | 0.99+ |
cal poly | ORGANIZATION | 0.99+ |
Santi | PERSON | 0.99+ |
1985 | DATE | 0.99+ |
Duncan Macdonald | PERSON | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
millions | QUANTITY | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
one year | QUANTITY | 0.99+ |
10 years | QUANTITY | 0.99+ |
Pegasystems | ORGANIZATION | 0.99+ |
80% | QUANTITY | 0.99+ |
Carol Carpenter, Google Cloud & Ayin Vala, Precision Medicine | Google Cloud Next 2018
>> Live from San Francisco, it's the Cube, covering Google Cloud Next 2018. Brought to you by Google Cloud and its ecosystem partners. >> Hello and welcome back to The Cube coverage here live in San Francisco for Google Cloud's conference Next 2018, #GoogleNext18. I'm John Furrier with Jeff Frick, my cohost all week. Third day of three days of wall to wall live coverage. Our next guest, Carol Carpenter, Vice President of Product Marketing for Google Cloud. And Ayin Vala, Chief Data Scientist for the Foundation for Precision Medicine. Welcome to The Cube, thanks for joining us. >> Thank you for having us. >> So congratulations, VP of Product Marketing. Great job getting all these announcements out, all these different products. Open source, BigQuery machine learning, Istio 1.0, I mean, all this, tons of products, congratulations. >> Thank you, thank you. It was a tremendous amount of work. Great team. >> So you guys are starting to show real progress in customer traction, customer scale. Google's always had great technology. On the consumption side of it, you guys have made progress. Diane Greene mentioned on stage, on day one, she mentioned health care. She mentioned how you guys are organizing around these verticals. Health care is one of the big areas. Precision Medicine, AI usage, tell us about your story. >> Yes, so we are a very small non-profit. We are at the intersection of data science and medical science, and we work on driving and developing projects that have non-profit and social impact in personalized medicine. >> So I think it's amazing. I always think with medicine, right, wherever you are, you look back five years and think, oh my god, that was completely barbaric, right. They used to bleed people out, and here, today, we still help cancer patients by basically poisoning them until they almost die, hoping it kills the cancer first.
You guys are looking at medicine in a very different way, and the future of medicine is so different than what it is today. And talk about, what is Precision Medicine? Just the descriptor, it's a very different approach to kind of some of the treatments that we still use today in 2018. It's crazy. >> Yes, so Precision Medicine has the meaning of personalized medicine, meaning that we hone in on smaller populations of people, trying to see what the driving factors are, individually customized to those populations, and find out the different variables that are important for that population of people for detection of the disease, you know, cancer, Alzheimer's, those things. >> Okay, talk about the news. Okay, go ahead. >> Oh, oh, I was just going to say. And to be able to do what he's doing requires a lot of computational power to be able to actually get that precise. >> Right. Talk about the relationship and the news you guys have here. Some interesting stuff. Non-profits, they need compute power, they need, just like an enterprise. You guys are bringing some change. What's the relationship between you guys? How are you working together? >> So one of our key messages here at this event is really around making computing available for everyone. Making data and analytics and machine learning available for everyone. This whole idea of human-centered AI. And what we've realized is, you know, data is the new natural resource >> Yeah. >> in the world these days. And companies that know how to take advantage and actually mine insights from the data, to solve problems like what they're solving at Precision Medicine, that is really where the new breakthroughs are going to come. So we announced a program here at the event, it's called Data Solutions for Change. It's from Google Cloud, and it's a program in addition to our other non-profit programs. So we actually have other programs like Google Earth for non-profits, G Suite for non-profits.
This one is very much focused on harnessing data and helping non-profits extract insights from it. >> And is it a funding program, is it technology transfer? Can you talk about, just a little detail on, how it actually works? >> It's actually a combination of three things. One is funding: it's credits for up to $5,000 a month, for up to six months. As well as customer support. One thing we've all talked about is, the technology is amazing, but you often also need to be able to apply some business logic around it, and data scientists are somewhat of a challenge to hire these days. >> Yeah. >> So we're also providing free customer support, as well as online learning. >> Talk about the impact of the Cloud technology for the non-profit, because I, you know, I'm seeing so much activity, certainly in Washington D.C. and around the world, where, you know, since the Jobs Act, funding has changed. You got great things happening. You can have mission-based funding. And also, the legacy of brands is changing, and open source changes things. So, faster time to value. (laughs) >> Right. >> And without all the, you know, expertise, it's an issue. How is Cloud helping you be better at what you do? Can you give some examples? >> Yes, so we had two different problems early on, as a small non-profit. First of all, we needed to scale up computationally. We had in-house servers. We needed a HIPAA compliant way to put our data up. So that's one of the reasons we were able to even use Google Cloud in the beginning. And now, we are able to run our models on entire data sets. Before that, we were only using a small population. And in Precision Medicine, that's very important, 'cause you want the entire population. That makes your models much more accurate. The second thing was, we wanted to collaborate with people with clinical research backgrounds. And we needed to provide a platform for them to be able to use, have the data on there, visualize, do computations, anything they want to do.
And being on the Cloud really helped us collaborate much more smoothly, and you know, we only need their Gmail access, you know, to give them access and things. >> Yeah. >> And we could do it very, very quickly. Whereas before, it would take us months to transfer data. >> Yeah, it's a huge savings. Talk about the machine learning. AutoML's hot at the show, obviously, hot trend. You start to see AI ops coming in and disrupting more of the enterprise side, but as data scientists, as you look at some of these machine learnings, I mean, you must get pretty excited. What are you thinking? What's your vision, and how are you going to use, like, BigQuery's got ML built in now. This is like not new, Google's been using it for a while. Are you tapping some of that? And what's your team doing with ML? >> Absolutely. We use BigQuery ML. We were able to use it a few months in advance. It's great, 'cause our data scientists like to work in BigQuery. You see, you know, you query the data right there. You can actually do the machine learning on there, too, and you don't have to send it to a different part of the platform for that. And it gives you sort of a proof of concept right away. For doing deep learning and those things we use Cloud ML still, but early on, you want to see if there is potential in the data, and you're able to do that very quickly with BigQuery ML right there. We also use AutoML Vision. We had access to about a thousand patients' MRI images, and we wanted to see if we can detect Alzheimer's based on those. And we used AutoML for that. Actually works well. >> Some of the relationships with doctors, they're not always seen as the most tech savvy. So now they are getting more so. As you do all this high-end, geeky stuff, you've got to push it out to an interface. Google's really user-centric philosophy with user interfaces is something it has always been kind of known for. Is that in Sheets, is that G Suite? How will you extend out the analysis and the interactions?
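For context on the BigQuery ML point above: the model is created with plain SQL, so the data never leaves the warehouse. The dataset, table, and column names below are invented for the sketch, and the client call is left commented out because it needs GCP credentials:

```python
# Hypothetical BigQuery ML sketch: training is a SQL statement run where
# the data already lives. All names here are made up for illustration.
create_model_sql = """
CREATE OR REPLACE MODEL `research.alzheimers_screen`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['diagnosis']) AS
SELECT age, apoe4_copies, hippocampal_volume, diagnosis
FROM `research.patient_features`
"""

print(create_model_sql.strip().splitlines()[0])

# With credentials configured, one call would launch the training job:
# from google.cloud import bigquery
# bigquery.Client().query(create_model_sql).result()
```

That "proof of concept right away" workflow is exactly this: one statement to train, then ML.EVALUATE and ML.PREDICT queries against the resulting model.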
How do you integrate into the edge workflow? You know? (laughs) >> So one thing I really appreciated about Google Cloud was that, it seems to me, it's built from the ground up for everyone to use. And the ease of access was very, very important to us, like I said. We have data scientists and statisticians and computer scientists onboard, but we needed a method and a platform that everybody can use. And through this program, they actually... You guys provide what's called Qwiklab, which is, you know, screenshots of how to spin up a virtual machine and things like that. That, you know, a couple of years ago, you had to run, you know, a few command lines, too many command lines, to get that. Now it's just a push of a button. So that's just... Makes it much easier to work with people with background and domain knowledge, and take away that 80% of the work that's just data engineering work that they don't want to do. >> That's awesome stuff. Well congratulations. Carol, a question to you is: how does someone get involved in the Data Solutions for Change? An application? Online? Referral? I mean, how do these work? >> All of the above. (John laughs) We do have an online application, and we welcome all non-profits to apply if they have a clear, objective data problem that they want to solve. We would love to be able to help them. >> Does scope matter, big size, is it more mission? What's the mission criteria? Is there a certain bar to reach, so to speak, or-- >> Yeah, I mean, we're most focused on... there really is not size, in terms of the size of the non-profit or the breadth. It's much more around, do you have a problem that data and analytics can actually address. >> Yeah. >> So really working on problems that matter. And in addition, we actually announced this week that we are partnering with United Nations on a contest. It's called Sustainable.. It's for Visualize 2030. >> Yeah. >> So there are 17 sustainable development goals.
>> Right, right. >> And so, that's aimed at college students, and storytelling to actually address one of these 17 areas. >> We'd love to follow up after the show, talk about some of the projects, since you have a lot of things going on. >> Yeah. >> Use of technology for good really is important right now, that people see that. People want to work for mission-driven organizations. >> Absolutely. >> This becomes a clear criteria. Thanks for coming on. Appreciate it. Thanks for coming on today. theCUBE coverage here at Google Cloud Next 18. I'm John Furrier with Jeff Frick. Stay with us. More coverage after this short break. (upbeat music)
SUMMARY :
Brought to you by Google Cloud Welcome to The Cube, thanks for joining us. So congratulations, VP of Product Marketing. It was a tremendous amount of work. So you guys are starting to show real progress And we work on driving and developing and you look back five years for that population of people for detection of the disease, Okay, talk about the news. And to be able to do what he's doing and the news you guys have here. And what we've realized is, you know, And companies that know how to take advantage Can you talk about, just a little detail You often also need to be able to apply So we're also proving free customer support, And also, the legacy of brand's are changing And without all the, you know, expertise So that's one of the reasons we And we could do it very, very quickly. and disrupt more of the enterprise side And you don't have to send it to different Some of the relationships with doctors, and take away that 80% of the work, Carol, a question to you is All of the above. It's much more around, do you have a problem And in addition, we actually announced this week and storytelling to actually address one of these 17 areas. since you have a lot of things going on. Use of technology for good really is important right now, Thanks for coming on today.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Frick | PERSON | 0.99+ |
Carol Carpenter | PERSON | 0.99+ |
Diane Green | PERSON | 0.99+ |
80% | QUANTITY | 0.99+ |
Ayin Vala | PERSON | 0.99+ |
United Nations | ORGANIZATION | 0.99+ |
Carol | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
San Francisco | LOCATION | 0.99+ |
Washington D.C. | LOCATION | 0.99+ |
Jeff Fricks | PERSON | 0.99+ |
Precision Medicine | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
five years | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
One | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
Jobs Act | TITLE | 0.99+ |
BigQuery | TITLE | 0.99+ |
G Suite | TITLE | 0.99+ |
2018 | DATE | 0.99+ |
17 areas | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Third day | QUANTITY | 0.98+ |
this week | DATE | 0.98+ |
AutoML | TITLE | 0.98+ |
Cloud ML | TITLE | 0.98+ |
up to six months | QUANTITY | 0.98+ |
First | QUANTITY | 0.97+ |
Gmail | TITLE | 0.97+ |
BigQuery ML | TITLE | 0.97+ |
second things | QUANTITY | 0.97+ |
17 sustainable development goals | QUANTITY | 0.96+ |
about a thousand patients | QUANTITY | 0.95+ |
three things | QUANTITY | 0.95+ |
Google Cloud | ORGANIZATION | 0.94+ |
two different problems | QUANTITY | 0.94+ |
Google Earth | TITLE | 0.93+ |
AutoML Vision | TITLE | 0.93+ |
The Cube | ORGANIZATION | 0.93+ |
ML | TITLE | 0.93+ |
Alzheimer | OTHER | 0.91+ |
up to $5,000 a month | QUANTITY | 0.91+ |
day one | QUANTITY | 0.87+ |
couple of years ago | DATE | 0.87+ |
Istio | PERSON | 0.87+ |
first | QUANTITY | 0.85+ |
Vice President | PERSON | 0.85+ |
Google Cloud | TITLE | 0.85+ |
BigQuery ML. | TITLE | 0.85+ |
Next 2018 | DATE | 0.84+ |
one thing | QUANTITY | 0.83+ |
Qwiklab | TITLE | 0.79+ |
2030 | TITLE | 0.78+ |
Cloud | TITLE | 0.76+ |
#GoogleNext18 | EVENT | 0.73+ |
HIPAA | TITLE | 0.72+ |
Data Science Foundation | ORGANIZATION | 0.72+ |
Next 18 | TITLE | 0.7+ |
Cube | ORGANIZATION | 0.67+ |
I'm John | TITLE | 0.64+ |
tons | QUANTITY | 0.64+ |
Next | DATE | 0.63+ |
Furrier | PERSON | 0.59+ |
messages | QUANTITY | 0.58+ |
David Floyer, Wikibon | Pure Storage Accelerate 2018
>> Narrator: Live from the Bill Graham Auditorium in San Francisco, it's theCUBE, covering Pure Storage Accelerate, 2018, brought to you by Pure Storage. >> Welcome back to theCUBE's coverage of Pure Storage Accelerate 2018. I'm Lisa Martin. Been here all day with Dave Vellante. We're joined by David Floyer now. Guys, really interesting, very informative day. We got to talk to a lot of Puritans, but also a breadth of customers, from Mercedes Formula One, to Simpson Strong-Tie, to UCLA's School of Medicine. Lots of impact that data is making in a diverse set of industries. Dave, you've been sitting here, with me, all day. What are some of the key takeaways that you have from today? >> Well, Pure's winning in the marketplace. I mean, Pure said, "We're not going to bump along. We're going to go for it. We're going to drive growth. We don't care if we lose money, early on." They bet that the street would reward that model, and it has. Kind of a little mini version of the Amazon model. Grow, grow, grow, worry about profits down the road. They're eking out a slight, little positive free cashflow, on a non-GAAP basis, so that's good. And they were first with All-Flash, really kind of early on. They kind of won that game. You heard David, today. The NVMe, the first with NVMe. No uplifts on pricing for NVMe. So everybody's going to follow that. They can do the Evergreen model. They can do these things and claim these things as, we were first. Of course, we know, David Floyer, you were first to make the call, back in 2008, (laughs) on Flash and the All-Flash data center, but Pure was right there with you. So they're winning in that respect. Their ecosystem is growing. But, you know, storage companies never really have this massive ecosystem that follows them. They really have to do integration. So that's, that's a good thing. So, you know, we're watching growth, we're watching continued execution.
It seems like they are betting that their product portfolio, their platform, can serve a lot of different workloads. And it's going to be interesting to see if they can get to two billion, the kind of, the next milestone. They hit a billion. Can they get to two billion with the existing sort of product portfolio and roadmap, or do they have to do M&A? >> David: You're right. >> That's one thing to watch. The other is, can Pure remain independent? David, you know well, we used to have this conversation, all the time, with the likes of David Scott, at 3PAR, and the guys at Compellent, Phil Soran and company. They weren't able, Frank Slootman at Data Domain, they weren't able to stay independent. They got taken out. They weren't pricey enough for the market not to buy them. They got bought out. You know, Pure, five billion dollar market cap, that's kind of rich for somebody to absorb. So it was kind of like NetApp. NetApp got too expensive to get acquired. So, can they achieve that next milestone, two billion? Can they get to five billion? The big difference-- >> Or is there any hiccup, on the way, which will-- >> Yeah, right, exactly. Well the other thing, too, is that, you know, NetApp's market was growing, pretty substantially, at the time, even though they got hit in the dot-com boom. The overall market for Pure isn't really growing. So they have to gain share in order to get to that two billion, three billion, five billion dollar mark. >> If you break the market into flash and non-flash, then they're in the much better half of the market. That one is still growing, from that perspective. >> Well, I kind of like to look at the Server SAN piece of it. I mean, they used this term, by Gartner, today, the something-accelerated, it's a new Gartner term, in 2018-- >> Shared Accelerated Storage. >> Shared Accelerated Storage. Gartner finally came up with a category for what we called Server SAN. I've been joking all day, Gartner has a better V.P. of naming than we do.
(chuckles) We're lookin' at Server SAN. I mean, I started, first talking about it, in 2009, thanks to your guidance. But that chart that you have that shows the sort of Server SAN, which is essentially Pure, right? It's the, it's not-- >> Yes. It's a little more software than Pure is. But Pure is an awful lot of software, yes. And showing it growing, at the expense of the other segments, you know. >> David: Particularly sad. >> Particularly sad. Very particularly sad. >> So they're really well positioned, from that standpoint. And, you know, the other thing, Lisa, that was really interesting, we heard from customers today, that they switched for simplicity. Okay, not a surprise. But they were relatively unhappy with some of their existing suppliers. >> Right. >> They got kind of crummy service from some of their existing suppliers. >> Right. >> Now these are, maybe, smaller companies. One customer called out SimpliVity, specifically. He said, "I loved 'em when they were an independent company, "now they're part of HPE, meh, "I don't get service like the way I used to." So, that's a sort of a warning sign and a concern. Maybe, you know, HPE's prioritizing the bigger customers, maybe the more profitable customers, but that can come back to bite you. >> Lisa: Right. >> So Pure, the point is, Pure has the luxury of being able to lose money, servicing, like crazy, those customers that might not be as profitable, and grow from its position of a smaller company, on up. >> Yeah, besides the Evergreen model and the simplicity being, resoundingly, drivers and benefits, that customers across, you know, from Formula One to medical schools, are having, you're right. The independence that Pure has currently is a selling factor for them. And it's also probably a big factor in retention. I mean, they've got a Net Promoter Score of over 83, which is extremely high. >> It's fantastic, isn't it?
I think there's only VMI, that I know of, that has an even higher one, but it's a very, very high score. >> It's very high. They added 300 new customers, last quarter alone, bringing their global customer count to over 4800. And that was a resounding benefit that we were hearing. They, no matter how small, if it's Mercedes Formula One or the Department of Revenue in Mississippi, they all feel important. They feel like they're supported. And that's really key for driving something like a Net Promoter Score. >> Pure has definitely benefited from, it's taken share from EMC. It did early on with VMAX and Symmetrix and VNX. We've seen Dell EMC's storage business, you know, decline. It probably has hit bottom, maybe it starts to grow again. When it starts to grow again, I think, even last quarter, its growth, in dollars, was probably the size of Pure. (chuckles) You know, so, but Pure has definitely benefited from stealing share. The flip side of all this, is when you talk to, you know, the CxOs, the big customers, they're doing these big digital transformations. They're not buying products, you know, they're buying transformations. They're buying sets of services. They're buying relationships, and big companies like Dell and IBM and HPE, who have large services arms, can vie for certain business that Pure, necessarily, can't. So, they've got the advantage of being smaller, nimbler, best-of-breed product, but they don't have this huge portfolio of capabilities that gives them a seat at the CxO table. And you saw that, today. Charlie Giancarlo, his talk, he's a techie. The guys here, Kix, Hat, they're techies. They're hardcore storage guys. They love storage. It reminds me of the early days of EMC, you know, it's-- >> David: Or NetApp. >> Yeah. Yeah, or NetApp, right. They're really focused on that. So there's plenty of market for them, right now. But I wonder, David, if you could talk about, sort of architecturally, people used to criticize the two controller, you know, approach.
It obviously seems to be doing very well. People take shots at their, the Evergreen model, saying "Oh, we can do that too." But, again, Pure was first. Architecturally, what's your assessment of Pure? >> So, the Evergreen, I think, is excellent. They've gone about that well. I think, from a straightforward architecture, they kept it very simple. They made a couple of slightly odd decisions. They went with their own NAND chips, putting them into their own stuff, which made them much smaller, much more compact, completely in charge of the storage stack. And that was a very important choice they made, and it's come out well for them. I have a feeling. My own view is that M.2 is actually going to be the form factor of the future, not the SSD. The SSD just fitted into a hard disk slot. That was its only benefit. So, when that comes along, and the NAND vendors want to increase the value that they get from these stacks, etc., I'm a little bit nervous about that. But, having said that, they can convert back. >> Yeah, I mean, that seems like something they could respond to, right? >> Yeah, absolutely. >> I was at the Micron financial analysts' meeting, this week. And a lot of people were expecting that, you know, the memory business has always been very cyclical, it's like the disk drive business. But, it looks like, because of the huge capital expenses required, it looks like supply, looks like they've got a good handle on supply. Micron made a good strong case to the street that, you know, the pricing is probably going to stay pretty favorable for them. So, I don't know what your thoughts are on that, but that could be a little bit of a headwind for some of the systems suppliers.
(chuckles) >> The normal marketplace, for any of that, is to go through this series of S-curves. As you reach a certain point of volume, and 3D NAND has reached that point, it will go down, inevitably, and then QLC comes in, and then that will go down, again, through that curve. So, I don't see the marketplace changing. I also think that there's plenty of room in the marketplace for enterprise, because the biggest majority of NAND production is for consumer, 80% goes to consumer. So there's plenty of space, in the marketplace, for enterprise to grow. >> But clearly, the prices have not come down as fast as expected because of supply constraints. And the way in which companies like Pure have competed with spinning disks is through excellent data reduction algorithms, right? >> Yes. >> So, at one point, you had predicted there would be a crossover between the cost per bit of flash and spinning disk. Has that crossover occurred, or-- >> Well, I added in the concept of sharing. >> Raw. >> Yeah, raw. But, added in the cost of sharing, the cost-benefit of sharing, and one of the things that really impresses me is their focus on sharing, which is to be able to share that data, for multiple workloads, in one place. And that's excellent technology, they have. And they're extending that from snapshots to cloud snaps, as well. >> Right. >> And I understand that benefit, but from a pure cost per bit standpoint, the crossover hasn't occurred? >> Oh no. No, they're never going to. I don't think they'll ever get to that. The second that happens, disks will just disappear, completely. >> Gosh, guys, I wish we had more time to wrap things up, but thanks, so much, Dave, for joining me all day-- >> Pleasure, Lisa. >> And sporting The Who to my Prince symbol. >> Awesome. >> David, thanks for joining us in the wrap. We appreciate you watching theCUBE, from Pure Storage Accelerate, 2018. I'm Lisa Martin, for Dave and David, thanks for watching.
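A quick back-of-the-envelope model makes David's cost-per-bit point concrete: on raw cost per gigabyte flash stays above spinning disk, and the crossover argument rests on data reduction and sharing. A minimal sketch, with every dollar figure and ratio an illustrative assumption (none of these numbers come from the conversation):

```python
def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Cost per logical GB once dedup/compression/sharing are factored in."""
    return raw_cost_per_gb / reduction_ratio

def breakeven_reduction_ratio(flash_raw: float, disk_effective: float) -> float:
    """Data-reduction ratio at which effective flash cost matches disk."""
    return flash_raw / disk_effective

# Assumed 2018-era figures: raw flash ~$0.40/GB, raw disk ~$0.04/GB,
# with disk reducing poorly (1.25:1) for shared mixed workloads.
disk = effective_cost_per_gb(0.04, 1.25)
print(f"flash needs roughly {breakeven_reduction_ratio(0.40, disk):.1f}:1 reduction to cross over")
```

On these assumed numbers, flash needs roughly a 12.5:1 combined reduction-and-sharing benefit to match disk per logical gigabyte, which is consistent with David's view that a raw cost-per-bit crossover never happens even as the effective gap closes.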
Ben Nathan, David Geffen School of Medicine at UCLA | Pure Storage Accelerate 2018
>> Narrator: Live from the Bill Graham Auditorium in San Francisco. It's the Cube. Covering Pure Storage Accelerate 2018. Brought to you by Pure Storage. >> Welcome back to Pure Storage Accelerate 2018. I'm Lisa Martin with the Cube. I'm with Dave Vellante. We are here in San Francisco at the Bill Graham Civic Auditorium which is why we're sporting some concert t-shirts. >> Who. >> The Who and the Clong. >> Roger. Roger Daltrey. >> Roger. We are here with the CIO of the David Geffen School of Medicine at UCLA, Pure customer, Ben Nathan. Ben, welcome to the Cube. >> Thanks for having me. >> So, talk to us about the school of medicine at UCLA. You are the CIO there, you've been there for about three years. Give us a little bit of the 10,000 foot view of what your organization looks like to support the school of medicine.
And, that means that when we're educating students it's so they can be great providers of patient care. When we're doing research, When we're doing that research in an effort to eradicate disease et cetera. And, when we're doing community outreach it's also around improving health and peoples lives, so, in IT, we try to stay very connected to those missions. I think it's a large part of what drives people to be a part of an organization that's healthcare or that's a provider. That mission is really, really important. So, yes. We're serving all four of those things at once. >> So, you had lots of silos, lots of data, that's all continuing to grow but, this is data that literally life and death decisions can be made on this. Talk to us about the volumes of data, all the different sources that are generating data. People, sensors, things and how did you make this decision to consolidate leveraging Pure Storage as that foundation? >> Yeah, there's and incredible amount of work going on at UCLA. Particularly in their research education and patient care spaces. We had every brand of server in storage that you've never heard of. Things bought at lowest, bitter methods but, the technical data that we had incurred as part of that was enormous. Right, it's unsustainable. It's unsupportable. It's insecure-able. When I got there and we started to think about how do we deal with all of this? We knew we had an opportunity to green field an infrastructure and consolidate everything onto it. That was the first, that was started us down the road that led us to Pure as one of our major storage vendors. I had worked with them before but, they won on their merits, right? We do these very rigorous RFP processes when we buy things. The thing that really, I think, got them the the victory is us is that the deduplication of data got us to something like an eight to one ratio of virtual to physical. So, we get a lot of virtual servers running on relatively small amount of storage. 
And, that it's encrypted you know, sort of the time, right? There's not like a switch you might flip or something a vendor says they'll do but it >> Always on. >> doesn't really do, it is always on. And, it's critical for us. We're really building a far more secure and manageable set of services and so all the vendors we work with meet that criteria. >> So, is as a CIO, I would imagine you don't want to wake up every day and think of storage. With all due respect to our friends at Pure. >> That's true. >> So, has bringing it in for infrastructure in, like Pure, that prides itself on simplicity, allowed you to do the things that you really want to do and need to do for your organization? >> Yeah. I'll give you a two part answer. I mean one is simply, I think, it's operationally a really great service. I think that it's well designed, and run, and managed. And, we get great use of out it. I think the thing that makes it so that I don't have to think about it is actually, the business model that they have. So, the fact that I know that it's not going to really obsolete on its own, as long as you're like in the support model, you're upgrading the system every few years, changes, you know the, model for me, 'cause I don't have to think about these new, massive capitalization efforts, it's more of a predictable operational costs and that helps me sleep well because I know what we look like over the next few years and I can explain that to my financial organization. >> Just a follow up on that, a large incumbent storage supplier or system vendor might say, "Well, we can make that transparent to you. We can use our financial services to hide that complexity or make a cloud-like rental experience or you know, play financial games to hide that. Why does that not suffice for you? >> Well, I think, first and foremost we sort of want to run our financials on our own and we're pretty anxious about having anyone else in the middle of all that. 
Number two is it seems to me different in terms of Pure having built that model from the ground up as part of their service offerings. So, I don't think we see that with too many other vendors and I think that obviously there's far less technical debt than what I had in the previous design, but it can still add up if you're not careful about whatever mechanism you have in place, et cetera. >> But, it eliminates the forklift upgrade, right. Even with those financial incentives or tricks, you still got to forklift it and it's a disruption to your operation. >> Yeah, and I'm sure that's true, yeah. >> So, when you guys were back a year and a half or so, maybe two years ago, looking at this consolidation, where were your thoughts in terms of beyond consolidation and looking at being able to harness the power of AI, for example, we heard a lot of AI today already and this idea that legacy infrastructures are insufficient to support that. Was that also part of your plan, not simply to consolidate and bring your environment onto Pure storage but also to leverage a modern platform that can allow you to harness the power of AI? >> Yeah. That was sort of the later phase bonus period that we're starting to enter now. So, after we sort of consolidate and secure everything, now, we can actually do far more interesting things that would've been much more difficult before. And, in terms of Pure, when we had set out to do this we imagined doing a lot of our analytics and AI machine learning kind of cloud only and we tried that. We're doing a lot of really great things in the cloud but not all of it makes sense in that environment. Either from a cost perspective or from a capabilities perspective. Particularly with what Pure has been announcing lately, I think there's a really good opportunity for us to build high performance computing clusters in our on premise environment that leverage Pure as a potential storage back end.
And that's where our really interesting data goes. We can do the analytics or the AI machine learning on the data that's in our electronic medical record or in our genomics workflows; things like that can all flow through a service like that and there's some interesting discoveries that ought to come from it. >> There's a lot of talk at this event about artificial intelligence, machine intelligence, how do you see AI in health care, generally? And specifically, how are you going to apply it? Is it helping doctors with diagnosis? Is it maybe maintaining better compliance? Or, talk about that a little. >> I think there's two things that I can think of off the top of my head. The first is decision support. So this is helping physicians when they're working directly with patients; there's only, there's so many systems, so many data sets, so many ways to analyze, and yet getting it all in front of them in some kind of real time way so that they can use it effectively is tricky. So, AI, machine learning, have a chance to help us funnel that into something that's immediately useful in the moment. And then the other thing that we're seeing is that most of the research on genomics and the outcomes that have resulted in changes to clinical care are around individualized mutations in a single nucleotide, so those are, I guess, quote, relatively easy for a researcher to pick out. There's a letter here that is normally a different letter. But, there are other scenarios where there's not a direct easy tie from a single mutation to an outcome.
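Ben's "a letter here that is normally a different letter" case is simple enough to sketch in a few lines. A toy illustration with made-up sequences (real variant calling works from aligned sequencing reads and is far more involved):

```python
def point_differences(reference: str, sample: str):
    """Positions where two aligned, equal-length sequences differ by a single base."""
    if len(reference) != len(sample):
        raise ValueError("sequences must be aligned to the same length")
    return [(i, ref, alt)
            for i, (ref, alt) in enumerate(zip(reference, sample))
            if ref != alt]

# Toy example: one substitution at position 3 (G -> T).
print(point_differences("ACGGTA", "ACGTTA"))  # [(3, 'G', 'T')]
```

The harder cases he describes next, where no single position explains the outcome, are exactly where this kind of direct comparison stops working and machine learning takes over.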
So, like in autism or diabetes, we're not sure what the genetic components are but we think that with AI machine learning, those things will start to identify patterns in genomic sequences that humans aren't finding with their typical approaches and so, we're really excited to see our genomic platforms built up to a point where they have sequences in them to do that sort of analysis and you need big compute, fast storage to do that kind of thing. >> How is it going to help, the big compute, fast storage, this modern infrastructure, help whether it's genomics or clinicians be able to sort through massive amounts of data to try to find those needles in the haystack? 'Cause I think the stat this morning that Charlie Giancarlo mentioned was that half a percent of data in the world is analyzed. So, how is that under-the-hood infrastructure going to help facilitate your smart folks getting those needles in the haystack just to start really making big impacts? >> UCLA has an incredible faculty, like brilliant researchers, and sometimes what I've found since I've gotten there, the only ingredient that's missing is the platform where they can do some of this stuff. So, some of them are incredibly enterprising, they've built their own platforms for their own analysis. Others we work with have a lot of data sets but they don't have a place to put them where they can properly interrelate them and, you know, apply their algorithms at scale. So, we've run into people that are trying to do these massive analyses on a laptop or a little computer or whatever and it just fails, right? Or it runs forever. So, giving them, providing a way to have the infrastructure that they can run these things on is really the ingredient that we're trying to add and so, that's about storage and compute, et cetera. >> How do you see the role of the CIO evolving?
We hear a lot of people on the Cube and at these conferences talk about digital transformation and the digital CIO, how much of that is permeating your organization and what do you think it means to the CIO world going forward? >> I wish I knew the real answer to that question. I don't know, time will tell. But, I think that certainly we're trying to follow the trends that we see more broadly, which is, there's a job of keeping the lights on, of operations, and if you're not doing that really well, you shouldn't have a seat at any other table, and so those things have to be quite excellent. >> Table stakes. >> Yeah. Right. Exactly, table stakes. Security, all that stuff. Once you've got that, you know, my belief is you need to deeply understand the business and find your way into helping to solve problems for it and so, you know, in our realm, a lot of that these days is how do we understand the student journey from prior to, from when they maybe want to apply all the way 'til when they go out and become a resident and then a physician. There's a ton of data that's gathered along that way. We got to ask a lot of questions we don't have easy answers to but, if we put the data together properly, we start to, right? On the research side, same sort of idea, right? Where the more we know about the particular clinical outcomes they're trying to achieve or even just basic science research that they're looking into, the better we can micro-target a solution to them. Whether it's an on-prem private cloud or public cloud, either one of those can be harnessed for really specific workloads and I think when we start to do that, we've enabled our faculty to do things that have been tougher for them to do before. Once we understand the business in those ways I think we really start to have an impact at the strategic level of the organization. >> You've got this centralized services model that was a strategic initiative that you put in place.
You've got the foundation there that's going to allow you to start opening up other opportunities. I'm curious, in the UCLA system, maybe the UC system, are there other organizations or schools that are looking at what you're doing as a model to maybe replicate across the system? >> I think, I don't know about a model. I think there's certainly efforts among some to find, to centralize at least some services because of economies of scale or security or all the normal things. And then anticipating that that could ultimately provide more value once the baseline stuff is out of the way. UC is a vast and varied system so there's a lot of amazing things going on in different realms and we're, I think, doing more than ever working together and trying to find common solutions to problems. So, we'll see whose model works out. >> Well, Ben. Thanks so much for stopping by the Cube and sharing the impact that you're making at the UCLA School of Medicine, leveraging storage and all the different capabilities that that is generating. We thank you for your time. >> Thanks so much for having me. >> We want to thank you for watching the Cube. I'm Lisa Martin with Dave Vellante. We are live at Pure Accelerate 2018 in San Francisco. Stick around, we'll be right back with our next guest.
Kickoff | Pure Storage Accelerate 2018
>> Announcer: Live from the Bill Graham Auditorium in San Francisco, it's theCUBE covering Pure Storage Accelerate 2018, brought to you by Pure Storage. (bright music) >> Welcome to theCUBE. We are live at Pure Storage Accelerate 2018. I'm Lisa Martin also known as Prince for today with Dave Vellante. We're at the Bill Graham Civic Auditorium, really cool, unique venue. Dave, you've been following Pure for a long time. Today's May 23rd, they just announced FY19 Q1 earnings a couple days ago. Revenue up 40% year over year, added 300 new customers this last quarter including the Department of Energy, Paige.ai, bringing their customer tally now up to about 4800. We just came from the keynote. What are some of the things that you've observed over the last few years of following Pure that excite you about today? >> Well Lisa, Pure's always been a company that is trying to differentiate itself from the pack, the pack largely being EMC at the time. And what Pure talked about today, Matt Kixmoeller talked about, that in 2009, if you go back there, Fusion-io was all the rage, and they were going after the tip of the pyramid, and everybody saw flash, as he said, his words, as the tip of the pyramid. Now of course back then David Floyer in 2008 called that flash was going to change the world, that it was going to dominate. He'd forecast that flash was going to be cheaper than disk over the long term, and that is playing out in many market segments. So he was one of the few that didn't fall into that trap. But the point is that Pure has always said, "We're going to make flash cheaper than "or as cheap as spinning disk, "and we're going to drive performance, "and we're going to differentiate from the market, "and we're going to be first." And you heard that today with this company. This company has accelerated to a billion dollars, the first company to hit a billion dollars since NetApp. Eight years ago I questioned if any company would do that.
If you look at the companies that exited the storage market, that entered and exited the storage market that supposedly hit escape velocity, 10 years ago it was 3PAR hit $250 million. Isilon, Data Domain, Compellent, these companies sold for between $1 and $2.5 billion. None of them hit a billion dollars. Pure is the first to do that. Nutanix, which is really not a storage company, they're hyper-converged infrastructure, they got networking and compute, sort of, hit a billion, but Pure is the first pure play, no pun intended, storage company to do that. They've got a $5 billion valuation. They're growing, as you said, at 40% a year. They just announced their earnings; they beat. But the street reacted poorly because it interpreted their guidance as lower. Now Pure will say that we know we raised (laughs) our guidance, but they're lowering the guidance in terms of growth rates. So that freaks the street out. I personally think it's pure conservatism and I think that they'll continue to beat those expectations, so the stock's going to take a hit. They say, "Okay, if you want to guide lower growth, "you're going to take the hit," and I think that's a smart play by Pure because if and when they beat they'll get that updraft. But so that's what you saw today. They're finally free cash flow positive. They've got about a billion dollars in cash on the balance sheet. Now half a billion of that was from a convertible note that they just did, so it's really not coming from a ton of free cash flow, but they've hit that milestone. Now the last point I want to make, Lisa, and we talked about this, is Pure Storage is growing at 40% a year, and it's like Amazon: they can grow even though they make a small profit. The stock price keeps going up. Pure has experienced that. You're certainly seeing that with companies like Workday, certainly Salesforce and its ascendancy, ServiceNow and its ascendancy. These companies are all about growth. The street is rewarding growth.
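The growth math behind the billion-to-two-billion question is worth a quick check: at 40% a year, revenue doubles in roughly two years. A minimal sketch of the compounding (the 40% rate and the $1B/$2B milestones come from the conversation; the function itself is just compound-growth arithmetic):

```python
import math

def years_to_reach(current: float, target: float, annual_growth: float) -> float:
    """Years for revenue compounding at annual_growth to grow from current to target."""
    return math.log(target / current) / math.log(1.0 + annual_growth)

# From the ~$1B run rate to the $2B milestone at 40% year-over-year growth.
print(f"{years_to_reach(1.0, 2.0, 0.40):.1f} years")  # ~2.1 years
```

The same arithmetic shows why guidance matters so much to the street: shaving the growth rate to 30% stretches that doubling to about 2.6 years, pushing every milestone out.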
Very hard for a company like IBM or HPE or EMC when it was public, when they're not growing to actually have the stock price continue to rise even though they're throwing off way more cash than a company like Pure. >> Also today we saw for the first time the new CEO's been Charlie Giancarlo, been the CEO since August of 2017, sort of did a little introduction to himself, and they talked about going all in on shared accelerated storage, this category that Gartner's created. Big, big focus there. >> Yeah, so it's interesting. When I look at so-called shared accelerated storage it's 2018, Gartner finally came up with a new category. Again, I got to give credit to the Wikibon guys. I think David Floyer in 2009 created the category. He called it Server SAN. You don't know if that's David, but I think maybe shared accelerated storage's a better name. Maybe Gartner has a better V.P. of Naming than they do at Wikibon, but he forecast this notion of Server SAN which really it's not DAS, it's not SAN, it's this new class of accelerated storage that's flash-based, that's NVMe-based, eliminates the horrible storage stack. It's exactly what Pure was talking about. Again, Floyer forecast that in 2009, and if you look at the charts that he produced back then it looks like you see the market like this going shoom, the existing market and the new market just exploding. So Pure, I think, is right on. They're targeting that wide market. Now what they announced today is this notion of their flash array for all workloads, bringing NVMe to virtually their entire portfolio. So they're aiming their platform at the big market. Remember, Pure's ascendancy to a billion really came at the expense of EMC's VMAX and VNX business. They aimed at that and they hit it hard. They positioned flash relative to EMC's either spinning disk or flash-based systems as better, easier, cheaper, et cetera, et cetera, and they won that battle even though they were small. 
Pure's a billion, EMC at the time was $23, $24 billion, but they gained share very rapidly when you see the numbers. So what they're doing is basically staking a claim, Lisa, saying, "We can point our platform "at the entire $30, $40, $50 billion storage TAM," and their intention, we're going to ask Charlie Giancarlo and company, their aspiration is to really continue to gain share in that marketplace and grow significantly faster than the overall market. >> So they also talked about the data-centric architecture today and gave some great examples of customers. I loved the Domino's Pizza example that they talked about, I think he was here last year, and how they're actually using AI at Domino's to analyze the phone calls using this AI engine to identify accurate order information and get you your pizza as quickly as you want. So not only do we have pizza but we were showered with confetti. Lot of momentum there. What is your opinion of Pure, what they're doing to enable companies to utilize and maximize AI-based applications with this data-centric architecture? >> So Pure started in what's called block storage, really going after the high-volume, the transaction OLTP business. In the early days of Pure you'd see them at Oracle OpenWorld. That's where the high-volume transactions are taking place. They were the first really, by my recollection, to do file-based flash storage. Back in the day, you would buy EMC for block, you'd buy NetApp for file. What Pure did is said, "Okay, let's go after "the biggest market player, EMC, "which we'll gain share there in block, "and then now let's go after NetApp space and file." They were again the first to do that. And now they're extending that to AI. Now AI is a small but growing market, so they want to be the infrastructure for artificial intelligence and machine intelligence. They've struck a partnership with Nvidia, they're using the example of Domino's.
It's clearly not a majority of their business today, but they're doing some clever things in marketing, getting ahead of the game. This is Pure's game. Be first, get out in the lead, market it hard, and then let everybody else look like they're following which essentially they are and then claim leadership position. So they are able to punch above their weight class by doing that, and that's what you're seeing with the Domino's example. >> You think they're setting the bar? >> Do I think they're setting the bar? Yeah, in many respects they are because they are forcing these larger incumbents to respond and react because they're in virtually all accounts now. The IT practitioners, they look at the Gartner Magic Quadrant, who's in the upper right, I got to call them in for the RFP. They get a seat at that table. I would say it was interesting hearing Charlie speak today and the rest of the executives. These guys are hardcore storage geeks, and I mean that with all due respect. They love storage. It kind of reminds me of the early days of EMC. They are into this stuff. Their messaging is really toward that storage practitioner, that administrator. They're below the line but those are the guys that are actually making the decisions and affecting transactions. They're touching above the line with AI messages and data growth and things like that, but it's really not a hardcore CIO, CFO, CEO message yet. I think that will come later. They see a big enough market selling to those IT practitioners. So I think they are setting the bar in that IT space, I do. >> One of the things I thought that they did well is kind of position the power of data where, you know people talk about data as fuel. Data's really a business catalyst that needs to be analyzed across multiple areas of a business simultaneously to really be able to extract value. They talked about the gold rush, oh gee, of 1849 and now kind of in this new gold rush enabling IT with the tools. 
And interestingly they also talked about a survey that they did with the C-suite, who really believe that analyzing data is going to be key to driving businesses forward, identifying new business models, new products, new services. Conversely, IT is concerned: do we have the right tools to actually be able to evaluate all of this data to extract the value from it? Because if you can't extract the value from the data, it's not useful. >> Yeah, and I think again, I mean, we give Pure great marketing, and a lot of what they're doing, (laughs) it's technology, it's off-the-shelf technology, it's open source components. So what's their differentiation? Their differentiation is clearly their software. Pure has done a great job of simplifying the experience for the customer, no question, much in the same way that 3PAR did 10 or 15 years ago. They've clearly set the bar on simplicity, so check. The other piece that they've done really well is marketing, and marketing is how companies differentiate (laughs) today. There's no question about it that they've done a great job of that. Now having said that I don't think, Lisa, that storage, I think storage is going to be table stakes for AI. Storage infrastructure for AI is going to have to be there, and they talked about the gold rush of 1849. The guys who made all the money were the guys with the picks and the axes and the shovels supplying them, and that's really what Pure Storage is. They're an infrastructure company. They're providing the pickaxes and the shovels and the basic tools to build on top of that AI infrastructure. But the real challenges of AI are where do I apply and how do I infuse it into applications, how do I get ROI, and then how do I actually have a data model where I can apply machine intelligence and how do I get the skillsets applied to that data? So is Pure playing a fundamental catalyst to that?
Yes, in the sense that I need good, fast, reliable, simple-to-use storage so that I don't have to waste a bunch of time provisioning LUNs and doing all kinds of heavy lifting that's nondifferentiated. But I do see that as table stakes in the AI game, but that's the game that Pure has to play. They are an infrastructure company. They're not shy about it, and it's a great business for them because it's a huge market where they're gaining share. >> Partners are also key for them. There's a global partner summit going on. We're going to be speaking, you mentioned Nvidia. We're going to be talking with them. They also announced the AIRI Mini today. I got to get a look at that box. It looks pretty blinged out. (laughing) So we're going to be having conversations with partners from Nvidia, from Cisco as well, and they have a really diverse customer base. We've got Mercedes-AMG Petronas Motorsport Formula One, we've got UCLA on the CIO of UCLA Medicine. So that diversity is really interesting to see how data is being, value, rather, from data is being extracted and applied to solve so many different challenges whether it's hitting a race car around a track at 200 kilometers an hour to being able to extract value out of data to advance health care. They talked about Paige.ai, a new customer that they added in Q1 of FY19 who was able to take analog cancer pathology looking at slides and digitize that to advance cancer research. So a really cool kind of variety of use cases we're going to see on this show today. >> Yeah, I think, so a couple thoughts there. One is this, again I keep coming back to Pure's marketing. When you talk to customers, they cite, as I said before, the simplicity. Pure's also done a really clever thing and not a trivial thing with regard to their Evergreen model. So what that means is you can add capacity and upgrade your software and move to the next generation nondisruptively. Why is this a big deal? 
For decades you would have to actually shut down the storage array, have planned downtime to do an upgrade. It was a disaster for the business. Oftentimes it turned into a disaster because you couldn't really test or if you didn't test properly and then you tried to go live you would actually lose application availability or worse, you'd lose data. So Pure solved that problem with its Evergreen model and its software capability. So its simplicity, the Evergreen model. Now the reality is typically you don't have to bring in new controllers but you probably should to upgrade the power, so there are some nuances there. If you're mixing and matching different types of devices in terms of protocols there's not really tiering, so there's some nuances there. But again it's both great marketing and it simplifies the customer experience to know that I can go back to serial number 00001 and actually have an Evergreen upgrade is very compelling for customers. And again Pure was one of the first if not the first to put that stake in the ground. Here's how I know it's working, because their competitors all complain about it. When the competitors are complaining, "Wow, Pure Storage, they're just doing X, Y, and Z, "and we can do that too," and it's like, "Hey, look at me, look at me! "I do that too!" And Pure tends to get out in front so that they can point and say, "That's everybody following us, we're the leader." And that resonates with customers. >> It does, in fact. And before we wrap things up here a lot of the customer use cases that I read in prepping for this show all talked about this simplicity, how it simplified the portability, the Evergreen model, to make things much easier to eliminate downtime so that the business can keep running as expected. So we have a variety of use cases, a variety of Puritans on the program today as well as partners who are going to be probably articulating that value. >> You know what, I really didn't address the partner issue. 
Again, having a platform that's API-friendly, that's simple makes it easier to bring in partners, to integrate into new environments. We heard today about integration with Red Hat. I think they took AIRI. I think Cisco's a part of that partnership. Obviously the Nvidia stuff which was kind of rushed together at the last minute and had got it in before the big Nvidia customer show, but they, again, they were the first. Really made competitors mad. "Oh, we can do that too, it's no big deal." Well, it is a big deal from the standpoint of Pure was first, right? There's value in being first and from a standpoint of brand and mindshare. And if it's easier for you to integrate with partners like Cisco and other go-to-market partners like the backup guys you see, Cohesity and Veeam and guys like Catalogic are here. If it's easier to integrate you're going to have more integration partners and the go-to-market is going to be more facile, and that's where a lot of the friction is today, especially in the channel. >> The last thing I'll end with is we got a rain of confetti on us during the main general session today. The culture of Pure is one that is pervasive. You feel it when you walk into a Pure event. The Puritans are very proud of what they've done, of how they're enabling so many, 4800+ customers globally, to really transform their businesses. And that's one of the things that I think is cool about this event, is not just the plethora of orange everywhere but the value and the pride in the value of what they're delivering to their customers. >> Yeah, I think you're right. It is orange everywhere, they're fun. It's a fun company, and as I say they're alpha geeks when it comes to storage. And they love to be first. They're in your face. The confetti came down and the big firecracker boom when they announced that NVMe was going to be available across the board for zero incremental cost. Normally you would expect it to be a 15 to 20% premium. 
Again, a first that Pure Storage is laying down the gauntlet. They're setting the bar and saying hey guys, we're going to "give" this value away. You're going to have to respond. Everybody will respond. Again, this is great marketing by Pure because they're >> Shock and awe. going to do it and everybody's going to follow suit and they're going to say, "See, we were first. "Everybody's following, we're the leader. "Buy from us," very smart. >> There's that buy. Another first, this is the first time I have actually been given an outfit to wear by a vendor. I'm the symbol of Prince today. I won't reveal who you are underneath that Superman... >> Okay. >> Exterior. Stick around, you won't want to miss the reveal of the concert tee that Dave is wearing. >> Dave: Very apropos of course for Bill Graham auditorium. >> Exactly, we both said it was very hard to choose which we got a list of to pick from and it was very hard to choose, but I'm happy to represent Prince today. So stick around, Dave and I are going to be here all day talking with Puritans from Charlie Giancarlo, David Hatfield. We've also got partners from Cisco, from Nvidia, and a whole bunch of great customer stories. We're going to be right back with our first guest from the Mercedes-AMG Petronas Motorsport F1 team. I'm Lisa "Prince" Martin, Dave Vellante. We'll be here all day, Pure Storage Accelerate. (bright music)
Laura Stevens, American Heart Association | AWS re:Invent
>> Narrator: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2017, presented by AWS, Intel, and our ecosystem of partners. >> Hey, welcome back everyone, this is theCUBE's exclusive live coverage here in Las Vegas for AWS Amazon web services re:Invent 2017. I'm John Furrier with Keith Townsend. Our next guest is Laura Stevens, data scientist at the American Heart Association, an AWS customer, welcome to theCUBE. >> Hi, it's nice to be here. >> So, the new architecture, we're seeing all this great stuff, but one of the things that they mention is data is the killer app, that's my word, Verna didn't say that, but essentially saying that. You guys are doing some good work with AWS and precision medicine, what's the story? How does this all work, what are you working with them on? >> Yeah, so the American Heart Association was founded in 1924, and it is the oldest and largest voluntary organization dedicated to curing heart disease and stroke, and I think in the past few years what the American Heart Association has realized is that the potential of technology and data can really help us create innovative ways and really launch precision medicine in a fashion that hasn't been capable to do before. >> What are you guys doing with AWS? What's that, what's the solution? >> Yeah so the HA has strategically partnered with Amazon Web Services to basically use technology as a way to power precision medicine, and so when I say precision medicine, I mean identifying individual treatments, based on one's genetics, their environmental factors, their life factors, that then results in preventative and treatment that's catered to you as an individual rather than kind of a one size fits all approach that is currently happening. >> So more tailored? >> Yeah, specifically tailored to you as an individual. >> What do I do, get a genome sequence? 
I walk in, they throw in high-performance computing, sequence my genome, maybe edit some genes while they're at it, I mean what's going on. There's some cutting edge conversations out there we see in some of the academic areas, CRISPR, that was me just throwing that in for fun, but data has to be there. What kind of data do you guys look at? Is it personal data, is it like how big is the data? Give us a sense of some of the data science work that you're doing? >> Yeah so the American Heart Association has launched the Institute for Precision Cardiovascular Medicine, and as a result, with Amazon, they created the precision medicine platform, which is a data marketplace that houses and provides analytic tools that enable high performance computing and data sharing for all sorts of different types of data, whether it be personal data, clinical trial data, pharmaceutical data, other data that's collected in different industries, hospital data, so a variety of data. >> So Laura, there's a lot of, I think, FUD out there around the ability to store data in a cloud, but there's also some valid concerns. A lot of individual researchers, I would imagine, don't have the skillset to properly protect data. What is the Heart Association doing with the framework to help your customers protect data? >> Yeah, so I guess the security of data, the security of the individual, and the privacy of the individual is at the heart of the AHA, and it's their number one concern, and they make that a number one priority in anything they provide, and the way that we do that in partnering with AWS is with this cloud environment we've been able to create, for data that you'd like to use, sort of a walled garden around your data, so that it's not accessible to people who don't have access to the data, and it's also HIPAA compliant, it meets the utmost secure standards of health care today.
>> So I want to make sure we're clear on this, the Heart Association doesn't collect data themselves. Are you guys creating a platform for your members to leverage this technology? >> So there's, I would say, maybe both actually. The American Heart Association does have data that it is associated with, with its volunteers and the hospitals that it's associated with, and then on top of that, we've actually just launched My Research Legacy, which allows individuals of the community who want to share their data, whether you're healthy or sick, either one, to share their data and help in aiding to cure heart disease and stroke, and so they can share their own data, and then on top of that, anybody, we are committed to strategically partnering with anybody who's involved and wants to share their data and make their data accessible. >> So I can share my data? >> Yes, you can share your data. >> Wow, so what type of tools do you guys use against that data set and what are some of the outcomes? >> Yeah so I think the foundation is the cloud, and that's where the data is stored and housed, and then from there, we have a variety of different tools that enable researchers to kind of custom build data sets that they want to answer the specific research questions they have, and so some of those tools, they range from common tools that are already in use today on your personal computer, such as Python or R Bioconductor, and then they have more high performance computing tools, such as Hal or any kind of s3 environment, or Amazon services, and then on top of that I think what is so awesome about the platform is that it's very dynamic, so a tool that's needed for high performance computing or a tool that's needed even just on a smaller data set, that can easily be installed and made available to researchers, so that they can use it for their research. >> So kind of data as a service. I would love to know about the community itself.
How are you guys sharing the results of kind of oh this process worked great for this type of analysis amongst your members? >> Yeah so I think that there's kind of two different targets in that sense that you can think of: there's the researchers that come to the platform and then there's actually the patient itself, and ultimately the HA's goal is to use the data and the research for patient centered care, so with the researchers specifically, we have a variety of tutorials available so that researchers can one, learn how to perform high performance computing analysis, see what other people have done. We have a forum where researchers can log on and, I guess, access other researchers and talk to them about different analysis, and then additionally we have My Research Legacy, which is patient centered, so it's this is what's been found and this is what we can give back to you as the patient about your specific individualized treatment. >> What do you do on a daily basis? Take us through your job, are you writing code, are you slinging APIs around? What are some of the things that you're doing? >> I think I might say all of the above. I think right now my main effort is focused on one, conducting research using the platform, so I do use the platform to answer my own research questions, and those we have presented at different conferences, for example the American Heart Association, we had a talk here about the precision medicine platform, and then two, I'm focused on strategically making the precision medicine platform better by getting more data, adding data to the platform, improving the way that data is harmonized in the platform, and improving the amount of data that we have, and the diversity, and the variety.
>> Alright, we'll help you with that, so let's help you get some people recruited, so what do they got to do to volunteer, volunteer their data, because I think this is one of those things where you know people do want to help. So, how do they, how do you onboard? You use the website, is it easy, one click? Do they have to wear an iWatch, you know what I mean? >> Yeah. >> What's the deal? What do I got to do? >> So I think I would encourage researchers and scientists and anybody who is data centric to go to precision.heart.org, and they can just sign up for an account, they can contact us through that, there's plenty of different ways to get in touch with us and plenty of ways to help. >> Precision.heart.org. >> Yup, precision.heart.org. >> Stu: Register now. >> Register now click, >> Powered by AWS. >> Yup. >> Alright so I gotta ask you as an AWS customer, okay take your customer hat off, put your citizen's hat on, what does Amazon mean to you, I mean is it, how do you describe it to people who don't use it? >> Okay yeah, so I think... the HA's ultimate mission right, is to provide individualized treatment and cures for cardiovascular disease and stroke. Amazon is a way to enable that and make that actually happen so that we can mine extremely large data sets, identify those individualized patterns. It allows us to store data in a fashion where we can provide a marketplace where there's extremely large amounts of data, extremely diverse amounts of data, and data that can be processed effectively, so that it can be directly used for research. >> What's your favorite tool or product or service within Amazon? >> That's a good question. I think, I mean the cloud and s3 buckets are definitely in a sense they're my favorites because there's so much that can be stored right there, Athena I think is also pretty awesome, and then the EMR clusters with Spark. >> The list is too long. >> My jam. >> It is.
(laughs) >> So, one of the interesting things that I love is a lot of my friends are in non-profits, fundraising is a big, big challenge, grants are again, a big challenge, have you guys seen any new opportunities as a result of the results of the research coming out of HA and AWS in the cloud? >> Yeah so I think one of the coolest things about the HA is that they have this Institute for Precision Cardiovascular Medicine, and the strategic partnership between the HA and AWS, even just this year we've launched 13 new grants, where the HA kind of backs the research behind, and the AWS provides credit so that people can come to the cloud and use the cloud and use the tools available on a grant funded basis. >> So tell me a little bit more about that program. Anybody specifically that you, kind of like saying, seeing that's used these credits from AWS to do some cool research? >> Yeah definitely, so I think specifically we have one grantee right now that is really focused on identifying outcomes across multiple clinical trials, so currently clinical trials take 20 years, and there's a large variety of them. I don't know if any of you are familiar with the Framingham heart study, the Dallas heart study, the Jackson heart study, and trying to determine how those trials compare, and what outcomes we can generate, and research insights we can generate across multiple data sets is something that's been challenging due to the ability to not being able to necessarily access that data, all of those different data sets together, and then two, trying to find ways to actually compare them, and so with the precision medicine platform, we have a grantee at the University of Colorado-Denver, who has been able to find those synchronicities across data sets and has actually created kind of a framework that then can be implemented in the precision medicine platform. >> Well I just registered, it takes really two seconds to register, that's cool. 
Thanks so much for pointing out precision.heart.org. Final question, you said EMR's your jam. (laughing) >> Why, why is it? Why do you like it so much, is it fast, is it easy to use? >> I think the speed is one of the things. When it comes to using genetic data and multiple biological levels of data, whether it be your genetics, your lifestyle, your environment factors, there's... it just ends up being extremely large amounts of data, and to be able to implement things like server-less AI, and artificial intelligence, and machine learning on that data set is time consuming, and having the power of an EMR cluster that is scalable makes that so much faster so that we can then answer our research questions faster and identify those insights and get them to out in the world. >> Gotta love the new services they're launching, too. It just builds on top of it. Doesn't it? >> Yes. >> Yeah, soon everyone's gonna be jamming on AWS in our opinion. Thanks so much for coming on, appreciate the stories and commentary. >> Yeah. >> Precision.heart.org, you want to volunteer if you're a researcher or a user, want to share your data, they've got a lot of data science mojo going on over there, so check it out. It's theCUBE bringing a lot of data here, tons of data from the show, three days of wall to wall coverage, we'll be back with more live coverage after this short break. (upbeat music)
Wikibon Presents: Software is Eating the Edge | The Entangling of Big Data and IIoT
>> So as folks make their way over from Javits I'm going to give you the least interesting part of the evening and that's my segment in which I welcome you here, introduce myself, lay out what we're going to do for the next couple of hours. So first off, thank you very much for coming. As all of you know Wikibon is a part of SiliconANGLE which also includes theCUBE, so if you look around, this is what we have been doing for the past couple of days here on theCUBE. We've been inviting some significant thought leaders from over at the show and, in incredibly expensive limousines, driven them up the street to come on to theCUBE and spend time with us and talk about some of the things that are happening in the industry today that are especially important. We tore it down, and we're having this party tonight. So we want to thank you very much for coming and look forward to having more conversations with all of you. Now what are we going to talk about? Well Wikibon is the research arm of SiliconANGLE. So we take data that comes out of theCUBE and other places and we incorporate it into our research. And we work very closely with large end users and large technology companies regarding how to make better decisions in this incredibly complex, incredibly important transformative world of digital business. What we're going to talk about tonight, and I've got a couple of my analysts assembled, and we're also going to have a panel, is this notion of software is eating the Edge. Now most of you have probably heard Marc Andreessen, the venture capitalist and developer, original developer of Netscape many years ago, talk about how software's eating the world. Well, if software is truly going to eat the world, it's going to eat at, it's going to take the big chunks, big bites at the Edge. That's where the actual action's going to be. And what we want to talk about specifically is the entangling of the internet of things, or the industrial internet of things, IIoT, with analytics.
So that's what we're going to talk about over the course of the next couple of hours. To do that we're going to, I've already blown the schedule, that's on me. But to do that I'm going to spend a couple minutes talking about what we regard as the essential digital business capabilities which includes analytics and Big Data, and includes IIoT and we'll explain at least in our position why those two things come together the way that they do. But I'm going to ask the august and revered Neil Raden, Wikibon analyst to come on up and talk about harvesting value at the Edge. 'Cause there are some, not now Neil, when we're done, when I'm done. So I'm going to ask Neil to come on up and we'll talk, he's going to talk about harvesting value at the Edge. And then Jim Kobielus will follow up with him, another Wikibon analyst, he'll talk specifically about how we're going to take that combination of analytics and Edge and turn it into the new types of systems and software that are going to sustain this significant transformation that's going on. And then after that, I'm going to ask Neil and Jim to come, going to invite some other folks up and we're going to run a panel to talk about some of these issues and do a real question and answer. So the goal here is before we break for drinks is to create a community feeling within the room. That includes smart people here, smart people in the audience having a conversation ultimately about some of these significant changes so please participate and we look forward to talking about the rest of it. All right, let's get going! What is digital business? One of the nice things about being an analyst is that you can reach back on people who were significantly smarter than you and build your points of view on the shoulders of those giants including Peter Drucker. Many years ago Peter Drucker made the observation that the purpose of business is to create and keep a customer. Not better shareholder value, not anything else. 
It is about creating and keeping your customer. Now you can argue with that, at the end of the day, if you don't have customers, you don't have a business. Now the observation that we've made, what we've added to that is that we've made the observation that the difference between business and digital business essentially is one thing. That's data. A digital business uses data to differentially create and keep customers. That's the only difference. If you think about the difference between taxi cab companies here in New York City, every cab that I've been in in the last three days has bothered me about Uber. The reason, the difference between Uber and a taxi cab company is data. That's the primary difference. Uber uses data as an asset. And we think this is the fundamental feature of digital business that everybody has to pay attention to. How is a business going to use data as an asset? Is the business using data as an asset? Is a business driving its engagement with customers, the role of its product et cetera using data? And if they are, they are becoming a more digital business. Now when you think about that, what we're really talking about is how are they going to put data to work? How are they going to take their customer data and their operational data and their financial data and any other kind of data and ultimately turn that into superior engagement or improved customer experience or more agile operations or increased automation? Those are the kinds of outcomes that we're talking about. But it is about putting data to work. That's fundamentally what we're trying to do within a digital business. Now that leads to an observation about the crucial strategic business capabilities that every business that aspires to be more digital or to be digital has to put in place. And I want to be clear. When I say strategic capabilities I mean something specific. 
When you talk about, for example technology architecture or information architecture there is this notion of what capabilities does your business need? Your business needs capabilities to pursue and achieve its mission. And in the digital business these are the capabilities that are now additive to this core question, ultimately of whether or not the company is a digital business. What are the three capabilities? One, you have to capture data. Not just do a good job of it, but better than your competition. You have to capture data better than your competition. In a way that is ultimately less intrusive on your markets and on your customers. That's in many respects, one of the first priorities of the internet of things and people. The idea of using sensors and related technologies to capture more data. Once you capture that data you have to turn it into value. You have to do something with it that creates business value so you can do a better job of engaging your markets and serving your customers. And that essentially is what we regard as the basis of Big Data. Including operations, including financial performance and everything else, but ultimately it's taking the data that's being captured and turning it into value within the business. The last point here is that once you have generated a model, or an insight or some other resource that you can act upon, you then have to act upon it in the real world. We call that systems of agency, the ability to enact based on data. Now I want to spend just a second talking about systems of agency 'cause we think it's an interesting concept and it's something Jim Kobielus is going to talk about a little bit later. When we say systems of agency, what we're saying is increasingly machines are acting on behalf of a brand. Or systems, combinations of machines and people are acting on behalf of the brand. And this whole notion of agency is the idea that ultimately these systems are now acting as the business's agent. 
They are at the front line of engaging customers. It's an extremely rich proposition that has subtle but crucial implications. For example I was talking to a senior decision maker at a business today and they made a quick observation, they talked about they, on their way here to New York City they had followed a woman who was going through security, opened up her suitcase and took out a bird. And then went through security with the bird. And the reason why I bring this up now is as TSA was trying to figure out how exactly to deal with this, the bird started talking and repeating things that the woman had said and many of those things, in fact, might have put her in jail. Now in this case the bird is not an agent of that woman. You can't put the woman in jail because of what the bird said. But increasingly we have to ask ourselves as we ask machines to do more on our behalf, digital instrumentation and elements to do more on our behalf, it's going to have blow back and an impact on our brand if we don't do it well. I want to draw that forward a little bit because I suggest there's going to be a new lifecycle for data. And the way that we think about it is we have the internet or the Edge which is comprised of things and crucially people, using sensors, whether they be smaller processors in control towers or whether they be phones that are tracking where we go, and this crucial element here is something that we call information transducers. Now a transducer in a traditional sense is something that takes energy from one form to another so that it can perform new types of work. By information transducer I essentially mean it takes information from one form to another so it can perform another type of work. This is a crucial feature of data. One of the beauties of data is that it can be used in multiple places at multiple times and not engender significant net new costs. It's one of the few assets that you can say about that. 
So the concept of an information transducer's really important because it's the basis for a lot of transformations of data as data flies through organizations. So we end up with the transducers storing data in the form of analytics, machine learning, business operations, other types of things, and then it goes back and it's transduced back into the real world as we program the real world, turning it into these systems of agency. So that's the new lifecycle. And increasingly, that's how we have to think about data flows. Capturing it, turning it into value and having it act on our behalf in front of markets. That could have enormous implications for how ultimately money is spent over the next few years. So Wikibon does a significant amount of market research in addition to advising our large user customers. And that includes doing studies on cloud, public cloud, but also studies on what's happening within the analytics world. And if you take a look at it, what we basically see happening over the course of the next few years is significant investments in software and also services to get the word out. But we also expect there's going to be a lot of hardware. A significant amount of hardware that's ultimately sold within this space. And that's because of something that we call true private cloud. This concept of ultimately a business increasingly being designed and architected around the idea of data assets means that the reality, the physical realities of how data operates, how much it costs to store it or move it, the issues of latency, the issues of intellectual property protection as well as things like the regulatory regimes that are being put in place to govern how data gets used in between locations. All of those factors are going to drive increased utilization of what we call true private cloud. On premise technologies that provide the cloud experience but act where the data naturally needs to be processed. I'll come a little bit more to that in a second.
So we think that it's going to be a relatively balanced market, a lot of stuff is going to end up in the cloud, but as Neil and Jim will talk about, there's going to be an enormous amount of analytics that pulls an enormous amount of data out to the Edge 'cause that's where the action's going to be. Now one of the things I want to also reveal to you is we've done a fair amount of data, we've done a fair amount of research around this question of where or how will data guide decisions about infrastructure? And in particular the Edge is driving these conversations. So here is a piece of research that one of our cohorts at Wikibon did, David Floyer. Taking a look at IoT Edge cost comparisons over a three year period. And it showed on the left hand side, an example where the sensor towers and other types of devices were streaming data back into a central location in a wind farm, stylized wind farm example. Very very expensive. Significant amounts of money end up being consumed, significant resources end up being consumed by the cost of moving the data from one place to another. Now this is even assuming that latency does not become a problem. The second example that we looked at is if we kept more of that data at the Edge and processed at the Edge. And literally it is a 85 plus percent cost reduction to keep more of the data at the Edge. Now that has enormous implications, how we think about big data, how we think about next generation architectures, et cetera. But it's these costs that are going to be so crucial to shaping the decisions that we make over the next two years about where we put hardware, where we put resources, what type of automation is possible, and what types of technology management has to be put in place. Ultimately we think it's going to lead to a structure, an architecture in the infrastructure as well as applications that is informed more by moving cloud to the data than moving the data to the cloud. 
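The shape of that 85 plus percent result can be sketched with a toy cost model. All of the figures below are hypothetical placeholders, not the actual inputs from David Floyer's study; the point is only that transport volume dominates the stream-everything design over a three year horizon.

```python
# Back-of-the-envelope comparison of the two wind-farm designs described
# above. Every number here is an invented placeholder for illustration.

def three_year_cost(gb_per_day, transfer_cost_per_gb, cloud_compute_per_day,
                    edge_hw_capex=0.0, edge_opex_per_day=0.0):
    """Total cost over three years for one site."""
    days = 3 * 365
    transport = gb_per_day * transfer_cost_per_gb * days
    compute = cloud_compute_per_day * days
    return edge_hw_capex + edge_opex_per_day * days + transport + compute

# Design A: stream every sensor reading back to a central location.
stream_all = three_year_cost(gb_per_day=500, transfer_cost_per_gb=0.09,
                             cloud_compute_per_day=40)

# Design B: filter and aggregate at the Edge, ship only 2% of the data.
edge_first = three_year_cost(gb_per_day=10, transfer_cost_per_gb=0.09,
                             cloud_compute_per_day=5,
                             edge_hw_capex=5000, edge_opex_per_day=2)

savings = 1 - edge_first / stream_all
print(f"stream-all: ${stream_all:,.0f}  edge-first: ${edge_first:,.0f}  "
      f"savings: {savings:.0%}")
```

With these placeholder inputs the edge-first design comes out roughly 85 percent cheaper, which matches the shape of the result described above even though the individual numbers are invented.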
That's kind of our fundamental proposition is that the norm in the industry has been to think about moving all data up to the cloud because who wants to do IT? It's so much cheaper, look what Amazon can do. Or what AWS can do. All true statements. Very very important in many respects. But most businesses today are starting to rethink that simple proposition and asking themselves do we have to move our business to the cloud, or can we move the cloud to the business? And increasingly what we see happening as we talk to our large customers about this, is that the cloud is being extended out to the Edge, we're moving the cloud and cloud services out to the business. Because of economic reasons, intellectual property control reasons, regulatory reasons, security reasons, any number of other reasons. It's just a more natural way to deal with it. And of course, the most important reason is latency. So with that as a quick backdrop, if I may quickly summarize, we believe fundamentally that the difference today is that businesses are trying to understand how to use data as an asset. And that requires an investment in new sets of technology capabilities that are not cheap, not simple and require significant thought, a lot of planning, lot of change within an IT and business organizations. How we capture data, how we turn it into value, and how we translate that into real world action through software. That's going to lead to a rethinking, ultimately, based on cost and other factors about how we deploy infrastructure. How we use the cloud so that the data guides the activity and not the choice of cloud supplier determines or limits what we can do with our data. And that's going to lead to this notion of true private cloud and elevate the role the Edge plays in analytics and all other architectures. So I hope that was perfectly clear. And now what I want to do is I want to bring up Neil Raden. Yes, now's the time Neil! 
So let me invite Neil up to spend some time talking about harvesting value at the Edge. Can you see his, all right. Got it. >> Oh boy. Hi everybody. Yeah, this is a really, this is a really big and complicated topic so I decided to just concentrate on something fairly simple, but I know that Peter mentioned customers. And he also had a picture of Peter Drucker. I had the pleasure in 1998 of interviewing Peter and photographing him. Peter Drucker, not this Peter. Because I'd started a magazine called Hired Brains. It was for consultants. And Peter said, Peter said a number of really interesting things to me, but one of them was his definition of a customer was someone who wrote you a check that didn't bounce. He was kind of a wag. He was! So anyway, he had to leave to do a video conference with Jack Welch and so I said to him, how do you charge Jack Welch to spend an hour on a video conference? And he said, you know I have this theory that you should always charge your client enough that it hurts a little bit or they don't take you seriously. Well, I had the chance to talk to Jack's wife, Suzie Welch recently and I told her that story and she said, "Oh he's full of it, Jack never paid "a dime for those conferences!" (laughs) So anyway, all right, so let's talk about this. To me, things about, engineered things like the hardware and network and all these other standards and so forth, we haven't fully developed those yet, but they're coming. As far as I'm concerned, they're not the most interesting thing. The most interesting thing to me in Edge Analytics is what you're going to get out of it, what the result is going to be. Making sense of this data that's coming. And while we're on data, something I've been thinking a lot lately because everybody I've talked to for the last three days just keeps talking to me about data. I have this feeling that data isn't actually quite real. 
That any data that we deal with is the result of some process that's captured it from something else that's actually real. In other words it's proxy. So it's not exactly perfect. And that's why we've always had these problems about customer A, customer A, customer A, what's their definition? What's the definition of this, that and the other thing? And with sensor data, I really have the feeling, when companies get, not you know, not companies, organizations get instrumented and start dealing with this kind of data what they're going to find is that this is the first time, and I've been involved in analytics, I don't want to date myself, 'cause I know I look young, but the first, I've been dealing with analytics since 1975. And everything we've ever done in analytics has involved pulling data from some other system that was not designed for analytics. But if you think about sensor data, this is data that we're actually going to catch the first time. It's going to be ours! We're not going to get it from some other source. It's going to be the real deal, to the extent that it's the real deal. Now you may say, ya know Neil, a sensor that's sending us information about oil pressure or temperature or something like that, how can you quarrel with that? Well, I can quarrel with it because I don't know if the sensor's doing it right. So we still don't know, even with that data, if it's right, but that's what we have to work with. Now, what does that really mean? Is that we have to be really careful with this data. It's ours, we have to take care of it. We don't get to reload it from source some other day. If we munge it up it's gone forever. So that has, that has very serious implications, but let me, let me roll you back a little bit. The way I look at analytics is it's come in three different eras. And we're entering into the third now. The first era was business intelligence. It was basically built and governed by IT, it was system of record kind of reporting. 
And as far as I can recall, it probably started around 1988 or at least that's the year that Howard Dresner claims to have invented the term. I'm not sure it's true. And things happened before 1988 that was sort of like BI, but 88 was when they really started coming out, that's when we saw BusinessObjects and Cognos and MicroStrategy and those kinds of things. The second generation just popped out on everybody else. We're all looking around at BI and we were saying why isn't this working? Why are only five people in the organization using this? Why are we not getting value out of this massive license we bought? And along comes companies like Tableau doing data discovery, visualization, data prep and Line of Business people are using this now. But it's still the same kind of data sources. It's moved out a little bit, but it still hasn't really hit the Big Data thing. Now we're in third generation, so we not only had Big Data, which has come and hit us like a tsunami, but we're looking at smart discovery, we're looking at machine learning. We're looking at AI induced analytics workflows. And then all the natural language cousins. You know, natural language processing, natural language, what's? Oh Q, natural language query. Natural language generation. Anybody here know what natural language generation is? Yeah, so what you see now is you do some sort of analysis and that tool comes up and says this chart is about the following and it used the following data, and it's blah blah blah blah blah. I think it's kind of wordy and it's going to refined some, but it's an interesting, it's an interesting thing to do. Now, the problem I see with Edge Analytics and IoT in general is that most of the canonical examples we talk about are pretty thin. I know we talk about autonomous cars, I hope to God we never have them, 'cause I'm a car guy. Fleet Management, I think Qualcomm started Fleet Management in 1988, that is not a new application. Industrial controls. 
I seem to remember, I seem to remember Honeywell doing industrial controls at least in the 70s and before that I wasn't, I don't want to talk about what I was doing, but I definitely wasn't in this industry. So my feeling is we all need to sit down and think about this and get creative. Because the real value in Edge Analytics or IoT, whatever you want to call it, the real value is going to be figuring out something that's new or different. Creating a brand new business. Changing the way an operation happens in a company, right? And I think there's a lot of smart people out there and I think there's a million apps that we haven't even talked about so, if you as a vendor come to me and tell me how great your product is, please don't talk to me about autonomous cars or Fleet Managing, 'cause I've heard about that, okay? Now, hardware and architecture are really not the most interesting thing. We fell into that trap with data warehousing. We've fallen into that trap with Big Data. We talk about speeds and feeds. Somebody said to me the other day, what's the narrative of this company? This is a technology provider. And I said as far as I can tell, they don't have a narrative they have some products and they compete in a space. And when they go to clients and the clients say, what's the value of your product? They don't have an answer for that. So we don't want to fall into this trap, okay? Because IoT is going to inform you in ways you've never even dreamed about. Unfortunately some of them are going to be really stinky, you know, they're going to be really bad. You're going to lose more of your privacy, it's going to get harder to get, I dunno, mortgage for example, I dunno, maybe it'll be easier, but in any case, it's not going to all be good. So let's really think about what you want to do with this technology to do something that's really valuable. Cost takeout is not the place to justify an IoT project. 
Because number one, it's very expensive, and number two, it's a waste of the technology because you should be looking at, you know the old numerator denominator thing? You should be looking at the numerators and forget about the denominators because that's not what you do with IoT. And the other thing is you don't want to get over confident. Actually this is good advice about anything, right? But in this case, I love this quote by Derek Sivers He's a pretty funny guy. He said, "If more information was the answer, "then we'd all be billionaires with perfect abs." I'm not sure what's on his wishlist, but you know, I would, those aren't necessarily the two things I would think of, okay. Now, what I said about the data, I want to explain some more. Big Data Analytics, if you look at this graphic, it depicts it perfectly. It's a bunch of different stuff falling into the funnel. All right? It comes from other places, it's not original material. And when it comes in, it's always used as second hand data. Now what does that mean? That means that you have to figure out the semantics of this information and you have to find a way to put it together in a way that's useful to you, okay. That's Big Data. That's where we are. How is that different from IoT data? It's like I said, IoT is original. You can put it together any way you want because no one else has ever done that before. It's yours to construct, okay. You don't even have to transform it into a schema because you're creating the new application. But the most important thing is you have to take care of it 'cause if you lose it, it's gone. It's the original data. It's the same way, in operational systems for a long long time we've always been concerned about backup and security and everything else. You better believe this is a problem. I know a lot of people think about streaming data, that we're going to look at it for a minute, and we're going to throw most of it away. Personally I don't think that's going to happen. 
I think it's all going to be saved, at least for a while. Now, the governance and security, oh, by the way, I don't know where you're going to find a presentation where somebody uses a newspaper clipping about Vladimir Lenin, but here it is, enjoy yourselves. I believe that when people think about governance and security today they're still thinking along the same grids that we thought about it all along. But this is very very different and again, I'm sorry I keep thrashing this around, but this is treasured data that has to be carefully taken care of. Now when I say governance, my experience has been over the years that governance is something that IT does to make everybody's lives miserable. But that's not what I mean by governance today. It means a comprehensive program to really secure the value of the data as an asset. And you need to think about this differently. Now the other thing is you may not get to think about it differently, because some of the stuff may end up being subject to regulation. And if the regulators start regulating some of this, then that'll take some of the degrees of freedom away from you in how you put this together, but you know, that's the way it works. Now, machine learning, I think I told somebody the other day that claims about machine learning in software products are as common as twisters in trailer parks. And a lot of it is not really what I'd call machine learning. But there's a lot of it around. And I think all of the open source machine learning and artificial intelligence that's popped up, it's great because all those math PhDs who work at Home Depot now have something to do when they go home at night and they construct this stuff. But if you're going to have machine learning at the Edge, here's the question, what kind of machine learning would you have at the Edge? As opposed to developing your models back at say, the cloud, when you transmit the data there. The devices at the Edge are not very powerful.
And they don't have a lot of memory. So you're only going to be able to do things that have been modeled or constructed somewhere else. But that's okay. Because machine learning algorithm development is actually slow and painful. So you really want the people who know how to do this working with gobs of data creating models and testing them offline. And when you have something that works, you can put it there. Now there's one thing I want to talk about before I finish, and I think I'm almost finished. I wrote a book about 10 years ago about automated decision making and the conclusion that I came up with was that little decisions add up, and that's good. But it also means you don't have to get them all right. But you don't want computers or software making decisions unattended if it involves human life, or frankly any life. Or the environment. So when you think about the applications that you can build using this architecture and this technology, think about the fact that you're not going to be doing air traffic control, you're not going to be monitoring crossing guards at the elementary school. You're going to be doing things that may seem fairly mundane. Managing machinery on the factory floor, I mean that may sound great, but really isn't that interesting. Managing well heads, drilling for oil, well I mean, it's great to the extent that it doesn't cause wells to explode, but they don't usually explode. What it's usually used for is to drive the cost out of preventative maintenance. Not very interesting. So use your heads. Come up with really cool stuff. And any of you who are involved in Edge Analytics, the next time I talk to you I don't want to hear about the same five applications that everybody talks about. Let's hear about some new ones. So, in conclusion, I don't really have anything in conclusion except that Peter mentioned something about limousines bringing people up here. 
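The split Neil describes, heavyweight model development offline with gobs of data and only the finished artifact running on the device, can be sketched in a few lines. The training loop, the sensor data, and the artifact format below are all invented for illustration, not any vendor's API.

```python
import json
import math

# --- "Cloud" side: slow, data-hungry model development. Here, a toy
# one-feature logistic regression trained by gradient descent.

def train(samples, labels, epochs=2000, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1 / (1 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Hypothetical vibration readings: low values normal (0), high faulty (1).
readings = [0.1, 0.3, 0.2, 0.4, 2.1, 2.5, 1.9, 2.8]
faults   = [0,   0,   0,   0,   1,   1,   1,   1]
w, b = train(readings, faults)

# Export the finished model as a tiny artifact the Edge device can hold.
artifact = json.dumps({"w": w, "b": b})

# --- "Edge" side: load the artifact and score new readings cheaply,
# with no training code, no history, and almost no memory.
model = json.loads(artifact)

def edge_predict(x):
    return 1 / (1 + math.exp(-(model["w"] * x + model["b"]))) > 0.5

print(edge_predict(0.2), edge_predict(2.4))  # a normal and a faulty reading
```

The interesting property is the asymmetry: the training loop needs all the labelled history and many passes over it, while the artifact the device carries is two floating-point numbers and the scoring step is a single multiply-add.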
On Monday I was slogging up and down Park Avenue and Madison Avenue with my client and we were visiting all the hedge funds there because we were doing a project with them. And in the miserable weather I looked at him and I said, for godsake Paul, where's the black car? And he said, that was the 90s. (laughs) Thank you. So, Jim, up to you. (audience applauding) This is terrible, go that way, this was terrible coming that way. >> Woo, don't want to trip! And let's move to, there we go. Hi everybody, how ya doing? Thanks Neil, thanks Peter, those were great discussions. So I'm the third leg in this relay race here, talking about of course how software is eating the world. And focusing on the value of Edge Analytics in a lot of real world scenarios. Programming the real world for, to make the world a better place. So I will talk, I'll break it out analytically in terms of the research that Wikibon is doing in the area of the IoT, but specifically how AI intelligence is being embedded really to all material reality potentially at the Edge. But mobile applications and industrial IoT and the smart appliances and self driving vehicles. I will break it out in terms of a reference architecture for understanding what functions are being pushed to the Edge to hardware, to our phones and so forth to drive various scenarios in terms of real world results. So I'll move a pace here. So basically AI software or AI microservices are being infused into Edge hardware as we speak. What we see is more vendors of smart phones and other, real world appliances and things like smart driving, self driving vehicles. What they're doing is they're instrumenting their products with computer vision and natural language processing, environmental awareness based on sensing and actuation and those capabilities and inferences that these devices just do to both provide human support for human users of these devices as well as to enable varying degrees of autonomous operation. 
So what I'll be talking about is how AI is a foundation for data driven systems of agency of the sort that Peter is talking about. Infusing data driven intelligence into everything, or potentially so. As more of this capability, all these algorithms for things like, ya know, doing real time predictions and classifications, anomaly detection and so forth, as this functionality gets diffused widely and becomes more commoditized, you'll see it burned into an ever-wider variety of hardware architectures, neurosynaptic chips, GPUs and so forth. So what I've got here in front of you is a sort of a high level reference architecture that we're building up in our research at Wikibon. So AI, artificial intelligence, is a big term, a big paradigm, I'm not going to unpack it completely. Of course we don't have oodles of time so I'm going to take you fairly quickly through the high points. It's a driver for systems of agency. Programming the real world. Transducing digital inputs, the data, to analog real world results. Through the embedding of this capability in the IoT, but pushing more and more of it out to the Edge with points of decision and action in real time. And there are four AI-enabled capabilities that are absolutely critical to software being pushed to the Edge: sensing, actuation, inference and learning. Sensing, like Peter was describing, is about capturing data from the environment within which a device or user is operating or moving. And then actuation is the fancy term for doing stuff, ya know like industrial IoT, it's obviously machine controlled, but clearly, you know, for self driving vehicles it's steering a vehicle and avoiding crashing and so forth. Inference is the meat and potatoes as it were of AI. Analytics does inferences. It infers from the data, the logic of the application.
Predictive logic, correlations, classification, abstractions, differentiation, anomaly detection, recognizing faces and voices. We see that now with Apple and the latest version of the iPhone embedding face recognition as a core, as the core multifactor authentication technique. Clearly that's a harbinger of what's going to be universal fairly soon, and that depends on AI. That depends on convolutional neural networks, that is some heavy hitting processing power that's necessary and it's processing the data that's coming from your face. So that's critically important. So what we're looking at then is that AI software is taking root in hardware to power continuous agency. Getting stuff done. Powering decision support for human beings who have to take varying degrees of action in various environments. We don't necessarily want to let the car steer itself in all scenarios, we want some degree of override, for lots of good reasons. We want to protect life and limb, including our own. And just more data driven automation across the internet of things in the broadest sense. So unpacking this reference framework, what's happening is that AI driven intelligence is powering real time decisioning at the Edge. Real time local sensing from the data that it's capturing there, it's ingesting the data. Some, not all of that data, may be persisted at the Edge. Some, perhaps most of it, will be pushed into the cloud for other processing. When you have these highly complex algorithms that are doing AI deep learning, multilayer, to do a variety of anti-fraud and higher level, like narrative, auto-narrative roll-ups from various scenes that are unfolding, a lot of this processing is going to begin to happen in the cloud. But a fair amount of the more narrowly scoped inferences that drive real time decision support at the point of action will be done on the device itself.
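The division of labor Jim describes, narrowly scoped inference and some data persisted on the device, with the bulk pushed to the cloud for heavier processing, can be sketched as a simple control loop. This is a hypothetical illustration in plain Python; `EdgeNode`, the model callable and the uplink callable are invented stand-ins, not any real edge SDK:

```python
from collections import deque

class EdgeNode:
    """Hypothetical edge device: sense, infer locally, actuate, batch to cloud."""

    def __init__(self, model, cloud_uplink, buffer_size=100):
        self.model = model                        # narrowly scoped inference model
        self.cloud_uplink = cloud_uplink          # callable that ships batched data
        self.buffer = deque(maxlen=buffer_size)   # only some data persists locally

    def step(self, reading):
        decision = self.model(reading)            # real-time inference at the point of action
        self.buffer.append(reading)               # persist locally for context
        if len(self.buffer) == self.buffer.maxlen:
            # push the bulk data to the cloud for heavier, multilayer processing
            self.cloud_uplink(list(self.buffer))
            self.buffer.clear()
        return decision                           # drives actuation

# Usage: a toy model that flags readings over a threshold.
uploaded = []
node = EdgeNode(model=lambda r: "act" if r > 0.8 else "hold",
                cloud_uplink=uploaded.append, buffer_size=3)
decisions = [node.step(r) for r in [0.2, 0.9, 0.5, 0.7]]
```

The point of the sketch is only the shape: decisions happen synchronously on-device, while the cloud sees data in batches, after the fact.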
Contextual actuation, so it's the sensor data that's captured by the device, along with other data that may be coming down in real time streams through the cloud, that will provide the broader contextual envelope of data needed to drive actuation, to drive various models and rules and so forth that are making stuff happen at the point of action, at the Edge. Continuous inference. What it all comes down to is that inference is what's going on inside the chips at the Edge device. And what we're seeing is a growing range of hardware architectures, GPUs, CPUs, FPGAs, ASICs, neurosynaptic chips of all sorts, playing in various combinations that are automating more and more very complex inference scenarios at the Edge. And not just individual devices; swarms of devices, like drones and so forth, are essentially an Edge unto themselves. You'll see these tiered hierarchies of Edge swarms that are playing and doing inferences of an ever more complex, dynamic nature. And much of this capability, the fundamental capabilities that are powering them all, will be burned into the hardware that powers them. And then adaptive learning. Now I use the term learning rather than training here, but training is at the core of it. Training means everything in terms of the predictive fitness, or the fitness of your AI services for whatever task, predictions, classifications, face recognition, that you've built them for. But I use the term learning in a broader sense. What makes your inferences get better and better, more accurate over time, is that you're training them with fresh data in a supervised learning environment. But you can have reinforcement learning if you're doing, say, robotics and you don't have ground truth against which to train the data set. You know, there's maximizing a reward function versus minimizing a loss function, the latter being the standard approach for supervised learning.
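Jim's "minimize a loss function" framing for supervised learning can be made concrete with a tiny gradient-descent sketch. The data here is hypothetical toy data, not any real training set; a reinforcement learner would instead nudge its policy to maximize a reward signal coming back from the environment:

```python
# Supervised learning as loss minimization: fit w in y = w * x
# by gradient descent on the mean squared loss (toy data, ground truth w = 2).
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w = 0.0            # initial guess
lr = 0.05          # learning rate
for _ in range(200):
    # gradient of L(w) = mean((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step downhill, i.e. minimize the loss
```

After a couple hundred steps `w` converges toward the ground-truth slope of 2; every deep learning framework is doing a vastly scaled-up version of this same loop.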
There's also, of course, the approach of unsupervised learning, with cluster analysis critically important in a lot of real world scenarios. So Edge AI algorithms: clearly, deep learning, which is multilayered machine learning models that can do abstractions at higher and higher levels. Face recognition is a high level abstraction. Faces in a social environment is an even higher level of abstraction in terms of groups. Faces over time and bodies and gestures, doing various things in various environments, is an even higher level abstraction in terms of narratives that can be rolled up, are being rolled up, by deep learning capabilities of great sophistication. Convolutional neural networks for processing images, recurrent neural networks for processing time series. Generative adversarial networks for doing essentially what's called generative applications of all sorts, composing music, and a lot of it's being used for auto programming. These are all deep learning. There's a variety of other algorithm approaches I'm not going to bore you with here. Deep learning is essentially the enabler of the five senses of the IoT. Your phone has a camera, it has a microphone, and of course it has geolocation and navigation capabilities. It's environmentally aware, it's got an accelerometer and so forth embedded therein. The reason that your phone and all of these devices are getting scary sentient is that they have the sensory modalities and the AI, the deep learning, that enables them to make environmentally correct decisions in a wider range of scenarios. So machine learning is the foundation of all of this, and deep learning, artificial neural networks, is the foundation of that.
But there are other approaches for machine learning I want to make you aware of, because support vector machines and these other established approaches for machine learning are not going away, but really what's driving the show now is deep learning, because it's scary effective. And so that's where most of the investment in AI is going these days, into deep learning. AI Edge platforms, tools and frameworks are just coming along like gangbusters. Much development of AI, of deep learning, happens in the context of your data lake. This is where you're storing your training data. This is the data that you use to build and test and validate your models. So we're seeing a deepening stack of Hadoop, and there's Kafka and Spark and so forth, that are driving the training (coughs) excuse me, of AI models that power all these Edge Analytics applications, so that lake will continue to broaden and deepen in terms of the scope and range of data sets and the range of AI modeling it supports. Data science is critically important in this scenario because the data scientist, the data science teams, the tools and techniques and flows of data science are the fundamental development paradigm or discipline or capability that's being leveraged to build and to train and to deploy and iterate all this AI that's being pushed to the Edge. So clearly data science is at the center; data scientists of an increasingly specialized nature are necessary to the realization of this value at the Edge. AI frameworks are coming along, you know, a mile a minute. TensorFlow, which is open source, most of these are open source, has achieved sort of almost a de facto standard status, I'm using the word de facto in air quotes. There's Theano and Keras and MXNet and CNTK and a variety of other ones. We're seeing a range of AI frameworks come to market, most open source. Most are supported by most of the major tool vendors as well.
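The build-test-validate cycle Jim describes against data-lake training data can be sketched without any particular framework. Everything here, the generated data, the 80/20 split, the trivial threshold model, is a hypothetical stand-in; in practice this would be TensorFlow or a peer framework on far larger data sets:

```python
import random

# Toy stand-in for training data pulled from the data lake:
# the "true" rule labels a reading 1 when x > 0.5.
random.seed(0)
data = [(x, 1 if x > 0.5 else 0) for x in (random.random() for _ in range(100))]

# Hold out a validation split before building the model.
split = int(0.8 * len(data))
train, validate = data[:split], data[split:]

# "Build" a trivial threshold model from the training split alone.
threshold = min(x for x, label in train if label == 1)

def model(x):
    return 1 if x >= threshold else 0

# "Validate" on held-out data before the model is pushed to the Edge.
accuracy = sum(model(x) == label for x, label in validate) / len(validate)
```

The discipline being illustrated is the data science workflow itself: the model never sees the held-out data until validation, which is what justifies deploying it downstream.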
So at Wikibon we're definitely tracking that, we plan to go deeper in our coverage of that space. And then next best action, which powers recommendation engines. I mean next best action, decision automation, the sort of thing Neil's covered in a variety of contexts in his career, is fundamentally important to Edge Analytics, to systems of agency, 'cause it's driving the process automation, decision automation, sort of the targeted recommendations that are made at the Edge to individual users, as well as process automation. That's absolutely necessary for self driving vehicles to do their jobs and for industrial IoT. So what we're seeing is more and more recommendation engine or recommender capabilities powered by ML and DL going to the Edge, already at the Edge, for a variety of applications. Edge AI capabilities, like I said, there's sensing. And sensing at the Edge is becoming ever more rich; mixed reality Edge modalities of all sorts, for augmented reality and so forth. We're just seeing a growth in the range of sensory modalities that are enabled or filtered and analyzed through AI that are being pushed to the Edge, into the chip sets. Actuation, that's where robotics comes in. Robotics is coming into all aspects of our lives. And you know, it's brainless without AI, without deep learning and these capabilities. Inference, autonomous edge decisioning. Like I said, there's a growing range of inferences being done at the Edge. And that's where it has to happen 'cause that's the point of decision. Learning, training: much training, most training, will continue to be done in the cloud because it's very data intensive. It's a grind to train and optimize an AI algorithm to do its job.
It's not something that you necessarily want to do or can do at the Edge, at Edge devices, so the models that are built and trained in the cloud are pushed down through a DevOps process to the Edge, and that's the way it will work pretty much in most AI environments, Edge Analytics environments. You centralize the modeling, you decentralize the execution of the inference models. The training engines will be in the cloud. Edge AI applications. I'll just run you through sort of a core list of the ones that are coming into, have already come into, the mainstream at the Edge. Multifactor authentication, clearly the Apple announcement of face recognition is just a harbinger of the fact that that's coming to every device. Computer vision, speech recognition, NLP, digital assistants and chat bots powered by natural language processing and understanding, it's all AI powered. And it's becoming very mainstream. Emotion detection, face recognition, you know I could go on and on, but these are like the core things that everybody has access to, or will by 2020, in their core devices, mass market devices. Developers, designers and hardware engineers are coming together to pool their expertise to build and train not just the AI, but also the entire package of hardware and UX and the orchestration of real world business scenarios or life scenarios that all this intelligence, this embedded intelligence, enables. And most, much of what they build in terms of AI will be containerized as microservices through Docker and orchestrated through Kubernetes as full cloud services in an increasingly distributed fabric. That's coming along very rapidly. We can see a fair amount of that already on display at Strata in terms of what the vendors are doing or announcing or who they're working with. The hardware itself, at the Edge: some data will be persistent, needs to be persistent, to drive inference.
That's to drive a variety of different application scenarios that need some degree of historical data related to what the device in question happens to be sensing, or has sensed in the immediate past, or you know, whatever. The hardware itself is geared towards both sensing and, increasingly, persistence and Edge driven actuation of real world results. The whole notion of drones and robotics being embedded into everything that we do, that's where that comes in. That has to be powered by low cost, low power commodity chip sets of various sorts. What we see right now in terms of chip sets is GPUs; Nvidia has gone real far and GPUs have come along very fast in terms of powering inference engines, you know like the Tesla cars and so forth. GPUs are in many ways the core hardware substrate for inference engines in DL so far. But to become a mass market phenomenon, it's got to get cheaper and lower powered and more commoditized, and so we see a fair number of CPUs being used as the hardware for Edge Analytics applications. Some vendors are fairly big on FPGAs; I believe Microsoft has gone fairly far with FPGAs in its DL strategy. ASICs, I mean, there are neurosynaptic chips, like IBM's got one. There are at least a few dozen vendors of neurosynaptic chips on the market, so at Wikibon we're going to track that market as it develops. And what we're seeing is a fair number of scenarios where it's a mixed environment, where you use one chip set architecture at the inference side of the Edge, and other chip set architectures that are driving the DL as processed in the cloud, playing together within a common architecture. And we see a fair number of DL environments where the actual training is done in the cloud on Spark using CPUs and parallelized in memory, but pushing TensorFlow models that might be trained through Spark down to the Edge where the inferences are done in FPGAs and GPUs.
Those kinds of mixed hardware scenarios are very, very likely to be standard going forward in lots of areas. So analytics at the Edge powering continuous results is what it's all about. The whole point is really not moving the data, it's putting the inference at the Edge and working from the data that's already captured and persistent there for the duration of whatever action or decision or result needs to be powered from the Edge. Like Neil said, cost takeout alone is not worth doing. Cost takeout alone is not the rationale for putting AI at the Edge. It's getting new stuff done, new kinds of things done, in an automated, consistent, intelligent, contextualized way to make our lives better and more productive. Security and governance are becoming more important. Governance of the models, governance of the data, governance in a DevOps context in terms of version controls over all those DL models that are built, that are trained, that are containerized and deployed. Continuous iteration and improvement of those to help them learn to make our lives better and easier. With that said, I'm going to hand it over now. It's five minutes after the hour. We're going to get going with the Influencer Panel, so what we'd like to do is I'll call Peter, and Peter's going to call our influencers. >> All right, am I live yet? Can you hear me? All right so, we've got, let me jump back in control here. We've got, again, the objective here is to have community take on some things. And so what we want to do is I want to invite five other people up, Neil why don't you come on up as well. Start with Neil. You can sit here. On the far right hand side, Judith, Judith Hurwitz. >> Neil: I'm glad I'm on the left side. >> From the Hurwitz Group. >> From the Hurwitz Group. Jennifer Shin who's affiliated with UC Berkeley. Jennifer are you here? >> She's here, Jennifer where are you? >> She was here a second ago.
>> Neil: I saw her walk out she may have, >> Peter: All right, she'll be back in a second. >> Here's Jennifer! >> Here's Jennifer! >> Neil: With 8 Path Solutions, right? >> Yep. >> Yeah 8 Path Solutions. >> Just get my mic. >> Take your time Jen. >> Peter: All right, Stephanie McReynolds. Far left. And finally Joe Caserta, Joe come on up. >> Stephie's with Alation >> And to the left. So what I want to do is I want to start by having everybody just go around and introduce yourself quickly. Judith, why don't we start there. >> I'm Judith Hurwitz, I'm president of Hurwitz and Associates. We're an analyst research and thought leadership firm. I'm the co-author of eight books. Most recent is Cognitive Computing and Big Data Analytics. I've been in the market for a couple years now. >> Jennifer. >> Hi, my name's Jennifer Shin. I'm the founder and Chief Data Scientist of 8 Path Solutions LLC. We do data science, analytics and technology. We're actually about to do a big launch next month, with Box actually. >> Are we having a, sorry Jennifer, are we having a problem with Jennifer's microphone? >> Man: Just turn it back on? >> Oh you have to turn it back on. >> It was on, oh sorry, can you hear me now? >> Yes! We can hear you now. >> Okay, I don't know how that turned back off, but okay. >> So you got to redo all that Jen. >> Okay, so my name's Jennifer Shin, I'm founder of 8 Path Solutions LLC, it's a data science, analytics and technology company. I founded it about six years ago. So we've been developing some really cool technology that we're going to be launching with Box next month. It's really exciting. And I have, I've been developing a lot of patents and some technology, as well as teaching at UC Berkeley as a lecturer in data science. >> You know Jim, you know Neil, Joe, you ready to go? >> Joe: Just broke my microphone. >> Joe's microphone is broken. >> Joe: Now it should be all right. >> Jim: Speak into Neil's. >> Joe: Hello, hello?
>> I just feel not worthy in the presence of Joe Caserta. (several laughing) >> That's right, master of mics. If you can hear me, Joe Caserta, so yeah, I've been doing data technology solutions since 1986, almost as old as Neil here, but been doing specifically like BI, data warehousing, business intelligence type of work since 1996. And been doing, wholly dedicated to Big Data solutions and modern data engineering since 2009. Where should I be looking? >> Yeah I don't know where is the camera? >> Yeah, and that's basically it. So my company was formed in 2001, it's called Caserta Concepts. We recently rebranded to only Caserta 'cause what we do is way more than just concepts. So we conceptualize the stuff, we envision what the future brings and we actually build it. And we help clients large and small who are just, want to be leaders in innovation using data specifically to advance their business. >> Peter: And finally Stephanie McReynolds. >> I'm Stephanie McReynolds, I head product marketing as well as corporate marketing for a company called Alation. And we are a data catalog, so we help bring together not only a technical understanding of your data, but we curate that data with human knowledge and use automated intelligence internally within the system to make recommendations about what data to use for decision making. And some of our customers, like the City of San Diego, a large automotive manufacturer working on self driving cars, and General Electric, use Alation to help power their solutions for IoT at the Edge. >> All right so let's jump right into it. And again if you have a question, raise your hand, and we'll do our best to get it to the floor. But what I want to do is I want to get seven questions in front of this group and have you guys discuss, slog, disagree, agree. Let's start here. What is the relationship between Big Data, AI and IoT?
Now Wikibon's put forward its observation that data's being generated at the Edge, that action is being taken at the Edge, and then increasingly the software and other infrastructure architectures need to accommodate the realities of how data is going to work in these very complex systems. That's our perspective. Anybody, Judith, you want to start? >> Yeah, so I think that if you look at AI, machine learning, all these different areas, you have to be able to have the data learned. Now when it comes to IoT, I think one of the issues we have to be careful about is not all data will be at the Edge. Not all data needs to be analyzed at the Edge. For example if the light is green and that's good and it's supposed to be green, do you really have to constantly analyze the fact that the light is green? You actually only really want to be able to analyze and take action when there's an anomaly. Well if it goes purple, that's actually a sign that something might explode, so that's where you want to make sure that you have the analytics at the edge. Not for everything, but for the things where there is an anomaly and a change. >> Joe, how about from your perspective? >> For me I think the evolution of data is really becoming, eventually oxygen is just, I mean data's going to be the oxygen we breathe. It used to be very very reactive and there used to be like a latency. You do something, there's a behavior, there's an event, there's a transaction, and then you go record it and then you collect it, and then you can analyze it. And it was very very waterfallish, right? And then eventually we figured out to put it back into the system. Or at least human beings interpret it to try to make the system better and that is really completely turned on its head. We don't do that anymore. Right now it's very very, it's synchronous, whereas we're actually making these transactions, the machines, we don't really need, I mean human beings are involved a bit, but less and less and less.
And it's just a reality, it may not be politically correct to say but it's a reality that my phone in my pocket is following my behavior, and it knows without telling a human being what I'm doing. And it can actually help me do things like get to where I want to go faster depending on my preference if I want to save money or save time or visit things along the way. And I think that's all integration of big data, streaming data, artificial intelligence and I think the next thing that we're going to start seeing is the culmination of all of that. I actually, hopefully it'll be published soon, I just wrote an article for Forbes with the term of ARBI and ARBI is the integration of Augmented Reality and Business Intelligence. Where I think essentially we're going to see, you know, hold your phone up to Jim's face and it's going to recognize-- >> Peter: It's going to break. >> And it's going to say exactly you know, what are the key metrics that we want to know about Jim. If he works on my sales force, what's his attainment of goal, what is-- >> Jim: Can it read my mind? >> Potentially based on behavior patterns. >> Now I'm scared. >> I don't think Jim's buying it. >> It will, without a doubt be able to predict what you've done in the past, you may, with some certain level of confidence you may do again in the future, right? And is that mind reading? It's pretty close, right? >> Well, sometimes, I mean, mind reading is in the eye of the individual who wants to know. And if the machine appears to approximate what's going on in the person's head, sometimes you can't tell. So I guess, I guess we could call that the Turing machine test of the paranormal. >> Well, face recognition, micro gesture recognition, I mean facial gestures, people can do it. Maybe not better than a coin toss, but if it can be seen visually and captured and analyzed, conceivably some degree of mind reading can be built in. I can see when somebody's angry looking at me so, that's a possibility. 
That's kind of a scary possibility in a surveillance society, potentially. >> Neil: Right, absolutely. >> Peter: Stephanie, what do you think? >> Well, I hear a world of it's the bots versus the humans being painted here and I think that, you know at Alation we have a very strong perspective on this, and that is that the greatest impact, or the greatest results, is going to be when humans figure out how to collaborate with the machines. And so yes, you want to get to the location more quickly, but it's not that the machine, as in the bot, is able to tell you exactly what to do and you're just going to blindly follow it. You need to train that machine, you need to have a partnership with that machine. So, a lot of the power, and I think this goes back to Judith's story, is then what is the human decision making that can be augmented with data from the machine, but then the humans are actually training the training side and driving machines in the right direction. I think that's when we get true power out of some of these solutions, so it's not just all about the technology. It's not all about the data or the AI, or the IoT, it's about how that empowers human systems to become smarter and more effective and more efficient. And I think we're playing that out in our technology in a certain way and I think organizations that are thinking along those lines with IoT are seeing more benefits immediately from those projects. >> So I think we have a general agreement on some of the things you talked about: IoT crucial for capturing information and then having action being taken, AI crucial to defining and refining the nature of the actions that are being taken, Big Data ultimately powering how a lot of that changes. Let's go to the next one. >> So actually I have something to add to that. So I think it makes sense, right, with IoT, why we have Big Data associated with it. If you think about what data is collected by IoT, we're talking about serial information, right?
It's over time, it's going to grow exponentially just by definition, right, so every minute you collect a piece of information that means over time, it's going to keep growing, growing, growing as it accumulates. So that's one of the reasons why the IoT is so strongly associated with Big Data. And also why you need AI to be able to differentiate between one minute versus next minute, right? Trying to find a better way rather than looking at all that information and manually picking out patterns. To have some automated process for being able to filter through that much data that's being collected. >> I want to point out though based on what you just said Jennifer, I want to bring Neil in at this point, that this question of IoT now generating unprecedented levels of data does introduce this idea of the primary source. Historically what we've done within technology, or within IT certainly is we've taken stylized data. There is no such thing as a real world accounting thing. It is a human contrivance. And we stylize data and therefore it's relatively easy to be very precise on it. But when we start, as you noted, when we start measuring things with a tolerance down to thousandths of a millimeter, whatever that is, metric system, now we're still sometimes dealing with errors that we have to attend to. So, the reality is we're not just dealing with stylized data, we're dealing with real data, and it's more, more frequent, but it also has special cases that we have to attend to as in terms of how we use it. What do you think Neil? >> Well, I mean, I agree with that, I think I already said that, right. >> Yes you did, okay let's move on to the next one. >> Well it's a doppelganger, the digital twin doppelganger that's automatically created by your very fact that you're living and interacting and so forth and so on. It's going to accumulate regardless. 
Now that doppelganger may not be your agent, or might not be the foundation for your agent unless there's some other piece of logic like an interest graph that you build, a human being saying this is my broad set of interests, and so all of my agents out there in the IoT, you all need to be aware that when you make a decision on my behalf as my agent, this is what Jim would do. You know I mean there needs to be that kind of logic somewhere in this fabric to enable true agency. >> All right, so I'm going to start with you. Oh go ahead. >> I have a real short answer to this though. I think that Big Data provides the data and compute platform to make AI possible. For those of us who dipped our toes in the water in the 80s, we got clobbered because we didn't have the, we didn't have the facilities, we didn't have the resources to really do AI, we just kind of played around with it. And I think that the other thing about it is if you combine Big Data and AI and IoT, what you're going to see is people, a lot of the applications we develop now are very inward looking, we look at our organization, we look at our customers. We try to figure out how to sell more shoes to fashionable ladies, right? But with this technology, I think people can really expand what they're thinking about and what they model and come up with applications that are much more external. >> Actually what I would add to that is also it actually introduces being able to use engineering, right? Having engineers interested in the data. Because it's actually technical data that's collected not just say preferences or information about people, but actual measurements that are being collected with IoT. So it's really interesting in the engineering space because it opens up a whole new world for the engineers to actually look at data and to actually combine both that hardware side as well as the data that's being collected from it. 
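Judith's green-light example from earlier, analyze at the Edge only when there's an anomaly, and Jennifer's point about automated filtering of accumulating sensor data, both come down to one idea that can be sketched in a few lines. The readings and the fixed tolerance band here are hypothetical; a real deployment would use a learned anomaly model rather than a hand-set threshold:

```python
def edge_filter(readings, expected, tolerance):
    """Escalate only readings outside the expected band; drop the normal ones."""
    return [r for r in readings if abs(r - expected) > tolerance]

# A sensor expected to sit around 20.0: the steady in-band readings are
# ignored at the Edge, and only the out-of-band spike is forwarded for analysis.
anomalies = edge_filter([19.8, 20.1, 35.0, 20.0], expected=20.0, tolerance=1.0)
```

The payoff is exactly what the panel describes: the volume that grows without bound stays at the Edge, and only the anomalies, the "purple lights," consume analysis and bandwidth.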
>> Well, Neil, you and I have talked about something, 'cause it's not just engineers. We have in the healthcare industry for example, which you know a fair amount about, this notion of empirical based management. And the idea that increasingly we have to be driven by data as a way of improving the way that managers do things, the way the managers collect or collaborate and ultimately collectively how they take action. So it's not just engineers, it's supposed to also inform business. What's actually happening in the healthcare world when we start thinking about some of this empirical based management, is it working? What are some of the barriers? >> It's not a function of technology. What happens in medicine and healthcare research is, I guess you can say it borders on fraud. (people chuckling) No, I'm not kidding. I know the New England Journal of Medicine a couple of years ago released a study and said that at least half the articles that they published turned out to be ghost written by pharmaceutical companies. (man chuckling) Right, so I think the problem is that when you do a clinical study, the one that really killed me about 10 years ago was the Women's Health Initiative. They spent $700 million gathering this data over 20 years. And when they released it they looked at all the wrong things deliberately, right? So I think that's a systemic-- >> I think you're bringing up a really important point that we haven't brought up yet, and that is, can you use Big Data and machine learning to begin to take the biases out? So if you divorce your preconceived notions and your biases from the data and let the data lead you to the logic, you start to, I think, get better over time, but it's going to take a while to get there because we do tend to gravitate towards our biases. >> I will share an anecdote. So I had some arm pain, and I had numbness in my thumb and pointer finger and I went to, excruciating pain, went to the hospital.
So the doctor examined me, and he said you probably have a pinched nerve, he said, but I'm not exactly sure which nerve it would be, I'll be right back. And I kid you not, he went to a computer and he Googled it. (Neil laughs) And he came back because this little bit of information was something that could easily be looked up, right? Every nerve in your spine is connected to your different fingers so the pointer and the thumb just happens to be your C6, so he came back and said, it's your C6. (Neil mumbles) >> You know an interesting, I mean that's a good example. One of the issues with healthcare data is that the data set is not always shared across the entire research community, so by making Big Data accessible to everyone, you actually start a more rational conversation or debate on well what are the true insights-- >> If that conversation includes what Judith talked about, the actual model that you use to set priorities and make decisions about what's actually important. So it's not just about improving, this is the test. It's not just about improving your understanding of the wrong thing, it's also testing whether it's the right or wrong thing as well. >> That's right, to be able to test that you need to have humans in dialog with one another bringing different biases to the table to work through okay is there truth in this data? >> It's context and it's correlation and you can have a great correlation that's garbage. You know if you don't have the right context. >> Peter: So I want to, hold on Jim, I want to, >> It's exploratory. >> Hold on Jim, I want to take it to the next question 'cause I want to build off of what you talked about Stephanie and that is that this says something about what is the Edge. And our perspective is that the Edge is not just devices. 
That when we talk about the Edge, we're talking about human beings and the role that human beings are going to play both as sensors or carrying things with them, but also as actuators, actually taking action which is not a simple thing. So what do you guys think? What does the Edge mean to you? Joe, why don't you start? >> Well, I think it could be a combination of the two. And specifically when we talk about healthcare. So I believe in 2017 when we eat we don't know why we're eating, like I think we should absolutely by now be able to know exactly what is my protein level, what is my calcium level, what is my potassium level? And then find the foods to meet that. What have I depleted versus what I should have, and eat very very purposely and not by taste-- >> And it's amazing that red wine is always the answer. >> It is. (people laughing) And tequila, that helps too. >> Jim: You're a precision foodie is what you are. (several chuckle) >> There's no reason why we should not be able to know that right now, right? And when it comes to healthcare is, the biggest problem or challenge with healthcare is no matter how great of a technology you have, you can't, you can't, you can't manage what you can't measure. And you're really not allowed to use a lot of this data so you can't measure it, right? You can't do things very very scientifically right, in the healthcare world and I think regulation in the healthcare world is really burdening advancement in science. >> Peter: Any thoughts Jennifer? >> Yes, I teach statistics for data scientists, right, so you know we talk about a lot of these concepts. I think what makes these questions so difficult is you have to find a balance, right, a middle ground. For instance, in the case of are you being too biased through data, well you could say like we want to look at data only objectively, but then there are certain relationships that your data models might show that aren't actually a causal relationship. 
For instance, if there's an alien that came from space and saw earth, saw the people, everyone's carrying umbrellas right, and then it started to rain. That alien might think well, it's because they're carrying umbrellas that it's raining. Now we know from real world that that's actually not the way these things work. So if you look only at the data, that's the potential risk. That you'll start making associations or saying something's causal when it's actually not, right? So that's one of the, one of the I think big challenges. I think when it comes to looking also at things like healthcare data, right? Do you collect data about anything and everything? Does it mean that A, we need to collect all that data for the question we're looking at? Or that it's actually the best, more optimal way to be able to get to the answer? Meaning sometimes you can take some shortcuts in terms of what data you collect and still get the right answer and not have maybe that level of specificity that's going to cost you millions extra to be able to get. >> So Jennifer as a data scientist, I want to build upon what you just said. And that is, are we going to start to see methods and models emerge for how we actually solve some of these problems? So for example, we know how to build a system for stylized process like accounting or some elements of accounting. We have methods and models that lead to technology and actions and whatnot all the way down to that that system can be generated. We don't have the same notion to the same degree when we start talking about AI and some of these Big Datas. We have algorithms, we have technology. But are we going to start seeing, as a data scientist, repeatability and learning and how to think the problems through that's going to lead us to a more likely best or at least good result? >> So I think that's a bit of a tough question, right? 
Because part of it is, it's going to depend on how many of these researchers actually get exposed to real world scenarios, right? Research looks into all these papers, and you come up with all these models, but if it's never tested in a real world scenario, well, I mean we really can't validate that it works, right? So I think it is dependent on how much of this integration there's going to be between the research community and industry and how much investment there is. Funding is going to matter in this case. If there's no funding on the research side, then you'll see a lot of industry folks who feel very confident about their models, but again, on the other side of course, if researchers don't validate those models then you really can't say for sure that it's actually more accurate, or it's more efficient. >> It's the issue of real world testing and experimentation, A/B testing, that's standard practice in many operationalized ML and AI implementations in the business world, but with real world experimentation in Edge analytics, what you're actually transducing touches people's actual lives. The problem there is, like in healthcare and so forth, when you're experimenting with people's lives, somebody's going to die. I mean, in other words, that's critical; in terms of causal analysis, you've got to tread lightly when operationalizing that kind of testing in the IoT when people's lives and health are at stake. >> We still give 'em placebos. So we still test 'em. All right so let's go to the next question. What are the hottest innovations in AI? Stephanie I want to start with you, someone at a company that's got kind of an interesting little thing happening. We start thinking about how do we better catalog data and represent it to a large number of people. What are some of the hottest innovations in AI as you see it?
>> I think it's a little counterintuitive what the hottest innovations in AI are, because we're at a spot in the industry where the most successful companies that are working with AI are actually incorporating it into solutions. So the best AI solutions are actually the products where you don't know there's AI operating underneath. But they're having a significant impact on business decision making or bringing a different type of application to the market and you know, I think there's a lot of investment that's going into AI tooling and tool sets for data scientists or researchers, but the more innovative companies are thinking through how do we really take AI and make it have an impact on business decision making, and that means kind of hiding the AI from the business user. Because if you think a bot is making a decision instead of you, you're not going to partner with that bot very easily or very readily. Way back at the start of my career, I worked in CRM when recommendation engines were all the rage online and also in call centers. And the hardest thing was to get a call center agent to actually read the script that the algorithm was presenting to them; that algorithm was 99% correct most of the time, but there was this human resistance to letting a computer tell you what to tell that customer on the other side even if it was more successful in the end. And so I think that the innovation in AI that's really going to push us forward is when humans feel like they can partner with these bots and they don't think of it as a bot, but they think about it as assisting their work and getting to a better result--
And I do think that eventually, for the average user, not for techies like me, but for the average user, I think keyboards are going to be a thing of the past. I think we're going to communicate with computers through voice and I think this is the very very beginning of that and it's an incredible innovation. >> Neil? >> Well, I think we all have myopia here. We're all thinking about commercial applications. Big, big things are happening with AI in the intelligence community, in military, the defense industry, in all sorts of things. Meteorology. And that's where, well, hopefully not on an every day basis with military, you really see the effect of this. But I was involved in a project a couple of years ago where we were developing AI software to detect artillery pieces in terrain from satellite imagery. I don't have to tell you what country that was. I think you can probably figure that one out right? But there are legions of people in many many companies that are involved in that industry. So if you're talking about the dollars spent on AI, I think the stuff that we do in our industries is probably fairly small. >> Well it reminds me of an application I actually thought was interesting about AI related to that, AI being applied to removing mines from war zones. >> Why not? >> Which is not a bad thing for a whole lot of people. Judith what do you look at? >> So I'm looking at things like being able to have pre-trained data sets in specific solution areas. I think that that's something that's coming. Also the ability to, to really be able to have a machine assist you in selecting the right algorithms based on what your data looks like and the problems you're trying to solve. Some of the things that data scientists still spend a lot of their time on, but can be augmented with some, basically we have to move to levels of abstraction before this becomes truly ubiquitous across many different areas. >> Peter: Jennifer? >> So I'm going to say computer vision. 
>> Computer vision? >> Computer vision. So computer vision ranges from image recognition to being able to say what content is in the image. Is it a dog, is it a cat, is it a blueberry muffin? Like a sort of popular post out there where it's like a blueberry muffin versus, I think, a chihuahua and then it compares the two. And can the AI really actually detect the difference, right? So I think that's really where a lot of people who are in both the AI space as well as data science are looking for the new innovations. I think, for instance, Cloud Vision, I think that's what Google still calls it. The Vision API they've released in beta allows you to actually use an API to send your image and then have it be recognized, right, by their API. There's another startup in New York called Clarifai that does a similar thing, and as you know Amazon has its Rekognition platform as well. So from images, being able to detect what's in the content, as well as from videos, being able to say things like how many people are entering a frame? How many people enter the store? Not having to actually go look at it and count it, but having a computer actually tally that information for you, right? >> There's actually an extra piece to that. So if I have a picture of a stop sign, and I'm an automated car, is it a picture on the back of a bus of a stop sign, or is it a real stop sign? So that's going to be one of the complications. >> Doesn't matter to a New York City cab driver. How 'about you Jim? >> Probably not. (laughs) >> Hottest thing in AI is Generative Adversarial Networks, GANs. What's hot about that, well, I'll be very quick: most AI, most deep learning, machine learning is analytical, it's distilling or inferring insights from the data. Generative takes that same algorithmic basis but to build stuff.
In other words, to create realistic looking photographs, to compose music, to build CAD/CAM models essentially that can be constructed on 3D printers. So GANs are a huge research focus all around the world, and they're increasingly being used for natural language generation. In other words it's institutionalizing, or building a foundation for, nailing the Turing test every single time, building something with machines that looks like it was constructed by a human and doing it over and over again to fool humans. I mean you can imagine the fraud potential. But you can also imagine just the sheer, like it's going to shape the world, GANs. >> All right so I'm going to say one thing, and then we're going to ask if anybody in the audience has an idea. So the thing that I find interesting is with traditional programs, or when you tell a machine to do something, you don't need incentives. When you tell a human being something, you have to provide incentives. Like how do you get someone to actually read the text. And this whole question of elements within AI that incorporate incentives as a way of trying to guide human behavior is absolutely fascinating to me. Whether it's gamification, or even some things we're thinking about with blockchain and bitcoins and related types of stuff. To my mind that's going to have an enormous impact, some good, some bad. Anybody in the audience? I don't want to lose everybody here. What do you think sir? And I'll try to do my best to repeat it. Oh we have a mic. >> So my question's about, okay, so the question's pretty much about what Stephanie's talking about, which is human-in-the-loop training, right? I come from a computer vision background. That's the problem, we need millions of images trained, we need humans to do that. And that's like you know, the workforce is essentially people that aren't necessarily part of the AI community, they're people that are just able to use that data and analyze the data and label that data.
That's something that I think is a big problem everyone in the computer vision industry at least faces. I was wondering-- >> So again, is the problem the difficulty of methodologically bringing together people who have domain expertise and people who have algorithm expertise, and getting them working together? >> I think the expertise issue comes in healthcare, right? In healthcare you need experts to be labeling your images. With contextual information, where essentially augmented reality applications are coming in, you have ARKit and everything coming out, but there is a lack of context-based intelligence. And all of that comes through training images, and all of that requires people to do it. And that's kind of like the foundational basis of AI coming forward is not necessarily an algorithm, right? It's how well the data is labeled. Who's doing the labeling and how do we ensure that it happens? >> Great question. So for the panel. So if you think about it, a consultant talks about being on the bench. How much time are they going to have to spend on trying to develop additional business? How much time should we set aside for executives to help train some of the assistants? >> I think the key is to think of the problem a different way. You could have people manually label data, and that's one way to solve the problem. But you can also look at what is the natural workflow of that executive, or that individual? And is there a way to gather that context automatically using AI, right? And if you can do that, it's similar to what we do in our product, we observe how someone is analyzing the data and from those observations we can actually create the metadata that then trains the system in a particular direction. But you have to think about solving the problem differently, of finding the workflow that you can then feed into to make this labeling easy without the human really realizing that they're labeling the data.
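The implicit-labeling idea described above, deriving training metadata from a user's natural workflow rather than an explicit labeling step, can be sketched as below; the event names and the labeling rule are hypothetical illustrations, not a description of any specific product.

```python
# A minimal sketch of implicit labeling: observe workflow events and
# derive per-item training labels from them. The rule here (hypothetical)
# is that items a user repeatedly drills into are labeled 'relevant'.

from collections import Counter

def derive_labels(events, min_interactions=2):
    """Turn raw (action, item) workflow events into training labels."""
    counts = Counter(item for action, item in events
                     if action in ("open", "drill_down"))
    seen = {item for _, item in events}
    return {
        item: ("relevant" if counts[item] >= min_interactions
               else "not_relevant")
        for item in seen
    }

# Hypothetical observation log captured as an analyst works,
# with no explicit labeling step anywhere.
log = [
    ("open", "report_q3"),
    ("drill_down", "report_q3"),
    ("open", "report_q1"),
    ("drill_down", "report_q3"),
    ("open", "memo_a"),
]

labels = derive_labels(log)
print(labels)
```

The design point is the one made in the discussion: the human never sees a labeling task; the labels fall out of work they were doing anyway.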
>> Peter: Anybody else? >> I'll just add to what Stephanie said, so in the IoT applications, all those sensory modalities, the computer vision, the speech recognition, all that, that's all potential training data. So it cross-checks against all the other models that are processing all the other data coming from that device. So that the natural language understanding can be reality-checked against the images that the person happens to be commenting upon, or the scene in which they're embedded, so yeah, the data's embedded-- >> I don't think we're, we're not at the stage yet where this is easy. It's going to take time before we do start doing the pre-training of some of these details so that it goes faster, but right now, there're not that many shortcuts. >> Go ahead Joe. >> Sorry, so a couple things. So one is, I was just caught up on your point about incentivizing programs to be more efficient, like humans. You know, Ethereum, the blockchain, has this notion, this concept of gas. Where as the process becomes more efficient it costs less to actually run, right? It costs less ether, right? So the machine is actually incentivized, and you don't really know what it's going to cost until the machine processes it, right? So there is some notion of that there. But as far as vision, like training the machine for computer vision, I think it's through adoption and crowdsourcing, so as people start using it more they're going to be adding more pictures. Very very organically. And then the machines will be trained, and right now it's a very small handful doing it, and it's very proactive by the Googles and the Facebooks and all of that. But as we start using it, as they start looking at my images and Jim's and Jen's images, it's going to keep getting smarter and smarter through adoption and through a very organic process. >> So Neil, let me ask you a question.
Who owns the value that's generated as a consequence of all these people ultimately contributing their insight and intelligence into these systems? >> Well, to a certain extent the people who are contributing the insight own nothing, because the systems collect their actions and the things they do, and then that data doesn't belong to them, it belongs to whoever collected it or whoever's going to do something with it. But the other thing, getting back to the medical stuff. It's not enough to say that the systems, that people will do the right thing, because a lot of them are not motivated to do the right thing. The whole grant thing, the whole oh my god I'm not going to go against the senior professor. A lot of these, I knew a guy who was a doctor at the University of Pittsburgh and they were doing a clinical study on the tubes that they put in little kids' ears who have ear infections, right? And-- >> Google it! Who helps out? >> Anyway, I forget the exact thing, but he came out and said that the principal investigator lied when he made the presentation, that it should be this, I forget which way it went. He was fired from his position at Pittsburgh and he has never worked as a doctor again. 'Cause he went against the senior line of authority. He was-- >> Another question back here? >> Man: Yes, Mark Turner has a question. >> Not a question, just want to piggyback on what you're saying, about the transformation, maybe in healthcare, of black and white images into color images, in the case of sonograms and ultrasound and mammograms, do you see that happening using AI? You see that being, I mean it's already happening, do you see it moving forward in that kind of way? I mean, talk more about that, about you know, AI and black and white images being used, and they can be transformed, they can be made into color images so you can see things better, doctors can perform better operations. >> So I'm sorry, but could you summarize that down? What's the question?
Summarize it just, >> I had a lot of students, they're interested in the cross-pollination between AI and, say, the medical community, as far as things like ultrasound and sonograms and mammograms, and how you can literally take a black and white image and it can, using algorithms and stuff, be made into color images that can help doctors better do the work that they've already been doing, just do it better. You touched on it for like 30 seconds. >> So how AI can be used to actually add information in a way that's not necessarily invasive but ultimately improves how someone might respond to it or use it, yes? Related? I've also got something to say about medical images in a second, any of you guys want to, go ahead Jennifer. >> Yeah, so for one thing, you know it kind of goes back to what we were talking about before. When we look at for instance scans, like at some point I was looking at CT scans, right, for lung cancer nodules. In order for me, who doesn't have a medical background, to identify where the nodule is, of course, a doctor actually had to go in and specify which slice of the scan had the nodule and where exactly it is, so it's on both the slice level as well as, within that 2D image, where it's located and the size of it. So the beauty of things like AI is that ultimately right now a radiologist has to look at every slice and actually identify this manually, right? The goal of course would be that one day we wouldn't have to have someone look at every slice, usually up to 300 slices, and be able to identify it in a much more automated way. And I think the reality is we're not going to get something where it's going to be 100%. And with anything we do in the real world it's always like a 95% chance of it being accurate. So I think it's finding that in-between of where, what's the threshold that we want to use to be able to definitively say that this is a lung cancer nodule or not.
I think the other thing to think about is in terms of how they're using other information. What they might use, for instance, is to say, you know, based on other characteristics of the person's health, they might use that as sort of a grading, right? So you know, how dark or how light something is, to identify maybe in that region the prevalence of that specific variable. So that's usually how they integrate that information into something that's already existing in the computer vision sense. I think the difficulty with this, of course, is being able to identify which variables were introduced into data that does exist. >> So I'll make two quick observations on this then I'll go to the next question. One is radiologists have historically been some of the highest paid physicians within the medical community, partly because they don't have to be particularly clinical. They don't have to spend a lot of time with patients. They tend to spend time with doctors, which means they can do a lot of work in a little bit of time, and charge a fair amount of money. As we start to introduce some of these technologies that allow us to, from a machine standpoint, actually make diagnoses based on those images, I find it fascinating that you now see television ads promoting the role that the radiologist plays in clinical medicine. It's kind of an interesting response. >> It's also disruptive, as I'm seeing more and more studies showing that deep learning models processing images, ultrasounds and so forth are getting as accurate as many of the best radiologists. >> That's the point! >> Detecting cancer. >> Now radiologists are saying oh look, we do this great thing in terms of interacting with the patients, which they never have, because they're being disintermediated. The second thing that I'll note is one of my favorite examples of that, if I got it right, is looking at the images, the deep space images that come out of Hubble.
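The slice-by-slice nodule search described in the exchange above can be sketched as a toy: a CT scan is a stack of 2D slices, and automation means a program, not a radiologist, examines every slice for a candidate region. The synthetic "scan" and the simple brightness threshold below are stand-ins; a real system would use a trained detection model and still face the accuracy-threshold question raised in the discussion.

```python
# A toy sketch of automated slice-by-slice screening. Simple
# thresholding stands in for a real detection model; the scan is a
# tiny synthetic array, not medical data.

def find_candidate_slices(scan, threshold=200, min_pixels=2):
    """Return indices of slices containing a bright candidate region."""
    candidates = []
    for idx, slice_2d in enumerate(scan):
        bright = sum(1 for row in slice_2d for v in row if v >= threshold)
        if bright >= min_pixels:
            candidates.append(idx)
    return candidates

# Synthetic 5-slice "scan": uniform background intensity 50, with a
# bright nodule-like blob planted in slice 3.
scan = [[[50] * 8 for _ in range(8)] for _ in range(5)]
scan[3][4][4] = 230
scan[3][4][5] = 225
scan[3][5][4] = 220

print(find_candidate_slices(scan))  # → [3]
```

A radiologist would then review only the flagged slices instead of all of them, which is exactly the triage value being described.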
Where they're taking data from thousands, maybe even millions of images and combining it together in interesting ways you can actually see depth. You can actually move through to a very very small scale a system that's 150, well maybe that, can't be that much, maybe six billion light years away. Fascinating stuff. All right so let me go to the last question here, and then I'm going to close it down, then we can have something to drink. What are the hottest, oh I'm sorry, question? >> Yes, hi, my name's George, I'm with Blue Talon. You asked earlier there the question what's the hottest thing in the Edge and AI, I would say that it's security. It seems to me that before you can empower agency you need to be able to authorize what they can act on, how they can act on, who they can act on. So it seems if you're going to move from very distributed data at the Edge and analytics at the Edge, there has to be security similarly done at the Edge. And I saw (speaking faintly) slides that called out security as a key prerequisite and maybe Judith can comment, but I'm curious how security's going to evolve to meet this analytics at the Edge. >> Well, let me do that and I'll ask Jen to comment. The notion of agency is crucially important, slightly different from security, just so we're clear. And the basic idea here is historically folks have thought about moving data or they thought about moving application function, now we are thinking about moving authority. So as you said. That's not necessarily, that's not really a security question, but this has been a problem that's been in, of concern in a number of different domains. How do we move authority with the resources? And that's really what informs the whole agency process. But with that said, Jim. >> Yeah actually I'll, yeah, thank you for bringing up security so identity is the foundation of security. Strong identity, multifactor, face recognition, biometrics and so forth. 
Clearly AI, machine learning, deep learning are powering a new era of biometrics, and you know it's behavioral metrics and so forth that's organic to people's use of devices and so forth. You know, getting to the point that Peter was raising is important, agency! Systems of agency. Your agent, you have to, you as a human being should be vouching in a secure, tamper-proof way; your identity should be vouching for the identity of some agent, physical or virtual, that does stuff on your behalf. How can that, how should that be managed within this increasingly distributed IoT fabric? Well, a lot of that's been worked out. It all ran through webs of trust, public key infrastructure, formats and, you know, SAML for single sign-on and so forth. It's all about assertion, strong assertions and vouching. I mean there's whole workflows of things. Back in the ancient days when I was actually a PKI analyst three analyst firms ago, I got deep into all the guts of all those federation agreements; something like that has to be IoT-scalable to enable systems of agency to be truly fluid. So we can vouch for our agents wherever they happen to be. We're going to keep on having, as human beings, agents all over creation, we're not even going to be aware of everywhere that our agents are, but our identity-- >> It's not just-- >> Our identity has to follow. >> But it's not just identity, it's also authorization and context. >> Permissioning, of course. >> So I may be the right person to do something yesterday, but I'm not authorized to do it in another context in another application. >> Role-based permissioning, yeah. Or persona-based. >> That's right. >> I agree. >> And obviously it's going to be interesting to see the role that blockchain or its follow-on technology is going to play here. Okay so let me throw one more question out. What are the hottest applications of AI at the Edge? We've talked about a number of them, does anybody want to add something that hasn't been talked about?
Or do you want to get a beer? (people laughing) Stephanie, you raised your hand first. >> I was going to go, I bring something mundane to the table actually because I think one of the most exciting innovations with IoT and AI are actually simple things like City of San Diego is rolling out 3200 automated street lights that will actually help you find a parking space, reduce the amount of emissions into the atmosphere, so has some environmental change, positive environmental change impact. I mean, it's street lights, it's not like a, it's not medical industry, it doesn't look like a life changing innovation, and yet if we automate streetlights and we manage our energy better, and maybe they can flicker on and off if there's a parking space there for you, that's a significant impact on everyone's life. >> And dramatically suppress the impact of backseat driving! >> (laughs) Exactly. >> Joe what were you saying? >> I was just going to say you know there's already the technology out there where you can put a camera on a drone with machine learning within an artificial intelligence within it, and it can look at buildings and determine whether there's rusty pipes and cracks in cement and leaky roofs and all of those things. And that's all based on artificial intelligence. And I think if you can do that, to be able to look at an x-ray and determine if there's a tumor there is not out of the realm of possibility, right? >> Neil? >> I agree with both of them, that's what I meant about external kind of applications. Instead of figuring out what to sell our customers. Which is most what we hear. I just, I think all of those things are imminently doable. And boy street lights that help you find a parking place, that's brilliant, right? >> Simple! >> It improves your life more than, I dunno. Something I use on the internet recently, but I think it's great! That's, I'd like to see a thousand things like that. >> Peter: Jim? 
>> Yeah, building on what Stephanie and Neil were saying, it's ambient intelligence built into everything to enable fine grain microclimate awareness of all of us as human beings moving through the world. And enable reading of every microclimate in buildings. In other words, you know you have sensors on your body that are always detecting the heat, the humidity, the level of pollution or whatever in every environment that you're in or that you might be likely to move into fairly soon and either A can help give you guidance in real time about where to avoid, or give that environment guidance about how to adjust itself to your, like the lighting or whatever it might be to your specific requirements. And you know when you have a room like this, full of other human beings, there has to be some negotiated settlement. Some will find it too hot, some will find it too cold or whatever but I think that is fundamental in terms of reshaping the sheer quality of experience of most of our lived habitats on the planet potentially. That's really the Edge analytics application that depends on everybody having, being fully equipped with a personal area network of sensors that's communicating into the cloud. >> Jennifer? >> So I think, what's really interesting about it is being able to utilize the technology we do have, it's a lot cheaper now to have a lot of these ways of measuring that we didn't have before. And whether or not engineers can then leverage what we have as ways to measure things and then of course then you need people like data scientists to build the right model. So you can collect all this data, if you don't build the right model that identifies these patterns then all that data's just collected and it's just made a repository. So without having the models that supports patterns that are actually in the data, you're not going to find a better way of being able to find insights in the data itself. 
So I think what will be really interesting is to see how existing technology is leveraged to collect data, and then how that's actually modeled, as well as to be able to see how technology's going to now develop from where it is now, to being able to either collect things more sensitively or, in the case of, say, how people move, whether we can build things that we can then use to measure how we move, right? Like how we move every day, and then being able to model that in a way that is actually going to give us better insights into things like healthcare and maybe even just our behaviors. >> Peter: Judith? >> So, I think we also have to look at it from a peer to peer perspective. So I may be able to get some data from one thing at the Edge, but then all those Edge devices, sensors or whatever, they all have to interact with each other, because we may, in our business lives, act in silos, but in the real world when you look at things like sensors and devices, it's how they react with each other on a peer to peer basis. >> All right, before I invite John up, I want to say, I'll say what my thing is, and it's not the hottest. It's the one I hate the most. I hate AI generated music. (people laughing) Hate it. All right, I want to thank all the panelists, every single person, some great commentary, great observations. I want to thank you very much. I want to thank everybody that joined. John, in a second you'll kind of announce who's the big winner. But the one thing I want to do is, I was listening, I learned a lot from everybody, but I want to call out the one comment that I think we all need to remember, and I'm going to give you the award, Stephanie. And that is, increasingly we have to remember that the best AI is probably AI that we don't even know is working on our behalf. The flip side of that is all of us have to be very cognizant of the idea that AI is acting on our behalf and we may not know it.
So, John, why don't you come on up. Who won the, whatever it's called, the raffle? >> You won. >> Thank you! >> How about a round of applause for the great panel. (audience applauding) Okay, we have the business cards in the basket, we're going to have that brought up. We're going to have two raffle gifts, some nice Bose headsets and a speaker, a Bluetooth speaker. Got to wait for that. I just want to say thank you for coming, and for the folks watching, this is our fifth year doing our own event called Big Data NYC, which is really an extension of the landscape beyond the Big Data world, that's Cloud and AI and IoT and other great things happening, and great experts and influencers and analysts here. Thanks for sharing your opinion. Really appreciate you taking the time to come out and share your data and your knowledge, appreciate it. Thank you. Where's the? >> Sam's right in front of you. >> There's the thing, okay. Got to be present to win. We saw some people sneaking out the back door to go to a dinner. >> First prize first. >> Okay, first prize is the Bose headset. >> Bluetooth and noise canceling. >> I won't look, Sam, you got to hold it down, I can see the cards. >> All right. >> Stephanie, you won! (Stephanie laughing) Okay, Sawny Cox, Sawny Allie Cox? (audience applauding) Yay, look at that! He's here! The bar's open so help yourself, but we got one more. >> Congratulations. Picture right here. >> Hold that, I saw you. Wake up a little bit. Okay, all right. Next one is, my kids love this. This is great, great for the beach, great for everything, a portable speaker, great gift. >> What is it? >> Portable speaker. >> It is a portable speaker, it's pretty awesome. >> Oh, you grabbed mine. >> Oh, that's one of our guys. >> (laughing) But who was it? >> Can't be related! Ava, Ava, Ava. Okay, Gene Penesko. (audience applauding) Hey! He came in! All right, look at that, the timing's great. >> Another one?
(people laughing) >> Hey, thanks everybody, enjoy the night, thank Peter Burris, head of research for SiliconANGLE, Wikibon, and the great guests and influencers and friends. And you guys in the community for coming. Thanks for watching and thanks for coming. Enjoy the party and some drinks, and that's it for the influencer panel and analyst discussion. Thank you. (logo music)
Brian McDaniel, Baylor College of Medicine | Pure Accelerate 2017
>> Announcer: Live from San Francisco, it's theCUBE, covering PURE Accelerate 2017. Brought to you by PURESTORAGE. >> Welcome back to PURE Accelerate. This is theCUBE, the leader in live tech coverage. I'm Dave Vellante with my co-host Stu Miniman. This is PURE Accelerate. We're here at Pier 70. Brian McDaniel is here, he's an infrastructure architect at the Baylor College of Medicine, not to be confused with Baylor University in Waco, Texas, anymore. Brian, welcome to theCUBE. >> Thanks for having me, appreciate it. >> You're very welcome. Tell us about the Baylor College of Medicine. >> So, Baylor College of Medicine is, first and foremost, a teaching facility, but also the leader in research and development for healthcare in the Texas Medical Center in Houston, Texas. We currently employ roughly 1,500 physicians, and so they occupy a multitude of institutions, not only at Baylor but other facilities and hospitals in and around the Texas Medical Center. >> So, it's kind of healthcare morning here, Stu. We've been talking about electronic medical records, meaningful use, the Affordable Care Act, potential changes there, HIPAA, saving lives. These are big issues. >> We're not at the HIMSS Conference, Dave? >> We should be at HIMSS. So these are big issues for any organization in healthcare. It just exacerbates the challenges on IT. So, I wonder if you can talk about some of the drivers in your business, compliance, and new tech, and maybe share with us some of the things that you're seeing. >> Absolutely, so first and foremost, we are an Epic system shop. That's our EMR. So, from an enterprise and clinical operation, that is our number one mission critical application. It provides your electronic medical records to our staff, regardless of where they're physically located. So that alone is a demanding type of solution, if you will, the mobility aspect of it.
Delivering that in a fast manner and a repeatable manner is of utmost importance to our physicians, because they're actually seeing patients and getting to your records and being able to add notes and collaborate with other institutions if necessary. So, time to market is very important, and accessibility is also up there. >> Right, so you mentioned that collaboration, and part of that collaboration is so much data now, being able to harness that data and share it. Data explodes everywhere, but in healthcare there's so much data, to the extent we start instrumenting things. What are you guys doing with all that data? >> Right now, it lives within the clinical application, right in Epic, but as you pointed out, that is where the value is. That is where your crown jewels, so to speak, are at. That data is now being looked at as a possible access point outside of the clinical operation. So, its environment is going to be even more important going forward, when you look to branch out into some of the basic sciences and more of the research, to gain access to that clinical data. That historically has been problematic for the research to be done accessing that information. >> So, in the corporate world we like to think of, from an IT perspective, you got to run the business, you got to grow the business, you got to transform the business. It's a little different in healthcare. You kind of got to comply. A lot of your time is spent on compliance and regulation changes and keeping up with that. And then there's got to be a fair amount that's at least attempting to do transformation and keeping up with the innovations. Maybe you could talk about that a little bit. >> Absolutely, particularly on the innovation side, we work closely with our partners at Epic, and we work to decide roadmaps and how that fits into the Baylor world. Case in point, a year ago we were set to go to the new version of Epic, which was 2015.
And Epic is nice enough to lay out requirements for you and say, here's what your system needs to meet in order to comply with Epic standards. So, they give you a seal of approval, so to speak. And there's monetary implications for not meeting those requirements. So it's actually dollars and cents. It's not just, we want you to meet this. If you do, then there's advantages to meeting it. So, they provided that to us, and we went through the normal testing phases and evaluations of our current platform, both from compute and storage. And honestly, we struggled to meet their requirements with our legacy systems. So the team was challenged to say, well, what can we do to meet this? We have our historical infrastructures, so if we're going to deviate from that, let's really deviate and look at what's available on the market. So, Flash comes to mind immediately. There's a multitude of vendors that make Flash storage products, so we started meeting with all of 'em, doing our fact finding and our data gathering. First and foremost, they have to be Epic certified. That eliminated a couple of contenders right off the bat. Right? You're not certified. >> I would expect some of the startups especially. >> It did. Some of the smaller Flash vendors, for example, one of 'em came in and we said, well, what do you do with Epic? And they said, what's Epic? And you kind of scratch your head and say thank you. >> Thank you for playing. >> Here's the door. So, it eliminates people, but then when we met with PURE and talked to them, you get to really know the family and the culture that they bring with the technology. Yes, it's got to be fast, but Flash is going to be fast. What else can you do?
And that's where you start learning about how it was born on Flash, how it was native to Flash, and so you get added benefits to the infrastructure by looking at that type of technology, which ultimately led us to where we're at, running Epic on our Flash arrays. >> And Brian, you're using the Flash stack configuration of converged infrastructure. It sounds like it was PURE that led you that way, as opposed to Cisco? Could you maybe walk us through that? >> That's very interesting, so we're a UCS shop. We were before PURE. So when PURE came in, the fact that they had a validated design with the Flash stack infrastructure made it all that much easier to implement the PURE solution, because it just is modular enough to fit in with our current infrastructure. That made it very appealing, that we didn't have to change or alter much. We just looked at the validated design that says, here's your reference architecture, how it applies to the Flash stack. You already have UCS. We love it, we're a big fan. And here's how to implement it. And it made the time to market, to get production workloads on it, very quick. >> And the CVD that you got from Cisco, that's Cisco plus PURE, but was it healthcare Epic specific, or was it that PURE had some knowledge for that that they pulled in? >> So, that was one of the value adds that we feel PURE brought, was the Epic experience. And whether that's scripting, the backups, and if you're familiar with Epic, the environmental refreshes that they have to do. There's seven Epic environments. And they all have to refresh off of each other and play off of each other, so, >> So you have a window that you have to hit, right. >> And you do, right? And historically that window's been quite large. And now, not so much, which makes everybody happy. >> Hey, that's what weekends are for. >> Absolutely, yeah, our DBAs attest to that, right?
So, we would like to think we've made their world and life a little bit more enjoyable, 'cause those weekends now, they're not having to babysit the Epic refreshes. Back to the point of Epic experience, that was instrumental in the decision making, from support with the PURESTORAGE help desk, awareness of what it takes to run Epic on PURE, and then going forward, knowing that there's a partnership behind Epic and PURE and certainly Baylor College of Medicine as we continue to look at the next versions of Epic, whether that's 2018 and on to 2020, whatever that decision is, we know that we have a solid foundation now to grow. >> And Brian, I'm curious, you've been a Cisco shop for a while, Cisco has lots of partnerships as well as a hyper-converged offering that they sell themselves. What was your experience working with Cisco, and do they just let you choose, and you said, I want PURE, and they're like, great? Do you know? What was that like? >> To your point, there's validated designs for many customers, and Cisco is kind of at the hub of that, that core with the compute and memory of the blade systems, the UCS. They liked the fact that we went with PURE, 'cause it does mean a validated design. And they have others with other vendors. The challenge there is how do they really integrate with each other, from tools to possibly automation down the road, and how do they truly integrate with each other. 'Cause we did bring in some of the other validated design architecture organizations, and I think we did our due diligence and looked at 'em to see how they differentiate between each other. And ultimately, we wanted something that was a new and different approach to storage. It wasn't just layering your legacy OS on a bunch of Flash drives and calling it good. Something that was natively born to take advantage of that technology. And that's what ultimately led us to PURE. >> Well, PURE has a point of view on the so-called hyper-converged space.
You heard Scott Dietzen talking this morning. What's your perspective on hyper-convergence? >> Hyper-converged is one of those buzzwords that I think gets thrown out there kind of off the cuff, if you will. But people hear it and get excited about it. But what type of workloads are you looking to take advantage of it? Is it truly hyper-converged, or is it just something that you can say you're doing because it sounds cool? I think to some degree, people are led astray by the buzzwords of the technology, where they get down to say, what's going to take advantage of it? What kind of application are you putting on it? If your application, in our case, was written by a grad student 20 years ago and a lab is still using it, does it make sense to put it on hyper-converged? No, because it can't take advantage of the architecture or the design. So, in a lot of ways, we're waiting and seeing. And the reason we didn't go to a hyper-converged platform is, a, Epic support, and b, we were already changing enough to stay comfortable with the environment, and knowing that come Monday morning, doctors will be seeing patients, that was another layer that we chose not to change. We went with a standard UCS configuration that everyone was already happy with. That made a significant difference from an operational perspective. >> Essentially, your processes are tightly tied to Epic and the workflow associated with that. So from an infrastructure perspective, it sounds like you just don't want it to be in the way. >> We don't. The last thing we want is infrastructure getting in the way. And quite frankly, it was in the way. Whether that was meeting latency requirements or IOPS requirements from the Caché database or the Clarity database within the Epic system, or if it was just everything taking a little bit longer than they expect.
We don't want to be that bottleneck, if you will; we want them to be able to see patients faster, run reports faster, gain access to that valuable data in a much faster way, to enable them to go about their business and not have to worry about infrastructure. >> Brian, PURE said that they had, I believe it's like 25 new announcements made this morning, a lot of software features. Curious, is there anything that jumped out at you that you've been waiting for, and anything still on your to-do list that you're hoping for PURE, or PURE and its extended ecosystem, to deliver for you? >> Great question, so at the top of that list is the replication of the arrays, whether that's in an offsite data center or a colo, and how that applies to an Epic environment that has to go through this flux of refreshes, and from a disaster or business continuity standpoint, we're actively pursuing that, and how that's going to fit with Baylor. So, we're very excited to see our current investment, free of charge by the way, once you do the upgrade to 5.0, take advantage of those features, with replication being one of 'em. >> And then, I thought I heard today, Third Sight is a service. Right? So you don't have to install your own infrastructure. So, I'm not sure exactly what that's all about. I got to peel the onion on that one. >> To be determined, right? When we look at things like that, particularly with Epic, we have to be careful, because that is the HIPAA, PHI, that's your records, yours and mine, medical records, right? You just don't want that, if I told you it's going to be hosted in a public cloud. Wait a minute. Where? No it's not. We don't want to be on the 10 o'clock news, right? However, there's things like SAP HANA and other enterprise applications where we certainly could look at leveraging that technology. >> Excellent, well listen, thank you very much, Brian, for coming on theCUBE.
We appreciate your perspectives and sort of educating us a little bit on your business and your industry anyway. And have a great rest of the show. >> Yeah, thank you very much. Appreciate it. >> You're welcome. Alright keep it right there everybody. This is theCUBE. We're back live right after this short break from PURE Accelerate 2017. Be right back.
Bill Mannel & Dr. Nicholas Nystrom | HPE Discover 2017
>> Announcer: Live, from Las Vegas, it's theCUBE, covering HPE Discover 2017. Brought to you by Hewlett Packard Enterprise. >> Hey, welcome back everyone. We are here live in Las Vegas for day two of three days of exclusive coverage from theCUBE here at HPE Discover 2017. Our next two guests are Bill Mannel, VP and General Manager of HPC and AI for HPE. Bill, great to see you. And Dr. Nick Nystrom, senior director of research at the Pittsburgh Supercomputing Center. Welcome to theCUBE, thanks for coming on, appreciate it. >> My pleasure. >> Thanks for having us. >> As we wrap up day two, first of all, before we get started, love the AI, love the high performance computing. We're seeing great applications for compute. Everyone now sees that a lot of compute actually is good. That's awesome. What is the Pittsburgh Supercomputing Center? Give a quick update and describe what that is. >> Sure. The quick update is we're operating a system called Bridges. Bridges is operating for the National Science Foundation. It democratizes HPC. It brings people who have never used high performance computing before to be able to use HPC seamlessly, almost as a cloud. It unifies HPC, big data, and artificial intelligence. >> So who are some of the users that are getting access that they didn't have before? Could you just kind of talk about some of the use cases of the organizations or people that you guys are opening this up to? >> Sure. I think one of the newest communities that's very significant is deep learning. So we have collaborations between the University of Pittsburgh life sciences and the medical center with Carnegie Mellon, the machine learning researchers. We're looking to apply AI and machine learning to problems in breast and lung cancer. >> Yeah, we're seeing the data.
Talk about some of the innovations that HPE's bringing with you guys in the partnership, because people are seeing the results of using big data and deep learning, and breakthroughs that weren't possible before. So not only do you have the democratization cool element happening, you have a tsunami of awesome open source code coming in from big places. You see Google donating a bunch of machine learning libraries. Everyone's donating code. It's like open bar and open source, as I say, and the young kids that are new are the innovators as well, so not just us systems guys, but a lot of young developers are coming in. What's the innovation? Why is this happening? What's the ah-ha moment? Is it just cloud, is it a combination of things, talk about it. >> It's a combination of all the big data coming in, and then new techniques that allow us to analyze it and get value from it. So in the traditional HPC world, typically we built equations which then generated data. Now we're actually kind of doing the reverse, which is we take the data and then build equations to understand the data. So it's a different paradigm. And so there's more and more energy in understanding those two different techniques of kind of getting to the same answers, but in a different way. >> So Bill, you and I talked in London last year. >> Yes. With Dr. Goh. And we talked a lot about SGI and what that acquisition meant to you guys. So I wonder if you could give us a quick update on the business? I mean, it's doing very well, Meg talked about it on the conference call this last quarter. Really high point and growing. What's driving the growth, and give us an update on the business. >> Sure. And I think the thing that's driving the growth is all this data and the fact that customers want to get value from it.
So we're seeing a lot of growth in industries like financial services, like manufacturing, where folks are moving to digitization, which means that in the past they might have done a lot of their work through experimentation. Now they're moving it to a digital format, and they're simulating everything. So that's driven a lot more HPC over time. As far as the SGI integration is concerned, we've integrated about halfway, so we're at about the halfway point. And now we've got the engineering teams together, and we're driving a road map and a new set of products that are coming out. Our Gen 10-based products are on target, and they're going to be releasing here over the next few months. >> So Nick, from your standpoint, when you look at, there's been an ebb and flow in the supercomputer landscape for decades. All the way back to the 70s and the 80s. So from a customer perspective, what do you see now? Obviously China's much more prominent in the game. There's sort of an arms race, if you will, in computing power. From a customer's perspective, what are you seeing, what are you looking for in a supplier? >> Well, so I agree with you, there is this arms race for exaflops. Where we are really focused right now is enabling data-intensive applications, looking at big data as a service, HPC as a service, really making things available to users to be able to draw on the large data sets you mentioned, to be able to put the capability-class computing, which will go to exascale, together with AI and data and Linux under one platform, under one integrated fabric. That's what we did with HPE for Bridges. And we're looking to build on that in the future, to be able to do the exascale applications that you're referring to, but also to couple on data, and to be able to use AI with classic simulation to make those simulations better.
But when you talk about AI and machine learning and deep learning, John and I sometimes joke, is it same wine, new bottle, or is there really some fundamental shift going on that just sort of happened to emerge in the last six to nine months? >> I think there is a fundamental shift. And the shift is due to what Bill mentioned. It's the availability of data. So we have that. We have more and more communities who are building on that. You mentioned the open source frameworks. So yes, they're building on the TensorFlows, on the Cafes, and we have people who have not been programmers. They're using these frameworks though, and using that to drive insights from data they did not have access to. >> These are flipped upside down, I mean this is your point, I mean, Bill pointed it out, it's like the models are upside down. This is the new world. I mean, it's crazy, I don't believe it. >> So if that's the case, and I believe it, it feels like we're entering this new wave of innovation which for decades we talked about how we march to the cadence of Moore's Law. That's been the innovation. You think back, you know, your five megabyte disk drive, then it went to 10, then 20, 30, now it's four terabytes. Okay, wow. Compared to what we're about to see, I mean it pales in comparison. So help us envision what the world is going to look like in 10 or 20 years. And I know it's hard to do that, but can you help us get our minds around the potential that this industry is going to tap? >> So I think, first of all, I think the potential of AI is very hard to predict. We see that. What we demonstrated in Pittsburgh with the victory of Libratus, the poker-playing bot, over the world's best humans, is the ability of an AI to beat humans in a situation where they have incomplete information, where you have an antagonist, an adversary who is bluffing, who is reacting to you, and who you have to deal with. And I think that's a real breakthrough. 
We're going to see that move into other aspects of life. It will be buried in apps. It will be transparent to a lot of us, but those sorts of AIs are going to influence a lot. That's going to take a lot of IT on the back end for the infrastructure, because these will continue to be compute-hungry. >> So I always use the example of Kasparov, and he got beaten by the machine, and then he started a competition to team up with a supercomputer and beat the machine. Yeah, humans and machines beat machines. Do you expect that's going to continue? Maybe both your opinions. I mean, we're just sort of spitballing here. But will that augmentation continue for an indefinite period of time, or are we going to see the day that it doesn't happen? >> I think over time you'll continue to see progress, and you'll continue to see more and more regular type of symmetric type workloads being done by machines, and that allows us to do the really complicated things that the human brain is able to better process than perhaps a machine brain, if you will. So I think it's exciting from the standpoint of being able to take some of those other roles and so forth, and be able to get those done in perhaps a more efficient manner than we're able to do. >> Bill, talk about, I want to get your reaction to the concept of data. As data evolves, you brought up the model, I like the way you're going with that, because things are being flipped around. In the old days, I want to monetize my data. I have data sets, people are looking at their data. I'm going to make money from my data. So people would talk about how we're monetizing the data. >> Dave: Old days, like two years ago. >> Well and people actually try to solve and monetize their data, and this could be a use case for one piece of it. Other people are saying no, I'm going to open, make people own their own data, make it shareable, make it more of an enabling opportunity, or creating opportunities to monetize differently. In a different shift. 
That really comes down to the insights question. What's your, what trends do you guys see emerging where data is much more of a fabric, it's less of a discrete, monetizable asset, but more of an enabling asset? What's your vision on the role of data? As developers start weaving in some of these insights. You mentioned the AI, I think that's right on. What's your reaction to the role of data, the value of the data? >> Well, I think one thing that we're seeing in some of our, especially our big industrial customers is the fact that they really want to be able to share that data together and collect it in one place, and then have that regularly updated. So if you look at a big aircraft manufacturer, for example, they actually are putting sensors all over their aircraft, and in real time, bringing data down and putting it into a place where now as they're doing new designs, they can access that data, and use that data as a way of making design trade-offs and design decisions. So a lot of customers that I talk to in the industrial area are really trying to capitalize on all the data possible to allow them to bring new insights in, to predict things like future failures, to figure out how they need to maintain whatever they have in the field, and those sorts of things. So it's just kind of keeping it within the enterprise itself. I mean, that's a challenge, a really big challenge, just to get data collected in one place and be able to efficiently use it just within an enterprise. We're not even talking about sort of pan-enterprise, but just within the enterprise. That is a significant change that we're seeing. Actually an effort to do that and see the value in that. >> And the high performance computing really highlights some of these nuggets that are coming out. If you just throw compute at something, if you set it up and wrangle it, you're going to get these insights. I mean, new opportunities. >> Bill: Yeah, absolutely. >> What's your vision, Nick? 
How do you see the data, how do you talk to your peers and people who are generally curious on how to approach it? How to architect data modeling and how to think about it? >> I think one of the clearest examples on managing that sort of data comes from the life sciences. So we're working with researchers at University of Pittsburgh Medical Center, and the Institute for Precision Medicine at Pitt Cancer Center. And there it's bringing together the large data as Bill alluded to. But there it's very disparate data. It is genomic data. It is individual tumor data from individual patients across their lifetime. It is imaging data. It's the electronic health records. And trying to be able to do this sort of AI on that to be able to deliver true precision medicine, to be able to say that for a given tumor type, we can look into that and give you the right therapy, or even more interestingly, how can we prevent some of these issues proactively? >> Dr. Nystrom, it's expensive doing what you do. Is there a commercial opportunity at the end of the rainbow here for you or is that taboo, I mean, is that a good thing? >> No, thank you, it's both. So as a national supercomputing center, our resources are absolutely free for open research. That's a good use of our taxpayer dollars. They've funded these, we've worked with HP, we've designed the system that's great for everybody. We also can make this available to industry at an extremely low rate because it is a federal resource. We do not make a profit on that. But looking forward, we are working with local industry to let them test things, to try out ideas, especially in AI. A lot of people want to do AI, they don't know what to do. And so we can help them. We can help them architect solutions, put things on hardware, and when they determine what works, then they can scale that up, either locally on prem, or with us. >> This is a great digital resource. You talk about federally funded. 
I mean, you can look at Yosemite, it's a state park, you know, Yellowstone, these are natural resources, but now when you start thinking about the goodness that's being funded. You want to talk about democratization, medicine is just the tip of the iceberg. This is an interesting model as we move forward. We see what's going on in government, and see how things are instrumented, some things not, delivery of drugs and medical care, all these things are coalescing. How do you see this digital age extending? Because if this continues, we should be doing more of these, right? >> We should be. We need to be. >> It makes sense. So is there, I mean I'm just not up to speed on what's going on with federally funded-- >> Yeah, I think one thing that Pittsburgh has done with the Bridges machine is really trying to bring in data and compute and all the different types of disciplines in there, and provide a place where a lot of people can learn, they can build applications and things like that. That's really unusual in HPC. A lot of times HPC is around big iron. People want to have the biggest iron basically on the top 500 list. This is where the focus hasn't been on that. This is where the focus has been on really creating value through the data, and getting people to utilize it, and then build more applications. >> You know, I'll make an observation. When we first started doing The Cube, we observed that, we talked about big data, and we said that the practitioners of big data are the guys who are going to make all the money. And so far that's proven true. You look at the public big data companies, none of them are making any money. And maybe this was sort of true with ERP, but not like it is with big data. It feels like AI is going to be similar, that the consumers of AI, those people that can find insights from that data are really where the big money is going to be made here. I don't know, it just feels like-- >> You mean a long tail of value creation? 
>> Yeah, in other words, you used to see in the computing industry, it was Microsoft and Intel became, you know, trillion dollar value companies, and maybe there's a couple of others. But it really seems to be the folks that are absorbing those technologies, applying them, solving problems, whether it's health care, or logistics, transportation, etc., that looks to be where the huge economic opportunities may be. I don't know if you guys have thought about that. >> Well I think that's happened a little bit in big data. So if you look at what the financial services market has done, they've probably benefited far more than the companies that make the solutions, because now they understand what their consumers want, they can better predict their life insurance, how they should-- >> Dave: You could make that argument for Facebook, for sure. >> Absolutely, from that perspective. So I expect it to get to your point around AI as well, so the folks that really use it, use it well, will probably be the ones that benefit from it. >> Because the tooling is very important. You've got to make the application. That's the end state in all this. That's where the rubber meets the road. >> Bill: Exactly. >> Nick: Absolutely. >> All right, so final question. What're you guys showing here at Discover? What's the big HPC? What's the story for you guys? >> So we're actually showing our Gen 10 product. So this is with the latest microprocessors in all of our Apollo lines. So these are specifically optimized platforms for HPC and now also artificial intelligence. We have a platform called the Apollo 6500, which is used by a lot of companies to do AI work, so it's a very dense GPU platform, and does a lot of processing and things in terms of video, audio, these types of things that are used a lot in some of the workflows around AI. >> Nick, anything spectacular for you here that you're interested in? >> So we did show here. We had video in Meg's opening session. 
And that was showing the poker result, and I think that was really significant, because it was actually a great amount of computing. It was 19 million core hours. So it was an HPC AI application, and I think that was a really interesting success. >> The imperfect information really, we picked up this earlier in our last segment with your colleagues. It really amplifies the unstructured data world, right? People trying to solve the streaming problem. With all this velocity, you can't get everything, so you need to use machines, too. Otherwise you have a haystack of needles. Instead of trying to find the needles in the haystack, as they were saying. Okay, final question, just curious on this natural, not natural, federal resource. Natural resource, feels like it. Is there like a line to get in? Like I go to the park, like this camp waiting list, I got to get in there early. How do you guys handle the flow for access to the supercomputer center? Is it, my uncle works there, I know a friend of a friend? Is it a reservation system? I mean, who gets access to this awesomeness? >> So there's a peer-reviewed system, it's fair. People apply for large allocations four times a year. This goes to a national committee. They met this past Sunday and Monday for the most recent. They evaluate the proposals based on merit, and they make awards accordingly. We make 90% of the system available through that means. We have 10% discretionary that we can make available to the corporate sector and to others who are doing proprietary research in data-intensive computing. >> Is there a duration, when you go through the application process, minimums and kind of like commitments that they get involved, for the folks who might be interested in hitting you up? >> For academic research, the normal award is one year. These are renewable, people can extend these and they do. What we see now of course is for large data resources. People keep those going. The AI knowledge base is 2.6 petabytes. 
That's a lot. For industrial engagements, those could be any length. >> John: Any startup action coming in, or more bigger, more-- >> Absolutely. A coworker of mine has been very active in life sciences startups in Pittsburgh, and engaging many of these. We have meetings every week with them now, it seems. And with other sectors, because that is such a great opportunity. >> Well congratulations. It's fantastic work, and we're happy to promote it and get the word out. Good to see HP involved as well. Thanks for sharing and congratulations. >> Absolutely. >> Good to see your work, guys. Okay, great way to end the day here. Democratizing supercomputing, bringing high performance computing. That's what the cloud's all about. That's what great software's out there with AI. I'm John Furrier, Dave Vellante bringing you all the data here from HPE Discover 2017. Stay tuned for more live action after this short break.
Josh Gluck, Weill Cornell Medicine | ServiceNow Knowledge17
(upbeat techno music) >> Announcer: Live, from Orlando, Florida. It's The Cube. Covering ServiceNow Knowledge17. Brought to you by ServiceNow. (upbeat techno music) >> We're back at Knowledge17. Dave Vellante with Jeff Frick. Josh Gluck is here, he's the deputy CIO of Weill Cornell Medical College in the Big Apple. Thanks for coming to The Cube. >> Thanks very much for having me. >> Tell us about Weill Cornell. It's a collaboration with Sloan Kettering, originally, and ... >> Yeah, we're a three-part, mission-oriented institution. Patient care, being first. Our physician organization delivers patient care in New York City. We're partnered with New York Presbyterian Hospital, Memorial Sloan Kettering Cancer Center, and also the Hospital for Special Surgery. >> So, let's get right into it. CIO, you were probably doing some of the CIO activities here, this week. Love to hear about that. But let's get right into how you're, you know, using automation, how you're using the ServiceNow platform. Let's talk in the context of IT transformation. >> Yeah. So we've been a ServiceNow customer since 2012. We actually went live on 12/12/12. Everybody thought that was a joke, but it turned out to be the real "go live" date. You know, and as the platform's matured, and as our organization's matured, you know, we started out focused on ITSM, strictly. Over the last few years though, we've found that, you know, our focus for ServiceNow should be the equivalent of building a 3-1-1 platform for the administrative departments. So we've onboarded folks in HR. We're doing case management now with ServiceNow. Obviously all the ITSM, ITIL-based processes. We've worked with our Department of Environmental Health and Safety to help them with some of the regulatory compliance workflows that they need to have in place. We've also built out Project and Portfolio Management in ServiceNow, and we've been doing it, actually, since the beginning. 
We worked with ServiceNow pretty intimately to build out those functions. And now, we're actually at the point where the platform has surpassed what we custom developed back in the early days. And we're really focused on understanding where we can unwrap some of those customizations, and just go to the native portfolio. >> Yeah, I wanted to ask you about that. >> Yeah. >> So, that's not an uncommon story and how complicated is it to unwrap that stuff? 'Cause obviously, you don't want the custom mods there if you don't have to have them. >> Yeah, well you know we spent, what, five, six years now, focused on developing the platform to meet our needs, meet our process. You know, we're academics at heart. Right, being part of Cornell University. So, I think we have a habit of sometimes overthinking solutions. So, our customizations are pretty complex. We also, though, understand that it's a heavy lift for us to keep it up. So, we partner with ServiceNow, we've had them come in and help us do an evaluation of what really could be done with a slight change to our process. Or, even just direct support for our process, straight out of the box. We're really excited about the stuff that's coming out of Jakarta. >> Okay, so it's fair to say, I mean, we've all been there. Where you have software development problems, and you go "ah, jeez, I wish I had done it differently." But, when we talk to folks like you, that are unwrapping, unraveling, custom mods, there's no regrets. You got a lot of value >> Josh: Yeah, no. >> out of 'em. And now you're moving forward, right? >> Josh: Yep. Yeah we >> That's interesting. >> Josh: Definitely did the right thing, at the right time. You know, we went through an evolution, in the way that we did Project and Portfolio Management internally at Weill Cornell. And we're focused on some of the high-level problems, high-order problems today, that some organizations may not get to. 
Right, we're doing resource management, proactive scheduling, and you know, for us to get to the next level, the enhancements that are available in Jakarta around time-carding and resource management are really going to help us, I think, not overthink the problem. And come to some standard that the rest of the industry, or other verticals are using, in how they do their resource management. >> And Josh, the 3-1-1 concept is interesting. When did you go from "this is our ITSM tool, that's going to be pretty cool." >> Yeah. >> To "this is a platform, that we can now take this kind of 3-1-1 approach, and use that as kind of an overarching mission, >> Yeah. >> for that which you're trying to accomplish"? >> I think the concept ... I think when we first went into partnership with ServiceNow, we knew that we wanted it to be more than just a replacement for HEAT, right? I've actually been with two different organizations. New York Presbyterian Hospital and Weill Cornell, who have come from other ITIL platforms, ITSM platforms, and moved to ServiceNow. I was a BMC Remedy customer for a long time at New York Presbyterian. We were a HEAT customer at Weill Cornell, prior to going to ServiceNow. So, I think we were all familiar with the fact that it doesn't make sense to buy these point products, to do all of these different workflows. Let's buy a platform. ServiceNow represented that platform. Even in its early stages, we knew that we wanted to do more with it. We had conversations about process users. And I know you guys were talking a little bit before about changes to the license model that are happening. >> Dave: Yep. >> But we really wanted it to be something we could develop further. Our first project just happened to be, in both cases, "we have an ITSM platform that isn't working." Remedy at NYP, HEAT at Weill Cornell. "Let's get off of it, and get onto ServiceNow." But I think we didn't start calling it the 3-1-1 until maybe a year or two ago. >> Okay. 
>> And it really started with Case Management. I think that was a big deal. >> It's a good little marketing, CIO selling. >> Josh: Yeah. >> You know, Daniel Pink. How large of an organization ... >> Josh: Is it IT, or Weill Cornell itself? >> Weill Cornell. >> We're between ... We're about five-thousand and change. >> Okay, so not enormous. But, the reason for the question is, at what point does it make sense to bring in a ServiceNow? You know, our little fifty-person company. You know, we're trying ... >> Josh: Yeah. But it's still not there yet. Is it size of company? Is it size of problem? What is your advice there? >> You know, I think it's actually a good idea for most mid-level companies to talk to ServiceNow. And I think there's even a play for some small businesses. It depends on what you want to get out of the tool. Right? I mean, if you're going to use it as just a simple incident-response system, which isn't really the value that ServiceNow provides, it might be a hard sell. But, because it's a hosted system, because there is such a wealth of partners in the community now, and such a following for ServiceNow, I don't know. If you were a ten-person organization and you were customer focused, and you wanted to use it to do ... >> Jeff: Yep, yeah, that makes sense. A couple of different business processes, it could actually make sense for you. >> Josh, really tight schedule today, we'll give you the last word on Knowledge17, some of the things that have excited you, what's the bumper sticker on K17 for you? >> I think the keynotes have been great. I think you guys at The Cube have been doing a great job, of also, >> Dave: Thank you very much, appreciate that. >> you know, getting people up here and asking 'em tough questions and stuff. I appreciate you going easy on me. Thank you. But, it's been great. It's been a really good show. >> Well come back again, and we'll really go at it. So, thanks very much Josh, >> Josh: Thank you. appreciate your time. 
Alright, keep it right there everybody. We'll be back with our next guest, right after this short break. (upbeat techno music)
AI for Good Panel - Precision Medicine - SXSW 2017 - #IntelAI - #theCUBE
>> Welcome to the Intel AI Lounge. Today, we're very excited to share with you the Precision Medicine panel discussion. I'll be moderating the session. My name is Kay Erin. I'm the general manager of Health and Life Sciences at Intel. And I'm excited to share with you these three panelists that we have here. First is John Madison. He is the chief medical information officer at Kaiser Permanente. We're very excited to have you here. Thank you, John. >> Thank you. >> We also have Naveen Rao. He is the VP and general manager for the Artificial Intelligence Solutions at Intel. He's also the former CEO of Nervana, which was acquired by Intel. And we also have Bob Rogers, who's the chief data scientist at our AI solutions group. So, why don't we get started with our questions. I'm going to ask each of the panelists to talk, introduce themselves, as well as talk about how they got started with AI. So why don't we start with John? >> Sure, so can you hear me okay in the back? Can you hear? Okay, cool. So, I am a recovering evolutionary biologist and a recovering physician and a recovering geek. And I implemented the health record system for the first and largest region of Kaiser Permanente. And it's pretty obvious that most of the useful data in a health record lies in free text. So I started up a natural language processing team to be able to mine free text about a dozen years ago. So we can do things with that that you can't otherwise get out of health information. I'll give you an example. I read an article online from the New England Journal of Medicine about four years ago that said over half of all people who have had their spleen taken out were not properly vaccinated for a common form of pneumonia, and when your spleen's missing, you must have that vaccine or you die a very sudden death with sepsis. In fact, our medical director in Northern California's father died of that exact same scenario. 
So, when I read the article, I went to my structured data analytics team and to my natural language processing team and said please show me everybody who has had their spleen taken out and hasn't been appropriately vaccinated, and we ran through about 20 million records in about three hours with the NLP team, and it took about three weeks with the structured data analytics team. That sounds counterintuitive but it actually happened that way. And it's not a competition for time only. It's a competition for quality and sensitivity and specificity. So we were able to identify all of our members who had their spleen taken out, who should've had a pneumococcal vaccine. We vaccinated them and there are a number of people alive today who otherwise would've died absent that capability. So people don't really commonly associate natural language processing with machine learning, but in fact, natural language processing relies heavily on machine learning and is the first really highly successful example of it. So we've done dozens of similar projects, mining free text data in millions of records very efficiently, very effectively. But it really helped advance the quality of care and reduce the cost of care. It's a natural step forward to go into the world of personalized medicine with the arrival of a 100-dollar genome, which is actually what it costs today to do a full genome sequence. Microbiomics, that is the ecosystem of bacteria that are in every organ of the body actually. And we know now that there is a profound influence of what's in our gut and how we metabolize drugs, what diseases we get. You can tell in a five-year-old, whether or not they were born by a vaginal delivery or a C-section delivery by virtue of the bacteria in the gut five years later. 
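The free-text mining workflow John describes, scanning notes for members with a splenectomy but no record of a pneumococcal vaccine, can be sketched at its simplest as pattern matching over clinical notes. This is only an illustrative toy: real clinical NLP systems handle negation ("no history of splenectomy"), abbreviations, and document sections far more carefully, and the patient notes, patterns, and function names below are all hypothetical.

```python
import re

# Hypothetical free-text notes keyed by patient ID; in practice these would
# come from millions of records in the health record system.
NOTES = {
    "patient-001": "s/p splenectomy 2009. No vaccination history on file.",
    "patient-002": "Splenectomy in 2011; received pneumococcal vaccine (PCV13).",
    "patient-003": "Routine visit. Influenza vaccine administered.",
}

# Assumed surface patterns -- a real pipeline would use a clinical vocabulary.
SPLENECTOMY = re.compile(r"\bsplenectomy\b", re.IGNORECASE)
PNEUMO_VACCINE = re.compile(r"\b(pneumococcal|PCV13|PPSV23)\b", re.IGNORECASE)

def flag_unvaccinated_splenectomy(notes):
    """Return IDs of patients whose notes mention a splenectomy but no
    pneumococcal vaccination -- candidates for outreach."""
    flagged = []
    for patient_id, text in notes.items():
        if SPLENECTOMY.search(text) and not PNEUMO_VACCINE.search(text):
            flagged.append(patient_id)
    return flagged

print(flag_unvaccinated_splenectomy(NOTES))  # ['patient-001']
```

The point of the sketch is the shape of the task, not the regexes: a batch pass over free text can surface a cohort in hours that a structured-data query might take weeks to assemble, because the decisive facts only exist in the narrative text.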
So if you look at the complexity of the data that exists in the genome, in the microbiome, in the health record with free text, and you look at all the other sources of data, like the streaming data from my wearable monitor (I'm part of a research study on Precision Medicine out of Stanford), there is a vast amount of disparate data, not to mention all the imaging, that really can collectively produce much more useful information to advance our understanding of science, and to advance our understanding of every individual. And then we can do the mash-up of a much broader range of science in health care with a much deeper sense of data from an individual, and to do that with structured questions and structured data is very yesterday. The only way we're going to be able to disambiguate those data and be able to operate on those data in concert and generate real useful answers from the broad array of data types and the massive quantity of data, is to let loose machine learning on all of those data substrates. So my team is moving down that pathway and we're very excited about the future prospects for doing that. >> Yeah, great. I think that's actually some of the things I'm very excited about in the future with some of the technologies we're developing. My background, I started actually being fascinated with computation in biological forms when I was nine. Reading and watching sci-fi, I was kind of a big dork which I pretty much still am. I haven't really changed a whole lot. Just basically seeing that machines really aren't all that different from biological entities, right? We are biological machines and kind of understanding how a computer works and how we engineer those things and trying to pull together concepts that learn from biology into that has always been a fascination of mine. As an undergrad, I was in the EE, CS world. Even then, I did some research projects around that. 
I worked in the industry for about 10 years designing chips, microprocessors, various kinds of ASICs, and then actually went back to school, quit my job, got a Ph.D. in neuroscience, computational neuroscience, to specifically understand what's the state of the art. What do we really understand about the brain? And are there concepts that we can take and bring back? Inspiration's always been we want to... We watch birds fly around. We want to figure out how to make something that flies. We extract those principles, and then build a plane. Don't necessarily want to build a bird. And so Nervana really was the combination of all those experiences, bringing it together. Trying to push computation in a new direction. Now, as part of Intel, we can really add a lot of fuel to that fire. I'm super excited to be part of Intel in that the technologies that we were developing can really proliferate and be applied to health care, can be applied to Internet, can be applied to every facet of our lives. And some of the examples that John mentioned are extremely exciting right now and these are things we can do today. And the generality of these solutions is just really going to hit every part of health care. I mean from a personal viewpoint, my whole family are MDs. I'm sort of the black sheep of the family. I don't have an MD. And it's always been kind of funny to me that knowledge is concentrated in a few individuals. Like you have a rare tumor or something like that, you need the guy who knows how to read this MRI. Why? Why is it like that? Can't we encapsulate that knowledge into a computer or into an algorithm, and democratize it? And the reason we couldn't do it is we just didn't know how. And now we're really getting to a point where we know how to do that. And so I want that capability to go to everybody. It'll bring the cost of healthcare down. It'll make all of us healthier. That affects everything about our society. So that's really what's exciting about it to me. 
>> That's great. So, as you heard, I'm Bob Rogers. I'm chief data scientist for analytics and artificial intelligence solutions at Intel. My mission is to put powerful analytics in the hands of every decision maker and when I think about Precision Medicine, decision makers are not just doctors and surgeons and nurses, but they're also case managers and care coordinators and probably most of all, patients. So the mission is really to put powerful analytics and AI capabilities in the hands of everyone in health care. It's a very complex world and we need tools to help us navigate it. So my background, I started with a Ph.D. in physics and I was computer modeling stuff, falling into super massive black holes. And there's a lot of applications for that in the real world. No, I'm kidding. (laughter) >> John: There will be, I'm sure. Yeah, one of these days. Soon as we have time travel. Okay so, I actually, about 1991, I was working on my post doctoral research, and I heard about neural networks, these things that could compute the way the brain computes. And so, I started doing some research on that. I wrote some papers and actually, it was an interesting story. The problem that we solved that got me really excited about neural networks, which have become deep learning, my office mate would come in. He was this young guy who was about to go off to grad school. He'd come in every morning. "I hate my project." Finally, after two weeks, what's your project? What's the problem? It turns out he had to circle these little fuzzy spots on these images from a telescope. So they were looking for the interesting things in a sky survey, and he had to circle them and write down their coordinates all summer. Anyone want to volunteer to do that? No? Yeah, he was very unhappy. So we took the first two weeks of data that he created doing his work by hand, and we trained an artificial neural network to do his summer project and finished it in about eight hours of computing. 
(crowd laughs) And so he was like yeah, this is amazing. I'm so happy. And we wrote a paper. I was the first author of course, because I was the senior guy at age 24. And he was second author. His first paper ever. He was very, very excited. So we have to fast forward about 20 years. His name popped up on the Internet. And so it caught my attention. He had just won the Nobel Prize in physics. (laughter) So that's where artificial intelligence will get you. (laughter) So thanks Naveen. Fast forwarding, I also developed some time series forecasting capabilities that allowed me to create a hedge fund that I ran for 12 years. After that, I got into health care, which really is the center of my passion. Applying health care to figuring out how to get all the data from all those siloed sources, put it into the cloud in a secure way, and analyze it so you can actually understand those cases that John was just talking about. How do you know that that person had had a splenectomy and that they needed to get that pneumovax? You need to be able to search all the data, so we used AI, natural language processing, machine learning, to do that and then two years ago, I was lucky enough to join Intel and, in the intervening time, people like Naveen actually thawed the AI winter and we're really in a spring of amazing opportunities with AI, not just in health care but everywhere, but of course, the health care applications are incredibly life saving and empowering so, excited to be here on this stage with you guys. >> I just want to cue off of your comment about the role of physics in AI and health care. So the field of microbiomics that I referred to earlier, bacteria in our gut. There's more bacteria in our gut than there are cells in our body. There's 100 times more DNA in that bacteria than there is in the human genome. And we're now discovering a couple hundred species of bacteria a year that have never been identified under a microscope just by their DNA. 
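The summer-project story above (training a network on a couple of weeks of hand-labeled examples, then letting it finish the job) can be sketched with synthetic data. The patch generator, the sizes, and the single-neuron "network" below are invented stand-ins for the real sky-survey problem:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patch(has_source: bool) -> np.ndarray:
    """Synthetic 8x8 sky patch: noise, plus a bright central blob if a source is present."""
    patch = rng.normal(0.0, 1.0, (8, 8))
    if has_source:
        patch[3:5, 3:5] += 4.0  # the "fuzzy spot" to be circled
    return patch.ravel()

# Training set standing in for the hand-labeled survey images in the story.
X = np.array([make_patch(i % 2 == 0) for i in range(200)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(200)])

# A single logistic neuron, trained by gradient descent.
w, b = np.zeros(64), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

test = np.array([make_patch(True), make_patch(False)])
pred = (1.0 / (1.0 + np.exp(-(test @ w + b))) > 0.5).astype(int)
print(pred)  # [1 0]
```

A real detector would use a deeper network and far more data, but the workflow is the same: label some examples by hand, fit, then let the model label the rest.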
So it turns out the person who really catapulted the study and the science of microbiomics forward was an astrophysicist who did his Ph.D. in Stephen Hawking's lab on the collision of black holes and then subsequently, put the other team in a virtual reality, and he developed the first supercomputing center and so how did he get an interest in microbiomics? He has the capacity to do high performance computing and the kind of advanced analytics that are required to look at 100 times the volume of 3.2 billion base pairs of the human genome that are represented in the bacteria in our gut, and that has unleashed the whole science of microbiomics, which is going to really turn a lot of our assumptions of health and health care upside down. >> That's great, I mean, that's really transformational. So a lot of data. So I just wanted to let the audience know that we want to make this an interactive session, so I'll be asking for questions in a little bit, but I will start off with one question so that you can think about it. So I wanted to ask you, it looks like you've been thinking a lot about AI over the years. And I wanted to understand, even though AI's just really starting in health care, what are some of the new trends or the changes that you've seen in the last few years that'll impact how AI's being used going forward? >> So I'll start off. There was a paper published by a guy by the name of Tegmark at Harvard last summer that, for the first time, explained why neural networks are efficient beyond any mathematical model we would predict. And the title of the paper's fun. It's called Deep Learning Versus Cheap Learning. So there were two sort of punchlines of the paper. One is that the reason that mathematics doesn't explain the efficiency of neural networks is because there's a higher order of mathematics called physics. And the physics of the underlying data structures determined how efficient you could mine those data using machine learning tools. 
Much more so than any mathematical modeling. And so the second takeaway from that paper is that the substrate of the data that you're operating on and the natural physics of those data have inherent levels of complexity that determine whether or not a 12-layer neural net will get you where you want to go really fast, because when you do the modeling, for the math geeks in the audience, it's a factorial. So if there's 12 layers, there's 12 factorial permutations of different ways you could sequence the learning through those data. When you have 140 layers of a neural net, it's a much, much, much bigger number of permutations and so you end up being hardware-bound. And so, what Max Tegmark basically said is you can determine whether to do deep learning or cheap learning based upon the underlying physics of the data substrates you're operating on and have a good insight into how to optimize your hardware and software approach to that problem. >> So another way to put that is that neural networks represent the world in the way the world is sort of built. >> Exactly. >> It's kind of hierarchical. It's funny because, sort of in retrospect, like oh yeah, that kind of makes sense. But when you're thinking about it mathematically, we're like well, anything... A neural net can represent any mathematical function; therefore, it's fully general. And that's the way we used to look at it, right? So now we're saying, well actually decomposing the world into different types of features that are layered upon each other is actually a much more efficient, compact representation of the world, right? I think this is actually, precisely the point of kind of what you're getting at. What's really exciting now is that what we were doing before was sort of building these bespoke solutions for different kinds of data. NLP, natural language processing. 
There's a whole field, 25 plus years of people devoted to figuring out features, figuring out what structures make sense in this particular context. Those didn't carry over at all to computer vision. Didn't carry over at all to time series analysis. Now, with neural networks, we've seen it at Nervana, and now part of Intel, solving customers' problems. We apply a very similar set of techniques across all these different types of data domains and solve them. All data in the real world seems to be hierarchical. You can decompose it into this hierarchy. And it works really well. Our brains are actually general structures. As a neuroscientist, you can look at different parts of your brain and there are differences. Something that takes in visual information, versus auditory information is slightly different but they're much more similar than they are different. So there is something invariant, something very common between all of these different modalities and we're starting to learn that. And this is extremely exciting to me trying to understand the biological machine that is a computer, right? We're figuring it out, right? >> One of the really fun things that Ray Chrisfall likes to talk about is, and it falls in the genre of biomimicry, and how we actually replicate biologic evolution in our technical solutions so if you look at, and we're beginning to understand more and more how real neural nets work in our cerebral cortex. And it's sort of a pyramid structure so that the first pass of a broad base of analytics gets constrained to the next pass, gets constrained to the next pass, which is how information is processed in the brain. 
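On the "12 factorial" aside a few turns back: the count of orderings the speaker invokes grows super-exponentially with depth, which is easy to check directly. The numbers, not the neural-net interpretation, are all this sketch claims:

```python
import math

# 12 layers: about 4.8e8 orderings; 140 layers: astronomically many.
layers_12 = math.factorial(12)
layers_140 = math.factorial(140)

print(layers_12)              # 479001600
print(len(str(layers_140)))   # number of digits in 140!
print(layers_140 > 10**200)   # True
```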
So we're discovering increasingly that what we've been evolving towards, in terms of architectures of neural nets, is approximating the architecture of the human cortex and the more we understand the human cortex, the more insight we get into how to optimize neural nets, so when you think about it, with millions of years of evolution of how the cortex is structured, it shouldn't be a surprise that the optimization protocols, if you will, in our genetic code are profoundly efficient in how they operate. So there's a real role for looking at biologic evolutionary solutions, vis a vis technical solutions, and there's a friend of mine who worked with George Church at Harvard and actually published a book on biomimicry and they wrote the book completely in DNA so if all of you have your home DNA decoder, you can actually read the book on your DNA reader, just kidding. >> There's actually a start up I just saw in the-- >> Read-Write DNA, yeah. >> Actually it's a... He writes something. What was it? (response from crowd member) Yeah, they're basically encoding information in DNA as a storage medium. (laughter) The company, right? >> Yeah, that same friend of mine who coauthored that biomimicry book in DNA also did the estimate of the density of information storage. So a cubic centimeter of DNA can store an exabyte of data. I mean that's mind blowing. >> Naveen: Highly done soon. >> Yeah that's amazing. Also you hit upon a really important point there, that one of the things that's changed is... Well, there are two major things that have changed in my perception from let's say five to 10 years ago, when we were using machine learning. You could use data to train models and make predictions to understand complex phenomena. But they had limited utility and the challenge was that if I'm trying to build on these things, I had to do a lot of work up front. It was called feature engineering. 
I had to do a lot of work to figure out what are the key attributes of that data? What are the 10 or 20 or 100 pieces of information that I should pull out of the data to feed to the model, and then the model can turn it into a predictive machine. And so, what's really exciting about the new generation of machine learning technology, and particularly deep learning, is that it can actually learn those features from example data without you having to do any preprogramming. That's why Naveen is saying you can take the same sort of overall approach and apply it to a bunch of different problems. Because you're not having to fine tune those features. So at the end of the day, the two things that have changed to really enable this evolution are access to more data, and I'd be curious to hear from you where you're seeing data come from, what are the strategies around that. So access to data, and I'm talking millions of examples. So 10,000 examples most times isn't going to cut it. But millions of examples will do it. And then, the other piece is the computing capability to actually take millions of examples and optimize this algorithm in a single lifetime. I mean, back in '91, when I started, we literally would have thousands of examples and it would take overnight to run the thing. So now in the world of millions, and you're putting together all of these combinations, the computing has changed a lot. I know you've made some revolutionary advances in that. But I'm curious about the data. Where are you seeing interesting sources of data for analytics? >> So I do some work in the genomics space and there are more viable permutations of the human genome than there are people who have ever walked the face of the earth. And the polygenic determination of phenotypic expression, the translation of what our genome does to us in our physical experience in health and disease, is determined by many, many genes and the interaction of many, many genes and how they are up and down regulated. 
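Bob's contrast between hand-engineered features and learned features can be shown on toy data. The arrays, the informative column, and the use of a plain least-squares fit as the "learner" are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 16 raw measurements per sample; only column 5 carries signal.
signal = rng.normal(0.0, 1.0, (100, 16))
signal[:, 5] += 3.0
noise = rng.normal(0.0, 1.0, (100, 16))
X = np.vstack([signal, noise])
y = np.array([1.0] * 100 + [0.0] * 100)

# Feature engineering: a human guesses which summary statistics matter.
def hand_features(x: np.ndarray) -> np.ndarray:
    return np.array([x.mean(), x.max()])  # expert-chosen attributes

# Learned weighting: fit the raw data and let the model find what matters.
w = np.linalg.lstsq(X, y - y.mean(), rcond=None)[0]
print(int(np.argmax(np.abs(w))))  # 5: the model finds the informative column
```

Deep learning takes this one step further, learning layered nonlinear features rather than a single linear weighting, but the point is the same: the informative structure is discovered from examples, not hand-specified.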
And the complexity of disambiguating which 27 genes are affecting your diabetes and how they are up and down regulated by different interventions is going to be different than his. It's going to be different than his. And we already know that there are four or five distinct genetic subtypes of type II diabetes. So physicians still think there's one disease called type II diabetes. There are actually at least four or five genetic variants that have been identified. And so, when you start thinking about disambiguating, particularly when we don't know what 95 percent of DNA does still, what actually is the underlying cause, it will require this massive capability of developing these feature vectors, sometimes intuiting it, if you will, from the data itself. And other times, taking what's known knowledge to develop some of those feature vectors, and be able to really understand the interaction of the genome and the microbiome and the phenotypic data. So the complexity is high and because the variation complexity is high, you do need these massive numbers. Now I'm going to make a very personal pitch here. So forgive me, but if any of you have any role in policy at all, let me tell you what's happening right now. The Genetic Information Nondiscrimination Act, so-called GINA, written by a friend of mine, passed a number of years ago, says that no one can be discriminated against for health insurance based upon their genomic information. That's cool. That should allow all of you to feel comfortable donating your DNA to science, right? Wrong. You are 100% unprotected from discrimination for life insurance, long term care and disability. And it's being practiced legally today and there's legislation in the House, in markup right now, to completely undermine the existing GINA legislation and say that whenever there's another applicable statute like HIPAA, GINA is irrelevant, and none of the fines and penalties are applicable at all. 
So we need a ton of data to be able to operate on. We will not be getting a ton of data to operate on until we have the kind of protection we need to tell people, you can trust us. You can give us your data, you will not be subject to discrimination. And that is not the case today. And it's being further undermined. So I want to make a plea to any of you that have any policy influence to go after that, because we need this data to help the understanding of human health and disease and we're not going to get it when people look behind the curtain and see that discrimination is occurring today based upon genetic information. >> Well, I don't like the idea of being discriminated against based on my DNA. Especially given how little we actually know. There's so much complexity in how these things unfold in our own bodies, that I think anything that's being done is probably childishly immature and oversimplifying. So it's pretty rough. >> I guess the translation here is that we're all unique. It's not just a Disney movie. (laughter) We really are. And I think one of the strengths that I'm seeing, kind of going back to the original point, of these new techniques is that they go across different data types. It will actually allow us to learn more about the uniqueness of the individual. It's not going to be just from one data source. We're collecting data from many different modalities. We're collecting behavioral data from wearables. We're collecting things from scans, from blood tests, from genome, from many different sources. The ability to integrate those into a unified picture, that's the important thing that we're getting toward now. That's what I think is going to be super exciting here. Think about it, right? I can tell you to visualize a coin, right? You can visualize a coin. Not only do you visualize it. You also know what it feels like. You know how heavy it is. You have a mental model of that from many different perspectives. 
And if I take away one of those senses, you can still identify the coin, right? If I tell you to put your hand in your pocket, and pick out a coin, you probably can do that with 100% reliability. And that's because we have this generalized capability to build a model of something in the world. And that's what we need to do for individuals is actually take all these different data sources and come up with a model for an individual and you can actually then say what drug works best on this. What treatment works best on this? It's going to get better with time. It's not going to be perfect, because this is what a doctor does, right? A doctor who's very experienced, you're a practicing physician right? Back me up here. That's what you're doing. You basically have some categories. You're taking information from the patient when you talk with them, and you're building a mental model. And you apply what you know can work on that patient, right? >> I don't have clinic hours anymore, but I do take care of many friends and family. (laughter) >> You used to, you used to. >> I practiced for many years before I became a full-time geek. >> I thought you were a recovering geek. >> I am. (laughter) I do more policy now. >> He's off the wagon. >> I just want to take a moment and see if there's anyone from the audience who would like to ask, oh. Go ahead. >> We've got a mic here, hang on one second. >> I have tons and tons of questions. (crosstalk) Yes, so first of all, the microbiome and the genome are really complex. You already hit about that. Yet most of the studies we do are small scale and we have difficulty repeating them from study to study. How are we going to reconcile all that and what are some of the technical hurdles to get to the vision that you want? >> So primarily, it's been the cost of sequencing. Up until a year ago, it's $1000, true cost. Now it's $100, true cost. And so that barrier is going to enable fairly pervasive testing. 
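Naveen's coin example, a model that still answers when one modality is removed, can be mimicked with a toy score-combiner. The modality names and numbers below are entirely invented:

```python
# Toy "patient model": each modality gives an independent, noisy score for a condition.
modalities = {
    "wearable": 0.8,
    "genome": 0.7,
    "blood_test": 0.9,
    "imaging": 0.6,
}

def predict(scores: dict) -> float:
    """Average the available evidence; any subset of modalities still yields an answer."""
    return sum(scores.values()) / len(scores)

full = predict(modalities)
without_imaging = predict({k: v for k, v in modalities.items() if k != "imaging"})
print(round(full, 2), round(without_imaging, 2))  # 0.75 0.8
```

A real multimodal model would learn how to weight and combine these sources, but the robustness property is the same: drop one input and the remaining evidence still supports a prediction.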
It's not a real competitive market because there's one sequencer that is way ahead of everybody else. So the price is not $100 yet. The cost is below $100. So as soon as there's competition to drive the cost down, and hopefully, as soon as we all have the protection we need against discrimination, as I mentioned earlier, then we will have large enough sample sizes. And so, it is our expectation that we will be able to pool data from local sources. I chair the e-health work group at the Global Alliance for Genomics and Health which is working on this very issue. And rather than pooling all the data into a single, common repository, the strategy, and we're developing our five-year plan in a month in London, but the goal is to have a federation of essentially credentialed data enclaves. That's a formal method. HHS already does that so you can get credentialed to search all the data that Medicare has on people that's been deidentified according to HIPAA. So we want to provide the same kind of service, with appropriate consent, at an international scale. And there's a lot of nations that are talking very much about data nationality so that you can't export data. So this approach of a federated model to get at data from all the countries is important. The other thing is that blockchain technology is going to be very profoundly useful in this context. So David Haussler of UC Santa Cruz is right now working on a protocol using an open blockchain, a public ledger, where you can put data out. So for any typical cancer, you may have a half dozen of what are called somatic variants. Cancer is a genetic disease, so what has mutated to cause it to behave like a cancer? And if we look at those biologically active somatic variants, publish them on a blockchain that's public, there's not enough data there to reidentify the patient. 
But if I'm a physician treating a woman with breast cancer, rather than say what's the protocol for treating a 50-year-old woman with this cell type of cancer, I can say show me all the people in the world who have had this cancer at the age of 50, with these exact six somatic variants. Find the 200 people worldwide with that. Ask them for consent through a secondary mechanism to donate everything about their medical record, pool that information of the cohort of 200 that exactly resembles the one sitting in front of me, and find out, of the 200 ways they were treated, what got the best results. And so, that's the kind of future where a distributed, federated architecture will allow us to query and obtain a very, very relevant cohort, so we can basically be treating patients like mine, sitting right in front of me. Same thing applies for establishing research cohorts. There's some very exciting stuff at the convergence of big data analytics, machine learning, and blockchain. >> And this is an area that I'm really excited about and I think we're excited about generally at Intel. They actually have something called the Collaborative Cancer Cloud, which is this kind of federated model. We have three different academic research centers. Each of them has a very sizable and valuable collection of genomic data with phenotypic annotations. So you know, pancreatic cancer, colon cancer, et cetera, and we've actually built a secure computing architecture that can allow a person who's given the right permissions by those organizations to ask a specific question of specific data without ever sharing the data. So the idea is my data's really important to me. It's valuable. I want us to be able to do a study that gets the number from the 20 pancreatic cancer patients in my cohort, up to the 80 that we have in the whole group. But I can't do that if I'm going to just spill my data all over the world. And there are HIPAA and compliance reasons for that. 
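The federated pattern both speakers describe, where each enclave computes locally and only aggregates leave the site, can be sketched as follows. The sites, records, and variant names are hypothetical:

```python
# Hypothetical enclaves: raw patient records never leave their site.
site_a = [
    {"age": 50, "variants": {"v1", "v2", "v3"}},
    {"age": 62, "variants": {"v1"}},
]
site_b = [
    {"age": 50, "variants": {"v1", "v2", "v3"}},
]

# The cohort question: patients of this age carrying all of these variants.
QUERY = {"age": 50, "variants": {"v1", "v2", "v3"}}

def local_count(records) -> int:
    """Runs inside each enclave; only the tally is returned."""
    return sum(
        1 for r in records
        if r["age"] == QUERY["age"] and QUERY["variants"] <= r["variants"]
    )

# The coordinator sees only per-site counts, never the underlying data.
cohort_size = local_count(site_a) + local_count(site_b)
print(cohort_size)  # 2
```

Real systems like the ones discussed add credentialing, consent, and secure computation on top, but the core idea is the same: ship the question to the data, and only aggregate answers come back.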
There are business reasons for that. So what we've built at Intel is this platform that allows you to do different kinds of queries on this genetic data. And reach out to these different sources without sharing it. And then, the work that I'm really involved in right now and that I'm extremely excited about... This also touches on something that both of you said is it's not sufficient to just get the genome sequences. You also have to have the phenotypic data. You have to know what cancer they've had. You have to know that they've been treated with this drug and they've survived for three months or that they had this side effect. That clinical data also needs to be put together. It's owned by other organizations, right? Other hospitals. So the broader generalization of the Collaborative Cancer Cloud is something we call the data exchange. And it's a misnomer in a sense that we're not actually exchanging data. We're doing analytics on aggregated data sets without sharing it. But it really opens up a world where we can have huge populations and big enough amounts of data to actually train these models and draw the thread in. Of course, that really then hits home for the techniques that Nervana is bringing to the table, and of course-- >> Stanford's one of your academic medical centers? >> Not for that Collaborative Cancer Cloud. >> The reason I mentioned Stanford is because the reason I'm wearing this FitBit is because I'm a research subject at Mike Snyder's, the chair of genetics at Stanford, iPOP study, integrative personal omics profile. So I was fully sequenced five years ago and I get four full microbiomes. My gut, my mouth, my nose, my ears. Every three months and I've done that for four years now. And about a pint of blood. And so, to your question of the density of data, so a lot of the problem with applying these techniques to health care data is that it's basically a sparse matrix and there's a lot of discontinuities in what you can find and operate on. 
So what Mike is doing with the iPOP study is much the same as you described. Creating a highly dense longitudinal set of data that will help us mitigate the sparse matrix problem. (low volume response from audience member) Pardon me. >> What's that? (low volume response) (laughter) >> Right, okay. >> John: Lost the stool sample. That's got to be a new one I've heard now. >> Okay, well, thank you so much. That was a great question. So I'm going to repeat this and ask if there's another question. You want to go ahead? >> Hi, thanks. So I'm a journalist and I report a lot on these neural networks, a system that's better at reading mammograms than your human radiologists. Or a system that's better at predicting which patients in the ICU will get sepsis. These sort of fascinating academic studies that I don't really see being translated very quickly into actual hospitals or clinical practice. Seems like a lot of the problems are regulatory, or liability, or human factors, but how do you get past that and really make this stuff practical? >> I think there's a few things that we can do there and I think the proof points of the technology are really important to start with in this specific space. In other places, sometimes, you can start with other things. But here, there's a real confidence problem when it comes to health care, and for good reason. We have doctors trained for many, many years. School and then residencies and other kinds of training. Because we are really, really conservative with health care. So we need to make sure that technology's well beyond just the paper, right? These papers are proof points. They get people interested. They even fuel entire grant cycles sometimes. And that's what we need to happen. It's just an inherent problem, it's going to take a while. To get those things to a point where it's like well, I really do trust what this is saying. And I really think it's okay to now start integrating that into our standard of care. 
I think that's where you're seeing it. It's frustrating for all of us, believe me. I mean, like I said, I think personally one of the biggest things, I want to have an impact. Like when I go to my grave, is that we used machine learning to improve health care. We really do feel that way. But it's just not something we can do very quickly and as a business person, I don't actually look at those use cases right away because I know the cycle is just going to be longer. >> So to your point, the FDA, for about four years now, has understood that the process that has been given to them by their board of directors, otherwise known as Congress, is broken. And so they've been very actively seeking new models of regulation and what's really forcing their hand is regulation of devices and software because, in many cases, there are black box aspects of that and there's a black box aspect to machine learning. Historically, Intel and others are making inroads into providing some sort of traceability and transparency into what happens in that black box rather than say, overall we get better results but once in a while we kill somebody. Right? So there is progress being made on that front. And there's a concept that I like to use. Everyone knows Ray Kurzweil's book The Singularity Is Near? Well, I like to think that diadarity is near. And the diadarity is where you have human transparency into what goes on in the black box and so maybe Bob, you want to speak a little bit about... You mentioned that, in a prior discussion, that there's some work going on at Intel there. >> Yeah, absolutely. So we're working with a number of groups to really build tools that allow us... In fact Naveen probably can talk in even more detail than I can, but there are tools that allow us to actually interrogate machine learning and deep learning systems to understand, not only how they respond to a wide variety of situations but also where are there biases? 
I mean, one of the things that's shocking is that if you look at the clinical studies that our drug safety rules are based on, 50 year old white guys are the peak of that distribution, which I don't see any problem with that, but some of you out there might not like that if you're taking a drug. So yeah, we want to understand what are the biases in the data, right? And so, there's some new technologies. There's actually some very interesting data-generative technologies. And this is something I'm also curious what Naveen has to say about, that you can generate from small sets of observed data, much broader sets of varied data that help probe and fill in your training for some of these systems that are very data dependent. So that takes us to a place where we're going to start to see deep learning systems generating data to train other deep learning systems. And they start to sort of go back and forth and you start to have some very nice ways to, at least, expose the weakness of these underlying technologies. >> And that feeds back to your question about regulatory oversight of this. And there's the fascinating, but little known origin of why very few women are in clinical studies. Thalidomide causes birth defects. So rather than say pregnant women can't be enrolled in drug trials, they said any woman who is at risk of getting pregnant cannot be enrolled. So there was actually a scientific meritorious argument back in the day when they really didn't know what was going to happen post-thalidomide. So it turns out that the adverse, unintended consequence of that decision was we don't have data on women and we know in certain drugs, like Xanax, that the metabolism is so much slower, that the typical dosing of Xanax is women should be less than half of that for men. And a lot of women have had very serious adverse effects by virtue of the fact that they weren't studied. So the point I want to illustrate with that is that regulatory cycles... 
So people have known for a long time that that was a bad way of doing regulation. It should be changed. It's only recently getting changed in any meaningful way. So regulatory cycles and legislative cycles are incredibly slow. The rate of growth in technology is exponential. And so there's an impedance mismatch between the cycle time for regulation and the cycle time for innovation. And what we need to do... I'm working with the FDA, I've done four workshops with them on this very issue... is that they recognize that they need to completely revitalize their process. They're very interested in doing it. They're not resisting it. People think, oh, they're bad, the FDA, they're resisting. Trust me, there's nobody on the planet who wants to revise these review processes more than the FDA itself. And so they're looking at models and what I recommended is global cloud sourcing, and the FDA could shift from a regulatory role to one of doing two things: assuring the people who do their reviews are competent, and assuring that their conflicts of interest are managed, because if you don't have a conflict of interest in this very interconnected space, you probably don't know enough to be a reviewer. So there has to be a way to manage the conflict of interest and I think those are some of the key points that the FDA is wrestling with, because there are type one and type two errors. If you underregulate, you end up with another thalidomide and people born without fingers. If you overregulate, you prevent life-saving drugs from coming to market. So striking that balance across all these different technologies is extraordinarily difficult. If it were easy, the FDA would've done it four years ago. It's very complicated. >> Jumping on that question, so all three of you are in some ways entrepreneurs, right? Within your organization or started companies.
And I think it would be good to talk a little bit about the business opportunity here, where there's a huge ecosystem in health care, different segments, biotech, pharma, insurance payers, etc. Where do you see the ripe opportunity, or the industry ready to really take this on and make AI the competitive advantage? >> Well, the last question also included why aren't you using the result of the sepsis detection? We do. There were six or seven published ways of doing it. We did our own data, looked at it, we found a way that was superior to all the published methods and we apply that today, so we are actually using that technology to change clinical outcomes. As far as where the opportunities are... So it's interesting. Because if you look at what's going to be here in three years, we're not going to be using those big data analytics models for sepsis that we are deploying today, because we're just going to be getting a tiny aliquot of blood, looking for the DNA or RNA of any potential infection, and we won't have to infer that there's a bacterial infection from all these other ancillary, secondary phenomena. We'll see if the DNA's in the blood. So things are changing so fast that the opportunities people need to look for are generalizable and sustainable kinds of wins that are going to lead to a revenue cycle that justifies venture capital investing. So there are a lot of interesting opportunities in the space. But I think some of the biggest opportunities relate to what Bob has talked about in bringing many different disparate data sources together and really looking for things that are not comprehensible to the human brain or in traditional analytic models. >> I think we've also got to look a little bit beyond direct care. We're talking about policy and how we set up standards, these kinds of things. That's one area. That's going to drive innovation forward. I completely agree with that. Direct care is one piece.
How do we scale out many of the knowledge kinds of things that are embedded in one person's head and get them out to the world, democratize that? Then there's also development of the underlying technologies of medicine, right? Pharmaceuticals. The traditional way that pharmaceuticals are developed is actually kind of funny, right? A lot of it was started just by chance. Penicillin, a very famous story, right? It's not that different today unfortunately, right? It's conceptually very similar. Now we've got more science behind it. We talk about domains and interactions, these kinds of things, but fundamentally, the problem is what we in computer science call NP-hard: it's too difficult to model. You can't solve it analytically. And this is true for all these kinds of natural sorts of problems, by the way. And so there's a whole field around this, molecular dynamics and modeling these sorts of things, that is actually being driven forward by these AI techniques. Because it turns out, our brain doesn't do magic. It actually doesn't solve these problems. It approximates them very well. And experience allows you to approximate them better and better. Actually, it goes a little bit to what you were saying before. It's like simulations and forming your own networks and training off each other. There are these emerging dynamics. You can simulate steps of physics. And you come up with a system that's much too complicated to ever solve. Three pool balls on a table is one such system. It seems pretty simple. You know how to model that, but it actually turns out you can't predict where a ball's going to be once you inject some energy into that table. So something that simple is already too complex. So neural network techniques actually allow us to start making those tractable. These NP-hard problems. And things like molecular dynamics and actually understanding how different medications and genetics will interact with each other is something we're seeing today.
And so I think there's a huge opportunity there. We've actually worked with customers in this space. And I'm seeing it. Like Roche is acquiring a few different companies in this space. They really want to drive it forward, using big data to drive drug development. It's kind of counterintuitive. I never would've thought it had I not seen it myself. >> And there's a big related challenge. Because in personalized medicine, there are smaller and smaller cohorts of people who will benefit from a drug that still takes two billion dollars on average to develop. That is unsustainable. So there's an economic imperative of overcoming the cost and the cycle time for drug development. >> I want to take a go at this question a little bit differently, thinking about not so much where are the industry segments that can benefit from AI, but what are the kinds of applications that I think are most impactful. So if this is what a skilled surgeon needs to know at a particular time to care properly for a patient, this is where most, this area here, is where most surgeons are. They are close to the maximum knowledge and ability to assimilate as they can be. So it's possible to build complex AI that can pick up on that one little thing and move them up to here. But it's not a gigantic accelerator, amplifier of their capability. But think about other actors in health care. I mentioned a couple of them earlier. Who do you think the least trained actor in health care is? >> John: Patients. >> Yes, the patients. The patients are really very poorly trained, including me. I'm abysmal at figuring out who to call and where to go. >> Naveen: You know as much as the doctor, right? (laughing) >> Yeah, that's right. >> My doctor friends always hate that. Know your diagnosis, right? >> Yeah, Dr. Google knows.
So the opportunities that I see that are really, really exciting are when you take an AI agent, like sometimes I like to call it a contextually intelligent agent, or a CIA, and apply it to a problem where a patient has a complex future ahead of them that they need help navigating. And you use the AI to help them work through it. Post-operative. You've got PT. You've got drugs. You've got to be looking for side effects. An agent can actually help you navigate. It's like your own personal GPS for health care. So it's giving you the information that you need about you for your care. That's my definition of Precision Medicine. And it can include genomics, of course. But it's much bigger. It's that broader picture and I think that a sort of agent way of thinking about things and filling in the gaps where there's less training and more opportunity, is very exciting. >> Great startup idea right there by the way. >> Oh yes, right. We'll meet you all out back for the next startup. >> I had a conversation with the head of the American Association of Medical Specialties just a couple of days ago. And what she was saying, and I'm aware of this phenomenon, but all of the medical specialists are saying, you're killing us with these stupid board recertification trivia tests that you're giving us. So if you're a cardiologist, you have to remember something that happens in one in 10 million people, right? And they're saying that's irrelevant now, because we've got advanced decision support coming. We have these kinds of analytics coming. Precisely what you're saying. So it's human augmentation of decision support that is coming at blazing speed towards health care. So in that context, it's much more important that you have a basic foundation, you know how to think, you know how to learn, and you know where to look. So we're going to be human-augmented learning systems much more so than in the past. And so the whole recertification process is being revised right now.
(inaudible audience member speaking) Speak up, yeah. (person speaking) >> What makes it fathomable is that you can-- (audience member interjects inaudibly) >> Sure. She was saying that our brain is really complex and large and even our brains don't know how our brains work, so... are there ways to-- >> What hope do we have kind of thing? (laughter) >> It's a metaphysical question. >> It circles all the way down, exactly. It's a great quote. I mean basically, you can decompose every system. Every complicated system can be decomposed into simpler, emergent properties. You lose something perhaps with each of those, but you get enough to actually understand most of the behavior. And that's really how we understand the world. And that's what we've learned in the last few years what neural network techniques can allow us to do. And that's why our brain can understand our brain. (laughing) >> Yeah, I'd recommend reading Chris Farley's last book because he addresses that issue in there very elegantly. >> Yeah we're seeing some really interesting technologies emerging right now where neural network systems are actually connecting other neural network systems in networks. You can see some very compelling behavior because one of the things I like to distinguish AI versus traditional analytics is we used to have question-answering systems. I used to query a database and create a report to find out how many widgets I sold. Then I started using regression or machine learning to classify complex situations from this is one of these and that's one of those. And then as we've moved more recently, we've got these AI-like capabilities like being able to recognize that there's a kitty in the photograph. But if you think about it, if I were to show you a photograph that happened to have a cat in it, and I said, what's the answer, you'd look at me like, what are you talking about? I have to know the question. 
So where we're cresting with these connected sets of neural systems, and with AI in general, is that the systems are starting to be able to, from the context, understand what the question is. Why would I be asking about this picture? I'm a marketing guy, and I'm curious about what Legos are in the thing or what kind of cat it is. So it's being able to ask a question, and then take these question-answering systems, and actually apply them, so it's this ability to understand context and ask questions that we're starting to see emerge from these more complex hierarchical neural systems. >> There's a person dying to ask a question. >> Sorry. You have hit on several different topics that all coalesce together. You mentioned personalized models. You mentioned AI agents that could help you as you're going through a transitionary period. You mentioned data sources, especially across long time periods. Who today has access to enough data to make meaningful progress on that, not just when you're dealing with an issue, but day-to-day improvement of your life and your health? >> Go ahead, great question. >> That was a great question. And I don't think we have a good answer to it. (laughter) I'm sure John does. Well, I think every large healthcare organization and various healthcare consortiums are working very hard to achieve that goal. The problem remains in creating semantic interoperability. So I spent a lot of my career working on semantic interoperability. And the problem is that if you don't have well-defined, or self-defined data, and if you don't have well-defined and documented metadata, and you start operating on it, it's real easy to reach false conclusions and I can give you a classic example. It's well known, with hundreds of studies looking at when you give an antibiotic before surgery and how effective it is in preventing a post-op infection. Simple question, right?
So most of the literature done prospectively was done in institutions where they had small sample sizes. So if you pool that, you get a little bit more noise, but you get a more confirming answer. What was done at a very large, not my own, but a very large institution... I won't name them for obvious reasons, but they pooled lots of data from lots of different hospitals, where the data definitions and the metadata were different. Two examples. When did they indicate the antibiotic was given? Was it when it was ordered, dispensed from the pharmacy, delivered to the floor, brought to the bedside, put in the IV, or when the IV starts flowing? Different hospitals used a different metric of when it started. When did surgery occur? When they were wheeled into the OR, when they were prepped and draped, when the first incision occurred? All different. And they concluded quite dramatically that it didn't matter when you gave the pre-op antibiotic and whether or not you get a post-op infection. And everybody who was intimate with the prior studies just completely ignored and discounted that study. It was wrong. And it was wrong because of the lack of commonality and the normalization of data definitions and metadata definitions. So because of that, this problem is much more challenging than you would think. If it were so easy as to put all these data together and operate on it, normalize and operate on it, we would've done that a long time ago. It's... Semantic interoperability remains a big problem and we have a lot of heavy lifting ahead of us. I'm working with the Global Alliance for Genomics and Health, for example. There are like 30 different major ontologies for how you represent genetic information. And different institutions are using different ones in different ways in different versions over different periods of time. That's a mess. >> Are all those issues applicable when you're talking about a personalized data set versus a population?
>> Well, so N of 1 studies and single-subject research is an emerging field of statistics. So there are some really interesting new models, like stepped-wedge analytics, for doing that on small sample sizes, recruiting people asynchronously. There are single-subject research statistics. You compare yourself with yourself at a different point in time, in a different context. So there are emerging statistics to do that and as long as you use the same sensor, you won't have a problem. But people are changing their remote sensors and you're getting different data. It's measured in different ways with different sensors, with different normalization and different calibration. So yes. It even persists in the N of 1 environment. >> Yeah, you have to get started with a large N that you can apply to the N of 1. I'm actually going to attack your question from a different perspective. So who has the data? The millions of examples to train a deep learning system from scratch. It's a very limited set right now. Technologies such as the Collaborative Cancer Cloud and The Data Exchange are definitely impacting that and creating larger and larger sets of critical mass. And again, notwithstanding the very challenging semantic interoperability questions. But there's another opportunity. Kay asked about what's changed recently. One of the things that's changed in deep learning is that we now have modules that have been trained on massive data sets that are actually very smart at certain kinds of problems. So, for instance, you can go online and find deep learning systems that actually can recognize, better than humans, whether there's a cat, dog, motorcycle, or house in a photograph. >> From Intel, open source. >> Yes, from Intel, open source. So here's what happens next. Because most of that deep learning system is very expressive. That combinatorial mixture of features that Naveen was talking about, when you have all these layers, there's a lot of features there.
They're actually very general to images, not just finding cats, dogs, trees. So what happens is you can do something called transfer learning, where you take a small or modest data set and actually reoptimize the system for your specific problem very, very quickly. And so we're starting to see a place where you can... On one end of the spectrum, we're getting access to the computing capabilities and the data to build these incredibly expressive deep learning systems. And over here on the right, we're able to start using those deep learning systems to solve custom versions of problems. Just last weekend or two weekends ago, in 20 minutes, I was able to take one of those general systems and create one that could recognize all different kinds of flowers. Very subtle distinctions, that I would never be able to know on my own. But I happened to be able to get the data set and literally, it took 20 minutes and I have this vision system that I could now use for a specific problem. I think that's incredibly profound and I think we're going to see this spectrum of wherever you are in your ability to get data and to define problems and to put hardware in place to see really neat customizations and a proliferation of applications of this kind of technology. >> So one other trend I think, I'm very hopeful about... So this is a hard problem clearly, right? I mean, getting data together, formatting it from many different sources, it's one of these things that's probably never going to happen perfectly. But one trend that is extremely hopeful to me is the fact that the cost of gathering data has precipitously dropped. Building that thing is almost free these days. I can write software and put it on 100 million cell phones in an instant. You couldn't do that even five years ago, right? And so, the amount of information we can gain from a cell phone today has gone up. We have more sensors. We're bringing online more sensors.
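The transfer learning workflow Bob describes above — freeze a large pretrained feature extractor, then quickly train a small head on a modest, task-specific data set — can be sketched in a few lines. This is a toy illustration, not his actual flower classifier: a fixed random projection stands in for the pretrained deep vision network, and two synthetic Gaussian blobs stand in for the flower photos, but the shape of the workflow is the same — frozen features, a small trainable head, and training that takes seconds rather than weeks.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic stand-in for a modest, task-specific data set ---
# Two "flower species", each a Gaussian blob in a 64-dim input space.
n_per_class, dim = 200, 64
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, dim)),
               rng.normal(1.5, 1.0, (n_per_class, dim))])
y = np.repeat([0, 1], n_per_class)

# --- "Pretrained" feature extractor (frozen) ---
# In real transfer learning this is a deep network trained on a massive
# data set; here a fixed random projection plays that role.
W_frozen = rng.normal(size=(dim, 16)) / np.sqrt(dim)

def features(x):
    return np.tanh(x @ W_frozen)   # frozen: never updated below

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# --- Small trainable head, optimized only on the modest data set ---
F = features(X)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):               # quick: seconds, not weeks
    p = sigmoid(F @ w + b)
    w -= 0.5 * (F.T @ (p - y)) / len(y)   # logistic-regression gradient
    b -= 0.5 * float(np.mean(p - y))

accuracy = float(np.mean((sigmoid(F @ w + b) > 0.5) == y))
print(f"head-only training accuracy: {accuracy:.2f}")
```

Only the 17 head parameters are trained here; the extractor stays fixed, which is why a few hundred labeled examples (or, in the real case, a weekend's worth of flower photos) are enough.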
People have Apple Watches and they're sending blood data back to the phone, so once we can actually start gathering more data and doing it cheaper and cheaper, it actually doesn't matter where the data is. I can write my own app. I can gather that data and I can start driving the correct inferences or useful inferences back to you. So that is a positive trend, I think, here and personally, I think that's how we're going to solve it: by gathering from that many different sources cheaply. >> Hi, my name is Pete. I've very much enjoyed the conversation so far but I was hoping perhaps to bring a little bit more focus into Precision Medicine and ask two questions. Number one, how have you applied the AI technologies that are emerging so rapidly to natural language processing? I'm particularly interested in, if you look at things like Amazon Echo or Siri, or the other voice recognition systems that are based on AI, they've just become incredibly accurate and I'm interested in specifics about how I might use technology like that in medicine. So where would I find a medical nomenclature and perhaps some reference to a back end that works that way? And the second thing is, what specifically is Intel doing, or making available? You mentioned some open source stuff on cats and dogs and stuff but I'm the doc, so I'm looking at the medical side of that. What are you guys providing that would allow us who are kind of geeks on the software side, as well as being docs, to experiment a little bit more thoroughly with AI technology? Google has a free AI toolkit. Several other people have come out with free AI toolkits in order to accelerate that. There's special hardware now, with graphics and other processors hitting amazing speeds. And so I was wondering, where do I go in Intel to find some of those tools and perhaps learn a bit about the fantastic work that you guys are already doing at Kaiser?
>> Let me take that first part and then we'll be able to talk about the MD part. So in terms of technology, this is what's extremely exciting now about what Intel is focusing on. We're providing those pieces. So you can actually assemble and build the application. How you build that application specific for MDs and the use cases is up to you or the one who's filling out the application. But we're going to power that technology for multiple perspectives. So Intel is already the main force behind The Data Center, right? Cloud computing, all this is already Intel. We're making that extremely amenable to AI and setting the standard for AI in the future, so we can do that from a number of different mechanisms. For somebody who wants to develop an application quickly, we have hosted solutions. Intel Nervana is kind of the brand for these kinds of things. Hosted solutions will get you going very quickly. Once you get to a certain level of scale, where costs start making more sense, things can be bought on premise. We're supplying that. We're also supplying software that makes that transition essentially free. Then taking those solutions that you develop in the cloud, or develop in The Data Center, and actually deploying them on device. You want to write something on your smartphone or PC or whatever. We're actually providing those hooks as well, so we want to make it very easy for developers to take these pieces and actually build solutions out of them quickly so you probably don't even care what hardware it's running on. You're like here's my data set, this is what I want to do. Train it, make it work. Go fast. Make my developers efficient. That's all you care about, right? And that's what we're doing. We're taking it from that point at how do we best do that? We're going to provide those technologies. In the next couple of years, there's going to be a lot of new stuff coming from Intel. >> Do you want to talk about AI Academy as well? 
>> Yeah, that's a great segue there. In addition to this, we have an entire set of tutorials and other online resources and things we're going to be bringing into the academic world for people to get going quickly. So that's not just enabling them on our tools, but also just general concepts. What is a neural network? How does it work? How does it train? All of these things are available now and we've made a nice, digestible class format that you can actually go and play with. >> Let me give a couple of quick answers in addition to the great answers already. So you're asking why can't we use medical terminology and do what Alexa does? Well, no, you may not be aware of this, but Andrew Ng, who was the AI guy at Google, who was recruited by Baidu, they have a medical chat bot in China today. I don't speak Chinese. I haven't been able to use it yet. There are two similar initiatives in this country that I know of. There are probably a dozen more in stealth mode. But Lumiata and Health Cap are doing chat bots for health care today, using medical terminology. You have the compound problem of semantic normalization within a language, compounded across languages. I've done a lot of work with an international organization called SNOMED, which translates medical terminology. So you're aware of that. We can talk offline if you want, because I'm pretty deep into the semantic space. >> Go google Intel Nervana and you'll see all the websites there. It's intel.com/ai or nervanasys.com. >> Okay, great. Well this has been fantastic. I want to, first of all, thank all the people here for coming and asking great questions. I also want to thank our fantastic panelists today. (applause) >> Thanks, everyone. >> Thank you. >> And lastly, I just want to share one bit of information. We will have more discussions on AI next Tuesday at 9:30 AM. Diane Bryant, who is our general manager of the Data Center Group, will be here to do a keynote. So I hope you all get to join that.
Thanks for coming. (applause) (light electronic music)
Malcolm Gladwell, Best-selling Author - QuickBooks Connect 2016 - #QBConnect #theCUBE
>> Voiceover: Live from San Jose, California, in the heart of Silicon Valley, it's theCUBE. Covering QuickBooks Connect 2016, sponsored by Intuit QuickBooks. Now, here are your hosts, Jeff Frick and John Walls. >> Welcome back here on theCUBE as we continue our coverage here at QuickBooks Connect 2016, live from San Jose at the Convention Center. 5,000 attendees, the third year of this event, more than ever, and certainly that explosive growth is personified in what's happening here. On this floor and the keynote stage, and of course at home, if you're a small business owner you know exactly what we're talking about. Along with Jeff Frick, I'm John Walls and we're joined now by probably one of the most popular, most widely read authors in America today. Malcolm Gladwell, five-time New York Times bestselling author. Congratulations on that. And the Revisionist History podcast, which we love. I love the Wilt Chamberlain episode, The Big Man Can't Shoot. Thanks for joining us. Great to have you. >> Delighted to be here. >> So, first off, tell us about, and the whole spirit of this show is about the entrepreneurial capabilities of so many people in the workplace today. What's your thought about entrepreneurism, if you will, and what does it take to be a good outside-the-box thinker? Like so many of these folks are. >> Well there ... The explosion ... Here we are in the middle of Silicon Valley and what this part of the country has done to change the culture of the entire world's economy in the last 20 years, 25 years is nothing short of incredible. Entrepreneurship has gone from something that people thought of as the province of wackos and weirdos and strange people to a kind of thing that kids aspire to do and be. That's an amazing transformation. And I think when we ... What's happened over the course of that transformation is we've discovered that the definition of what it takes to be good is a lot broader than we thought.
That many different kinds of people, using many different kinds of strategies, can be effective at starting businesses and achieving. I think that's been the great take-home lesson of this entrepreneurial explosion of the last generation. >> I think probably in all of your works there are pieces you could extract and apply to this world, but what really struck me, I think, about David and Goliath, about advantages, disadvantages, and making the most of your strengths, basically, how do you see that translating, or how would you want to communicate that to somebody, a small business owner, who thinks "Man, I'm up against the wall"? "How am I going to cut through the clutter?" "How am I going to get there?" All this sweat equity. But yet, there are advantages that they have. >> Yeah. Yeah, because this goes to this issue of learning strategies. There's a kind of learning called compensation learning, where you are learning out of weakness, not out of strength. You're learning from your failures, and that kind of learning is a lot harder to do, but it's a lot more powerful. So the task of the small business owner, who is facing a whole series of disadvantages and weaknesses relative to much larger competitors, there's no question, it's a harder way to go. But if you can pull it off, you'll end up in a much stronger position. If you can be one of those people who can do compensation learning, and in that book I talk, for example, about how many entrepreneurs are dyslexic, and that's a beautiful example of that. Some portion of people who suffer from quite a serious learning disorder, not all of them, some portion of them manage to turn that around into an advantage. To take something, to take a basic inability to read, and turn that into developing skills of delegation and leadership and problem solving, and developing an incredible resilience, the ability to cope with failure.
They turn a weakness into a strength and they end up being far more powerful than they would be as a result. And when I interviewed all these successful, dyslexic entrepreneurs for that book, what was amazing was that all of them said, "I did not succeed despite my disability, I succeeded because of it." And that's the crux of it. And so I think there is a silver lining to many of the clouds that small business owners face. >> It's a really powerful statement because so often, people are using drugs and medication and other things to kind of normalize people that are maybe not in the mean, that are on the fringe. But in fact, it's their ability to put a different lens, and see things differently that opens up an opportunity that the regular person just trucking down the road didn't see right in front of them. >> That's what I meant when I said earlier, talking about how our kind of definition of what it takes to be a successful entrepreneur is expanding. I think we're beginning to understand that lots of traits that we once thought of as just problematic have unexpected benefits. Like I remember once reading someone who was putting out that basically, most of the great research scientists in the world have OCD. And you kind of have to have OCD if you want to be ... 'Cause what are you doing? You're spending hours and hours in the lab doing the same incredibly precise experiment over and over and over again, and measuring your results to the slightest. That's OCD behavior that has found a beautiful home. Right? Has found a world where you need to be that way, right? And I read that as like, "That's lovely." These are people who we drugged up and pushed off to the fringes two generations ago, and now we've found a home for them in labs where they're doing incredibly productive and satisfying work. 
>> Yeah, I think you profiled in one of the podcasts, a cancer researcher who you said nobody really likes the guy, he's kind of an ordinary guy, but he was just so laser focused on the very specific problem that he was trying to solve. He didn't really care. That's what he was all about. >> Yeah, no, this has been a lovely development in our understanding of human capacity. >> So where do the ideas come from? I'm one of the many fans and I've read, and every time I read one of your books, it never ceases to amaze me how much you make me think. Which is, I think, why we're all so attracted to it. Because it seems so obvious, right? After you present this beautiful, elegant case, like "I never thought of that." Where do those ideas come from? What motivates you to say "I'm going to write blank. I'm going to do tipping point." >> I wish I had a system, 'cause right now I'm planning the next season of my podcast, so I need 10 more ideas for that, and I'm starting to write a new book so I need 80,000 words for that. And I'm wondering, I wish I had a big bucket full of ideas. (laughter) So I'm running around with my head cut off talking to people, but I spent the summer ... I probably read 40 books this summer to do with ... Apart from, I'm not talking about novels and fillers, and serious books that I'm trying to get. And I've been going around talking to people, just talking to interesting people trying to work out what I'm interested in. And trying to just uncover interesting things that will prompt me to go in cool new directions. There is a kind of, you have to let your mind ... It's like, the farmer lets his field go fallow for a while. You've got to have a fallow period where you just let everything regenerate and then you plant the crop again. >> But somehow reading 40 books doesn't sound like, to me, you're letting your mind go fallow. >> Well I didn't have a ... I was literally just lying around reading books. It seemed pretty fallow to me. 
>> What was your favorite one out of that read? Or the most enlightening one out of that read? >> I got on these weird side tracks this summer. I became obsessed with Churchill's Best Friend. Churchill had a best friend who betrays him. And it's this incredibly moving story. And I don't know how it fits in what I want to do, but I want to try and make it fit, 'cause it's such a weird and troubling story about this, I mean a truly transcendent figure in history who has a best friend who stabs him in the back with consequences for the world. Anyway, so I read like seven bizarre, weird, obscure books about this guy. And I was like "There's something there I think." >> He's out there, yeah. >> Alright, so we'll pick something that was a little more topical. Last night, they had a drink making robot machine over in the corner making drinks. And it just brings up, as we get into more automation, more connected systems. We had the huge knockout of the web last week from the East coast. As you look at the future, there's the happy future, where the machines do all the hard work and we get to sit around and read books like you did, which is fantastic. And then there's the darker potential future, where the machines take everyone's jobs. What are people going to do? And if it can make drinks and it can diagnose disease and read every manual that came out. How do people fit? And then there's the middle ground, right? The best chess player is the best chess player and a machine, not either or. So I'm just curious to get your thoughts as we look to the next big wave of AI and machine learning and automation, how you see that shaking out. >> I think it's important not to overstate how much of our lives we will be willing to let machines take over. So it's been very interesting for me as a writer, to observe, for example, what happened with eBooks over the last 10 years. So eBooks come along and everyone says, "The printed book is over. It's going to all going to be on ... 
Why would you go and lug around a big, heavy book when you can get for a fraction of the cost something that'll be ..." And so there were all these gloom and doom, and expectations, and what happens? Well, it turns out that eBooks are still a fairly sizeable portion of the market place. But it turns out that most people actually want to read a book, a physical object, that that's more pleasurable somehow, that the interaction with this thing, this pages and paper, is pleasing. It's part of the experience. And I think that's a useful ... No, that's not a robot and that's not AI, but it's an important reminder that the interactions and the activities that make up our lives are not just functional activities. They are opportunities for enjoyment and engagement, and part of the reason you go to a restaurant is not just to eat the food, but to engage with the people in the restaurant. Part of the pleasure is the person who brings you the wine bottle and gives you a little spiel. Now, I can replace that person with a robot, but the question is do you want to? Now, you can do it. And I can imagine a future where the robot brings you the best wine in the world and does some algorithm and gives you the finest wine. But I don't know, if I'm having a nice night out and I'm paying 60 dollars a plate for my dinner, I kind of want the human interaction. I mean, it's part of the pleasure. Same thing with self-driving cars. It baffles me as a kind of car guy how everyone assumes that "Oh, well, by 2020, it'll all be self-driving cars." Wait a minute, what if I enjoy driving a car? We've forgotten this. It's actually quite a pleasant thing to go and to make decisions unconsciously and consciously and drive down the road. And I like a manual transmission, I like the feel of driving a car. I don't want to give that up. Why should I have to give that up? So it's like, we can't get ahead of ourselves. You mentioned the chess thing, which is a great example of this. 
Can you make a machine that will beat a person at chess? Yes, you can. But it's not chess. Chess is a gameplay between two people. That's why it's interesting. If it's played between two machines no one will watch it! So it's this absurd thing. I can also make a machine that can run faster than Usain Bolt. It's called a car. Do I want to watch a race between a car and Usain Bolt? No. Why? Because what's pleasurable is watching human beings race. >> But Jeff hit on something, and then you touched on it with the car, and I think about GPS. And how it wasn't that long ago, and I kind of sound like my grandfather now or my father, that we just drove around, right? And if you came to the traffic, "Oh God, I've hit traffic." But now we use applications that take us, and they're using their intelligence. Is it possible, can you see with this generation of kids coming up now, that artificial intelligence kind of makes our personal thinking obsolete? And we don't process like we do, we don't evaluate, we don't analyze, and so we're raising a whole different kind of human, because of the interaction with technology or what we can sign to technology, because we give up on it. >> Well it'd be different. I think that, so let's stick with cars for a moment. I think now we have a world where a whole class of people drive their car to work in the morning. And when they're driving their car, the number of things they can do with their imagination and mind is limited. They can listen to music or the news or a podcast, or they can just sit there, but they can't ... They can maybe talk on a phone even though they shouldn't, but they can't do work and they can't lie in the back and take a nap, and they can't daydream, and they can't have a meaningful interaction with more than one person. What we're going to move to is a world where some people will give up whatever kind of pleasure or interaction that came from driving a car, and replace it with another kind of interaction. 
So driving a car becomes ... The time that you're in a car becomes a place where an infinite number of things can happen, as opposed to five things. And I sort of think that's what the world looks like: we get this incredibly complicated mix. Medicine becomes some mixture where the computer does all the easy stuff, but half of medicine is about being reassured. It's about your personal fears. It's not about the diagnosis, or which drug you take. And for that stuff, I imagine that we're going to have much longer, deeper, more meaningful conversations with our doctors 15 years from now, when the computer, or the AI, the robot, has taken all the easy stuff off the table. So in many ways, that world allows for much richer personal interactions than the one we're in now. The doctor really will have ... My doctor has no time for me now. He's like, "I've got to move around." >> "Got to go." >> In ten years, it's possible my doctor will be able to sit down with me for half an hour or 45 minutes twice a year and really talk about what's going on with me, and that's the promise of the future. I don't think we're going to have a situation where everything's done by the robot. >> Well, this is one of those occasions where I truly wish we had tons more time, but you have a busy schedule and so we're going to allow you to go on, but thank you so much ... >> Thank you. It was super fun. >> John: For sharing this time with us. We've thoroughly enjoyed it. >> Jeff: Look forward to the keynote later this afternoon as well. >> And we look forward to the next 80,000 words, so good luck with that too! >> Thank you. >> Malcolm Gladwell, joining us here on the Cube. Back with more from San Jose right after this. (upbeat music)