Dave Green, Sr Director, IT & Informatics, Genomic Health
>> Announcer: Live, from Las Vegas, it's theCUBE! Covering Informatica World 2018. Brought to you by Informatica. >> Hey, welcome back everyone to theCUBE, live here in Las Vegas at The Venetian. I'm John Furrier, the co-host of theCUBE. We're here at Informatica World 2018 with Peter Burris, analyst at Wikibon, and SiliconANGLE, and theCUBE. Our next guest is Dave Green, Senior Director of IT & Informatics, Genomic Health, welcome to theCUBE! >> Thank you, great to be here. >> You guys are doin' a ton of data, so we want to get into it. Very cool, obviously, a lot of testing, lot of proprietary work, you know, this is where the power of data can come in. But first, before we get into it, just take a quick minute to describe the company, what you do there, and what the mission is. >> Absolutely, yep. So I work for Genomic Health. The company was founded in the year 2000, and it was founded on the premise that there was a lot of science and technology around the areas of genetic testing, but it hadn't been applied to the world of cancer specifically. So we launched the first product on the market in 2004, and it's focused on breast cancer, based in the US. And the job the company has is to really help physicians sit down with their patient, and inform the treatment decision that the physician and patient have to make together. So the situation is, let's use breast cancer again: if a woman's been diagnosed with breast cancer, she'll often go for a biopsy, where the cancer cells are actually extracted from the body. We get a sample of those in our lab, we run through a lot of diagnostic testing, and based upon a lot of clinical research, evidence, and studies we've done over the years, that's a lot of data there in itself. We can help predict the likelihood of recurrence of cancer, over five years, 10 years, based upon different types of treatment options.
And we basically, even though we do diagnostic testing, we're really an information provider, an information company; that's the way we see ourselves. The idea is to have the physician sit down with the patient, and we deliver a report at the end of the day. So all this testing, all the data crunching, everything we do, ends up with a relatively simple report. There's a risk curve, and based upon that, different types of treatment options make sense for the patient. So maybe a woman is deciding, do I need chemotherapy, yes or no? Well, chemotherapy's incredibly expensive from a healthcare economics perspective, and it's incredibly invasive in terms of side effects. You know, people know about hair loss, but there's a ton of other really, really detrimental side effects too. So if there's no clinical benefit from going through chemotherapy, we can help inform that decision, right? Give patients confidence that they know it's the right thing to do, that they're on a certain side of a risk curve, or that there may be better options. Drug therapies these days, every single day, there's a new drug comin' on the market to combat certain types of cancer in some way. So that's our job, that's the gist of what we do. >> So what do you do with the data, what are the strategic initiatives around data? Obviously data is key to this, 'cause you need to analyze data, but just give us an order of magnitude, a taste or a sample of some of the things that you're doing with data. >> Right, so I mean, we're growing as a company, and it's a growing space, insofar as there's more and more people throughout the world with cancer every day, and we're growing internationally too. So we have data that we receive from all over the world. In terms of some of the key initiatives we have underway: we're doing a lot more in the space of partnering. We've built up a commercial infrastructure over the course of years, and that commercial infrastructure goes with the IT infrastructure that matches to it.
So we're using that commercial channel to really expand the number of tests we're able to bring to market, and we do that by partnering with smaller labs. They're not able to build the same infrastructure themselves, so we can use all of the capabilities that we've built up, bring them into our ecosystem, and extend the reach that we have to bring in other types of product tests. So we look at things like data, and one of the initiatives we have right now, it's a very difficult space to operate in, right? Healthcare is incredibly complex. From a data perspective, it's messy. There are people involved, humans involved; literally we're all unique, we're all individual. The job we have, and in many of these cases where we bring in third-party labs as well, is to take care of all of the processing of the commercial and clinical related information we need to get from patients. We need to move through, and make sure we get the right information the first time through. And we use that information, then, to trigger the rest of the testing process. It's both clinical and business; it's related to healthcare insurance providers, and there's government mandated information we have to collect as well. We've got to bring all these different facets together. >> You've got a lot of moving parts and dynamics, what's the relationship with Informatica? As a customer, you're using which product? Could you just take a minute to explain? >> Yep, absolutely, yeah, so we're a customer. The couple main products we use: so, we use their master data management product. It's very important for us that we know and recognize the right physician, right? The physician linked to the patient, we've got to get that right. We've got to have accurate information. >> John: That's a big one. >> That's a big one, yeah.
>> And we use that for the order information we get in, and we also use it for the billing side too, so ultimately we get paid, 'cause we want to reinvest in our patients, we want to put them through more of our tests. That's one angle, so it's incredibly important for us. >> John: It's a critical component to the business model. >> It's, yeah, absolutely, it's mission critical. If we get that wrong, forget the rest of the tests, forget the science, the technology, all of the cool stuff we do; it's a basic, fundamental thing we have to get right. >> Well, you didn't just blow a campaign, you blew a life. >> Yeah! Or at least we're slowing down getting the right information, the accurate information. >> John: That'll ruin your reputation, I mean, everything, dominoes will just fall. >> It's incredibly important for us to get right, and that's one thing we've got from Informatica. The other thing we've partnered with them on is where we look at the integrations between the different applications that we have. We have some data that we have to process on-premise; we have a laboratory information system doing real-time, critical processing, interfacing with robots in a lab, and things like this. So that's got to interface with the likes of Salesforce, which is our CRM, which is where we receive all of the order information, and where we receive clinical information from the physician on behalf of the patient. These things have to connect together, the data has to integrate, we have to make sense of it, we have to logically know what information is flowing through what business process throughout the company, and we use Informatica to be able to do that as well. >> Peter: Well, healthcare is at the vanguard of so many things. It's at the vanguard of ethics, because of the role that people play.
It's at the vanguard of big data; it was one of the first clear, you know, broadly understood examples, in that the drive to understand the genome was fundamentally a big data problem, and people said, wow, I didn't realize we could do that with data! It's also at the vanguard of understanding the relationship between analog and digital, and the fact that this is all an analog experience that has to be turned into a digital experience so we can do things with it. You must watch much of what's going on around here and say, yeah, we've gone through that. What kind of advice and counsel can you give to folks who are perhaps just entering into new ways of thinking about using data, new ways of applying data, new ways of understanding that relationship between analog and digital? How would you advise your peers to think differently? >> Yeah, so one of the things I've certainly noted, in walking around and talking to some of my peers at Informatica World this week, was a bit of frustration actually, from the technical side. And that comes about because they see the technical solutions, they see the data and the opportunity, but what we've not done in many cases with this technology is be able to explain that to the business people that we're working with, and establish that business partnership. So I think it's about being patient, looking to educate, looking for quick wins, opportunities to show what data can do, how transformative, you know, data can be in terms of how business people work every single day. The connection I've certainly seen in my company is no different. It's somewhat ironic that we have this treasure trove of clinical information that we've built up over time, but we've not been looking at our business, the way we run our business, the same way. So in some ways, since we do it in this area, the clinical space, let's replicate that and transform, bring it through to the business side too. >> Evidence-based business management.
>> Exactly right, yeah. So I think it's about being persistent, looking for the ability to educate, looking for quick wins, and looking to use the technology to show what's possible, to help lead the way, and being consistent and patient on that journey. And it's a journey; it's not one project, it's not "I just bring in a tool and life is good." It has to be much more than that. So that's what I've learned, at least, and what I've seen so far today, this week. >> Dave, thanks for taking the time to come on theCUBE and share your story. >> Sure thing. >> Genomic Health, great work, growing internationally. They've got data challenges, they're solving them, and they have to get 'em right, and we're hearing more of this. Great story, thank you for coming on. >> Dave: Appreciate it, thank you. >> I'm John Furrier, with Peter Burris, here for day two of coverage of Informatica World. Stay with us, we'll be back after this break. (bubbly music)
Paola Peraza Calderon & Viraj Parekh, Astronomer | Cube Conversation
(soft electronic music) >> Hey everyone, welcome to this CUBE conversation as part of the AWS Startup Showcase, season three, episode one, featuring Astronomer. I'm your host, Lisa Martin. I'm in theCUBE's Palo Alto Studios, and today excited to be joined by a couple of guests, a couple of co-founders from Astronomer. Viraj Parekh is with us, as is Paola Peraza-Calderon. Thanks guys so much for joining us. Excited to dig into Astronomer. >> Thank you so much for having us. >> Yeah, thanks for having us. >> Yeah, and we're going to be talking about the role of data orchestration. Paola, let's go ahead and start with you. Give the audience that understanding, that context about Astronomer and what it is that you guys do. >> Mm-hmm. Yeah, absolutely. So, Astronomer is a, you know, we're a technology and software company for modern data orchestration, as you said, and we're the driving force behind Apache Airflow, the open source workflow management tool that's since been adopted by thousands and thousands of users, and we'll dig into this a little bit more. But, by data orchestration, we mean data pipelines: generally speaking, getting data from one place to another, transforming it, running it on a schedule, and overall just building a central system that tangibly connects your entire ecosystem of data services, right. So that's Redshift, Snowflake, dbt, et cetera. And so tangibly, we at Astronomer build products powered by Apache Airflow for data teams and for data practitioners, so that they don't have to. So, we sell to data engineers, data scientists, data admins, and we really spend our time doing three things. So, the first is that we build Astro, our flagship cloud service that we'll talk more on. But here, we're really building experiences that make it easier for data practitioners to author, run, and scale their data pipeline footprint on the cloud.
And then, we also contribute to Apache Airflow as an open source project and community. So, we cultivate the community of humans, and we also put out open source developer tools that actually make it easier for individual data practitioners to be productive in their day-to-day jobs, whether or not they actually use our product and pay us money. And then of course, we also have professional services and education and all of these things around our commercial products that enable folks to use our products and use Airflow as effectively as possible. So yeah, super, super happy with everything we've done, and hopefully that gives you an idea of where we're starting. >> Awesome, so when you're talking with those, Paola, those data engineers, those data scientists, how do you define data orchestration and what does it mean to them? >> Yeah, yeah, it's a good question. So, you know, if you Google data orchestration you're going to get something about an automated process for organizing siloed data and making it accessible for processing and analysis. But, to your question, what does that actually mean, you know? So, if you look at it from a customer's perspective, we can share a little bit about how we at Astronomer actually do data orchestration ourselves and the problems that it solves for us. So, as many other companies out in the world do, we at Astronomer need to monitor how our own customers use our products, right? And so, we have a weekly meeting, for example, that goes through a dashboard in a dashboarding tool called Sigma where we see the number of monthly customers and how they're engaging with our product. But, to actually do that, you know, we have to use data from our application database, for example, which has behavioral data on what they're actually doing in our product. We also have data from third-party API tools, like Salesforce and HubSpot, and other ways in which we actually engage with our customers and track their behavior.
And so, our data team internally at Astronomer uses a bunch of tools to transform and use that data, right? So, we use Fivetran, for example, to ingest. We use Snowflake as our data warehouse. We use other tools for data transformations. And even if we at Astronomer don't do this, you can imagine a data team also using tools like Monte Carlo for data quality, or Hightouch for reverse ETL, or things like that. And, I think the point here is that data teams, you know, that are building data-driven organizations have a plethora of tooling to both ingest the right data and come up with the right interfaces to transform and actually interact with that data. And so, that movement and sort of synchronization of data across your ecosystem is exactly what data orchestration is responsible for. Historically, I think, and Viraj will talk more about this, schedulers like cron and Oozie or Control-M have taken a role here, but we think that Apache Airflow has sort of risen over the past few years as the de facto industry standard for writing data pipelines that do tasks, that do data jobs that interact with that ecosystem of tools in your organization. And so, beyond that sort of data pipeline unit, I think where we see it is that data orchestration is not only writing those data pipelines that move your data, but it's also all the things around it, right, so, CI/CD tooling and secrets management, et cetera. So, a long-winded answer here, but I think that's how we talk about it here at Astronomer and how we're building our products. >> Excellent. Great context, Paola. Thank you. Viraj, let's bring you into the conversation. Every company these days has to be a data company, right? They've got to be a software company- >> Mm-hmm. >> whether it's my bank or my grocery store. So, how are companies actually doing data orchestration today, Viraj? >> Yeah, it's a great question.
So, I think one thing to think about is like, on one hand, you know, data orchestration is kind of a new category that we're helping define, but on the other hand, it's something that companies have been doing forever, right? You need to get data moving to use it, you know: getting it all in place, aggregating it, cleaning it, et cetera. So, when you look at what companies out there are doing, right, sometimes, if you're a more kind of born-in-the-cloud company, as we say, you'll adopt all the cloud native tooling your cloud provider gives you. If you're a bank or another sort of institution like that, you know, you're probably juggling an even wider variety of tools. You're thinking about a cloud migration. You might have things like cron running in one place, Oozie running somewhere else, Informatica running somewhere else, while you're also trying to move all your workloads to the cloud. So, there's quite a large spectrum of what the current state is for companies. And then, kind of like Paola was saying, Apache Airflow started in 2014, and it was actually started by Airbnb, and they put out this blog post that was like, "Hey, here's how we use Apache Airflow to orchestrate our data across all our sources." And really since then, right, it's almost been a decade since then, Airflow emerged as the open source standard, and there's companies of all sorts using it. And, it's really used to tie all these tools together, especially as that number of tools increases, companies move to hybrid cloud, hybrid multi-cloud strategies, and so on and so forth. But you know, what we found is that if you go to any company, especially a larger one, and you say like, "Hey, how are you doing data orchestration?" They'll probably say something like, "Well, I have five data teams, so I have eight different ways I do data orchestration." Right.
This idea of data orchestration's been there, but the right way to do it, kind of all the abstractions you need, the way your teams need to work together, and so on and so forth, hasn't really emerged just yet, right? It's such a quick-moving space that companies have to combine what they were doing before with what their new business initiatives are today. So, you know, what we really believe here at Astronomer is Airflow is the core of how you solve data orchestration for any sort of use case, but it's not everything. You know, it needs a little more. And, that's really where our commercial product, Astro, comes in, where we've built not only the most tried and tested Airflow experience out there, we do employ a majority of the Airflow core committers, right, so we're kind of really deep in the project, but we've also built the right things around developer tooling, observability, and reliability for customers to really rely on Astro as the heart of the way they do data orchestration, and kind of think of it as the foundational layer that helps tie together all the different tools, practices, and teams large companies have today. >> That foundational layer is absolutely critical. You've both mentioned open source software. Paola, I want to go back to you, and just give the audience an understanding of how open source really plays into Astronomer's mission as a company, and into technologies like Astro. >> Mm-hmm. Yeah, absolutely. I mean, we at Astronomer started using Airflow and actually building our products because Airflow is open source, and we were our own customers at the beginning of our company journey. And, I think the open source community is at the core of everything we do. You know, without that open source community and culture, I think, you know, we have less of a business, and so, we're super invested in continuing to cultivate and grow that.
And, I think there's a couple sort of concrete ways in which we do this that personally make me really excited to do my own job. You know, for one, we do things like organize meetups and sponsor the Airflow Summit, and there's these sort of baseline community efforts that I think are really important and that remind you, hey, there are just humans trying to do their jobs and learn and use both our technology and things that are out there and contribute to it. So, making it easier to contribute to Airflow, for example, is another one of our efforts. As Viraj mentioned, we also employ, you know, engineers internally who are on our team whose full-time job is to make the open source project better. Again, regardless of whether or not you're a customer of ours, we want to make sure that we continue to cultivate the Airflow project in and of itself. And, we're also building developer tooling that might not be a part of the Apache open source project, but is still open source. So, we have repositories in our own sort of GitHub organization, for example, with tools that individual data practitioners, again customers or not, can use to be more productive in their day-to-day jobs with Airflow, writing DAGs for the most common use cases out there. The last thing I'll say is how important I think we've found it to build sort of educational resources and documentation and best practices. Airflow can be complex. It's been around for a long time. There's a lot of really, really rich feature sets. And so, how do we enable folks to actually use those? And that comes in, you know, things like webinars, and best practices, and courses and curriculum that are free and accessible and open to the community; those are just some of the ways in which I think we're continuing to invest in that open source community over the next year and beyond. >> That's awesome. It sounds like open source is really core, not only to the mission, but really to the heart of the organization.
Viraj, I want to go back to you and really try to understand how Astronomer fits into the wider modern data stack and ecosystem. Like, what does that look like for customers? >> Yeah, yeah. So, both in the open source and with our commercial customers, right, folks everywhere are trying to tie together a huge variety of tools in order to start making sense of their data. And you know, I kind of think of it almost like a pyramid, right? At the base level, you need things like data reliability, data, sorry, data freshness, data availability, and so on and so forth, right? You just need your data to be there, and you need to make it predictable when it's going to be there. You need to make sure it's kind of correct at the highest level, some quality checks, and so on and so forth. And oftentimes, that kind of takes the case of ELT or ETL use cases, right? Taking data from somewhere and moving it somewhere else, usually into some sort of analytics destination. And, that's really what businesses can do to just power the core parts of getting insights into how their business is going, right? How much revenue did I have? What's in my pipeline in Salesforce, and so on and so forth. Once that kind of base foundation is there and people can get the data they need, how they need it, it really opens up a lot for what customers can do. You know, I think one of the trendier things out there right now is MLOps, and how do companies actually put machine learning into production? Well, when you think about it, you kind of have to squint at it, right? Like, machine learning pipelines are really just like any other data pipeline. They just have a certain set of needs that might not be applicable to ELT pipelines. And, when you kind of have a common layer to tie together all the ways data can move through your organization, that's really what we're trying to make possible for companies.
And, that happens in financial services, where, you know, we have some customers who take app data coming from their mobile apps and actually run it through their fraud detection services to make sure that all the activity is not fraudulent. We have customers that will run sports betting models on our platform, where they'll take data from a bunch of public APIs around different sporting events that are happening, transform all of that in a way their data scientists can build models with it, and then actually bet on sports based on that output. You know, one of my favorite use cases I like to talk about that we saw in the open source is there was one company whose business was to deliver blood transfusions via drone into remote parts of the world. And, it was really cool because they took all this data from all sorts of places, right, orchestrated all the aggregation and cleaning and analysis that had to happen via Airflow, and the end product would be a drone being shot out into a really remote part of the world to actually give somebody blood who needed it there. Because it turns out, for certain parts of the world, the easiest way to deliver blood to them is via drone and not via some other means. So, all the things people do with the modern data stack are absolutely incredible, right? Like you were saying, every company's trying to be a data-driven company. What really energizes me is knowing that, for all those super great tools out there that power a business, we get to be the connective tissue, or almost like the electricity, that kind of ropes them all together and makes it so people can actually do what they need to do. >> Right. Phenomenal use cases that you just described, Raj. I mean, just the variety alone of what you guys are able to do and impact is so cool. So Paola, when you're in those conversations with data engineers, data scientists, and customers, what's your pitch? Why use Astro?
>> Mm-hmm. Yeah, yeah, it's a good question. And honestly, to piggyback off of Viraj, there's so many. I think what keeps me so energized is how mission critical both our product and data orchestration are, and those use cases really are incredible, and we work with customers of all shapes and sizes. But, to answer your question, right, so why use Astro? Why use our commercial products? There's so many people using open source, why pay for something more than that? So, you know, the baseline for our business really is that Airflow has grown exponentially over the last five years, and like we said, has become an industry standard, so we're confident there's a huge opportunity for us as a company and as a team. But, we also strongly believe that being great at running Airflow, you know, doesn't make you a successful company at what you do. What makes you a successful company at what you do is building great products and solving problems and solving pain points of your own customers, right? And, that differentiating value isn't being amazing at running Airflow. That should be our job. And so, we want to abstract those customers from needing to do things like manage the Kubernetes infrastructure that you need to run Airflow, and then hiring someone full-time to go do that. Which can be hard, but again doesn't add differentiating value to your team, or to your product, or to your customers. So, folks get away from managing that infrastructure, sort of a base layer. Folks are looking for differentiating features that make their team more productive and allow them to spend less time tweaking Airflow configurations and more time working with the data that they're getting from their business. And for help staying up to date with Airflow releases.
We've actually been pretty quick to come out with new Airflow features and releases, and actually just keeping up with that feature set, and working strategically with a partner to help you make the most out of those feature sets, is a key part of it. And really, especially if you're an organization that's currently committed to using Airflow, you likely have a lot of Airflow environments across your organization. Being able to see those Airflow environments in a single place, being able to enable your data practitioners to create Airflow environments with the click of a button, and then using, for example, our command line to develop your Airflow DAGs locally and push them up to our product, and using all of the sort of testing and monitoring and observability that we have on top of our product, is so key. It sounds so simple, especially if you use Airflow, but really those things are, you know, baseline value props that we have for the customers that continue to be excited to work with us. And of course, I think we can go beyond that, and we have ambitions to add a whole bunch of features and expand into different types of personas. >> Right? >> But really our main value prop is for companies who are committed to Airflow and want to abstract themselves from the infrastructure and make use of some of the differentiating features that we now have at Astronomer. >> Got it. Awesome. >> Thank you. One thing I'll add to that, Paola, and I think you did a good job of saying it: because every company's trying to be a data company, companies are at different parts of their journey along that, right? And we want to meet customers where they are, and take them through it to where they want to go. So, on one end you have folks who are like, "Hey, we're just building a data team here. We have a new initiative. We heard about Airflow. How do you help us out?"
On the farther end, you know, we have some customers that have been using Airflow for five-plus years, and they're like, "Hey, this is awesome. We have 10 more teams we want to bring on. How can you help with this? How can we do more stuff in the open source with you? How can we tell our story together?" And, it's all about kind of taking this vast community of data users everywhere, seeing where they're at, and saying like, "Hey, Astro and Airflow can take you to the next place that you want to go." >> Which is incredibly- >> Mm-hmm. >> and you bring up a great point, Viraj, that every company is somewhere in a different place on that journey. And it's complex. But it sounds to me like a lot of what you're doing is really stripping away a lot of the complexity, really enabling folks to use their data as quickly as possible, so that it's relevant and they can serve up, you know, the right products and services to whoever wants them. Really incredibly important. We're almost out of time, but I'd love to get both of your perspectives on what's next for Astronomer. You gave us a great overview of what the company's doing, the value in it for customers. Paola, from your lens as one of the co-founders, what's next? >> Yeah, I mean, I think we'll continue to cultivate that open source community. I think we'll continue to build products that are open sourced as part of our ecosystem. I also think that we'll continue to build products that actually make Airflow, and getting started with Airflow, more accessible. So, sort of lowering that barrier to entry to our products, whether that's price-wise or infrastructure-requirement-wise. I think making it easier for folks to get started and get their hands on our product is super important for us this year. And really, for us, it's about focused execution this year, and all of the sort of core principles that we've been talking about.
And continuing to invest in all of the things around our product that, again, enable teams to use Airflow more effectively and efficiently. >> And that efficiency piece is, everybody needs that. Last question, Viraj, for you. What do you see in terms of the next year for Astronomer and for your role? >> Yeah, you know, I think Paola did a really good job of laying it out. So it's, it's really hard to disagree with her on anything, right? I think executing is definitely the most important thing. My own personal bias on that is I think more than ever it's important to really galvanize the community around Airflow. So, we're going to be focusing on that a lot. We want to make it easier for our users to get our product into their hands, be that open source users or commercial users. And last, but certainly not least, is we're also really excited about data lineage and this other open source project in our umbrella called OpenLineage, to make it so that there's a standard way for users to get lineage out of different systems that they use. When we think about what's in store for data lineage and needing to audit the way automated decisions are being made, you know, I think that's just such an important thing that companies are really just starting with, and I don't think there's a solution that's emerged that kind of ties it all together. So, we think that as we kind of grow the role of Airflow, right, we can also make it so that we're helping customers solve their lineage problems all in Astro, which is kind of the best of both worlds for us. >> Awesome. I can definitely feel and hear the enthusiasm and the passion that you both bring to Astronomer, to your customers, to your team. I love it. We could keep talking more and more, so you're going to have to come back. (laughing) Viraj, Paola, thank you so much for joining me today on this showcase conversation.
We really appreciate your insights and all the context that you provided about Astronomer. >> Thank you so much for having us. >> My pleasure. For my guests, I'm Lisa Martin. You're watching this Cube conversation. (soft electronic music)
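The Airflow workflow described in this segment, data practitioners defining DAGs of dependent tasks, developing them locally, and pushing them up to a managed deployment, rests on one core idea: a task runs only after everything it depends on has finished. The sketch below models that ordering rule in plain Python using only the standard library. It is an illustration of the DAG concept, not the actual Airflow API, and the task names and dependencies are hypothetical.

```python
from graphlib import TopologicalSorter  # stdlib topological sorting (Python 3.9+)

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# mirroring how an Airflow DAG wires extract -> transform -> load steps.
dag = {
    "extract_orders": set(),
    "extract_users": set(),
    "transform_join": {"extract_orders", "extract_users"},
    "load_warehouse": {"transform_join"},
}

def run_order(dag):
    """Return one valid execution order that respects every dependency."""
    return list(TopologicalSorter(dag).static_order())

print(run_order(dag))
```

A scheduler like Airflow layers retries, scheduling intervals, and monitoring on top of exactly this ordering guarantee.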
Jesse Cugliotta & Nicholas Taylor | The Future of Cloud & Data in Healthcare
(upbeat music) >> Welcome back to Supercloud 2. This is Dave Vellante. We're here exploring the intersection of data and analytics in the future of cloud and data. In this segment, we're going to look deeper into the life sciences business with Jesse Cugliotta, who leads the Healthcare and Life Sciences industry practice at Snowflake. And Nicholas Nick Taylor, who's the executive director of Informatics at Ionis Pharmaceuticals. Gentlemen, thanks for coming on theCUBE and participating in the program. Really appreciate it. >> Thank you for having us- >> Thanks for having me. >> You're very welcome. Okay, we're going to really try to look at data sharing as a use case and try to understand what's happening in the healthcare industry generally and specifically, how Nick thinks about sharing data in a governed fashion, whether tapping the capabilities of multiple clouds is advantageous long term or presents more challenges than the effort is worth. And to start, Jesse, you lead this industry practice for Snowflake and it's a challenging and vibrant area. It's one that's hyper-focused on data privacy. So the first question is, you know, there was a time when healthcare and other regulated industries wouldn't go near the cloud. What are you seeing today in the industry around cloud adoption and specifically multi-cloud adoption? >> Yeah, for years I've heard that healthcare and life sciences has been cloud averse, but in spite of all of that, if you look at a lot of aspects of this industry today, they've been running in the cloud for over 10 years now. Particularly when you look at CRM technologies or HR or HCM, even clinical technologies like EDC or ETMF. And it's interesting that you mentioned multi-cloud as well because this has always been an underlying reality especially within life sciences.
This industry grows through acquisition where companies are looking to boost their future development pipeline either by buying up smaller biotechs, they may have like a late or a mid-stage promising candidate. And what typically happens is the larger pharma could then use their commercial muscle and their regulatory experience to move it to approvals and into the market. And I think the last few decades of cheap capital certainly accelerated that trend over the last couple of years. But this typically means that these new combined institutions may have technologies that are running on multiple clouds or multiple cloud strategies in various different regions to your point. And what we've often found is that they're not planning to standardize everything onto a single cloud provider. They're often looking for technologies that embrace this multi-cloud approach and work seamlessly across them. And I think this is a big reason why we, here at Snowflake, we've seen such strong momentum and growth across this industry because healthcare and life science has actually been one of our fastest growing sectors over the last couple of years. And a big part of that is in fact that we run on not only all three major cloud providers, but individual accounts within each and any one of them, they had the ability to communicate and interoperate with one another, like a globally interconnected database. >> Great, thank you for that setup. And so Nick, tell us more about your role and Ionis Pharma please. >> Sure. So I've been at Ionis for around five years now. You know, when when I joined it was, the IT department was pretty small. There wasn't a lot of warehousing, there wasn't a lot of kind of big data there. We saw an opportunity with Snowflake pretty early on as a provider that would be a lot of benefit for us, you know, 'cause we're small, wanted something that was fairly hands off. 
You know, I remember the days where you had to get a lot of DBAs in to fine tune your databases, make sure everything was running really, really well. The notion that there's, you know, no indexes to tune, right? There's very few knobs and dials, you can turn on Snowflake. That was appealing that, you know, it just kind of worked. So we found a use case to bring the platform in. We basically used it as a logging replacement as a Splunk kind of replacement with a platform called Elysium Analytics as a way to just get it in the door and give us the opportunity to solve a real world use case, but also to help us start to experiment using Snowflake as a platform. It took us a while to A, get the funding to bring it in, but B, build the momentum behind it. But, you know, as we experimented we added more data in there, we ran a few more experiments, we piloted in few more applications, we really saw the power of the platform and now, we are becoming a commercial organization. And with that comes a lot of major datasets. And so, you know, we really see Snowflake as being a very important part of our ecology going forward to help us build out our infrastructure. >> Okay, and you are running, your group runs on Azure, it's kind of mono cloud, single cloud, but others within Ionis are using other clouds, but you're not currently, you know, collaborating in terms of data sharing. And I wonder if you could talk about how your data needs have evolved over the past decade. I know you came from another highly regulated industry in financial services. So what's changed? You sort of touched on this before, you had these, you know, very specialized individuals who were, you know, DBAs, and, you know, could tune databases and the like, so that's evolved, but how has generally your needs evolved? Just kind of make an observation over the last, you know, five or seven years. What have you seen? >> Well, we, I wasn't in a group that did a lot of warehousing. 
It was more like online trade capture, but, you know, it was very much on-prem. You know, being in the cloud is very much a dirty word back then. I know that's changed since I've left. But in, you know, we had major, major teams of everyone who could do everything, right. As I mentioned in the pharma organization, there's a lot fewer of us. So the data needs there are very different, right? It's, we have a lot of SaaS applications. One of the difficulties with bringing a lot of SaaS applications on board is obviously data integration. So making sure the data is the same between them. But one of the big problems is joining the data across those SaaS applications. So one of the benefits, one of the things that we use Snowflake for is to basically take data out of these SaaS applications and load them into a warehouse so we can do those joins. So we use technologies like Boomi, we use technologies like Fivetran, like DBT to bring this data all into one place and start to kind of join that basically, allow us to do, run experiments, do analysis, basically take better, find better use for our data that was siloed in the past. You mentioned- >> Yeah. And just to add on to Nick's point there. >> Go ahead. >> That's actually something very common that we're seeing across the industry is because a lot of these SaaS applications that you mentioned, Nick, they're with from vendors that are trying to build their own ecosystem in walled garden. And by definition, many of them do not want to integrate with one another. So from a, you know, from a data platform vendor's perspective, we see this as a huge opportunity to help organizations like Ionis and others kind of deal with the challenges that Nick is speaking about because if the individual platform vendors are never going to make that part of their strategy, we see it as a great way to add additional value to these customers. >> Well, this data sharing thing is interesting. There's a lot of walled gardens out there. 
Oracle is a walled garden, AWS in many ways is a walled garden. You know, Microsoft has its walled garden. You could argue Snowflake is a walled garden. But the, what we're seeing and the whole reason behind the notion of super-cloud is we're creating an abstraction layer where you actually, in this case for this use case, can share data in a governed manner. Let's forget about the cross-cloud for a moment. I'll come back to that, but I wonder, Nick, if you could talk about how you are sharing data, again, Snowflake sort of, it's, I look at Snowflake like the app store, Apple, we're going to control everything, we're going to guarantee with data clean rooms and governance and the standards that we've created within that platform, we're going to make sure that it's safe for you to share data in this highly regulated industry. Are you doing that today? And take us through, you know, the considerations that you have in that regard. >> So it's kind of early days for us in Snowflake in general, but certainly in data sharing, we have a couple of examples. So data marketplace, you know, that's a great invention. I've been in a small IT shop again, right? The fact that we are able to just bring down terabyte size datasets straight into our Snowflake and run analytics directly on that is huge, right? The fact that we don't have to FTP these massive files around, run jobs that may break, being able to just have that on tap is huge for us. We've recently been talking to one of our CRO organizations about getting their data feeds in. Historically, this clinical trial data that comes in on an FTP file, we have to process it, take it through the platforms, put it into the warehouse. But one of the CROs that we talked to recently, when we were reinvestigating what data opportunities they have, they were a Snowflake customer, and we are, I think, the first production customer they have to have taken that feed.
So they're basically exposing their tables of data that historically came in these FTP files directly into our Snowflake instance now. We haven't taken advantage of that. We only actually flipped the switch about three or four weeks ago. But that's pretty big for us again, right? We don't have to worry about maintaining those jobs that take those files in. We don't have to worry about the jobs that take those and shove them into the warehouse. We now have a feed that's directly there that we can use a tool like DBT to push through directly into our model. And then the third avenue that came up, actually fairly recently as well, was genetics data. So genetics data that's highly, highly regulated. We had to be very careful with that. And we had a conversation with Snowflake about the data clean rooms practice, and we see that as a pretty interesting opportunity. We are having one organization run genetic analysis, being able to send us those genetic datasets, but then there's another organization that actually has the, in quotes, "metadata" around that, so age, ethnicity, location, et cetera. And being able to join those two datasets through some kind of mechanism would be really beneficial to the organization. Being able to build a data clean room so we can put that genetic data in a secure place, anonymize it, and then share the amalgamated data back out in a way that's able to be joined to the anonymized metadata, that could be pretty huge for us as well.
You know, as Nick mentioned, historically, this was done by people sending files around. And the challenge with that approach, of course, while there are multiple challenges, one, every time you send a file around you're, by definition, creating a copy of the data because you have to pull it out of your system of record, put it into a file, put it on some server where somebody else picks it up. And by definition at that point you've lost governance. So this creates challenges and general hesitation to doing so. It's not that it hasn't happened, but the other challenge with it is that the data's no longer real time. You know, you're working with a copy of data that was only as fresh as the time when it was actually extracted. And that creates limitations in terms of how effective this can be. What we're starting to see now with some of our customers is live sharing of information. And there's two aspects of that that are important. One is that you're not actually physically creating the copy and sending it to someone else, you're actually exposing it from where it exists and allowing another consumer to interact with it from their own account that could be in another region, or running in another cloud. So this concept of super-cloud or cross-cloud could become realized here. But the other important aspect of it is that when that other entity is querying your data, they're seeing it in a real time state. And this is particularly important when you think about use cases like supply chain planning, where you're leveraging data across various different enterprises. If I'm a manufacturer or if I'm a contract manufacturer and I can see the actual inventory positions of my clients, of my distributors, of the levels of consumption at the pharmacy or the hospital, that gives me a lot of indication as to how my demand profile is changing over time versus working with a static picture that may have been from three weeks ago.
And this has become incredibly important as supply chains are becoming more constrained and the ability to plan accurately has never been more important. >> Yeah. So the race is on to solve these problems. So it start, we started with, hey, okay, cloud, Dave, we're going to simplify database, we're going to put it in the cloud, give virtually infinite resources, separate compute from storage. Okay, check, we got that. Now we've moved into sort of data clean rooms and governance and you've got an ecosystem that's forming around this to make it safer to share data. And then, you know, nirvana, at least near term nirvana is we're going to build data applications and we're going to be able to share live data and then you start to get into monetization. Do you see, Nick, in the near future where I know you've got relationships with, for instance, big pharma like AstraZeneca, do you see a situation where you start sharing data with them? Is that in the near term? Is that more long term? What are the considerations in that regard? >> I mean, it's something we've been thinking about. We haven't actually addressed that yet. Yeah, I could see situations where, you know, some of these big relationships where we do need to share a lot of data, it would be very nice to be able to just flick a switch and share our data assets across to those organizations. But, you know, that's a ways off for us now. We're mainly looking at bringing data in at the moment. >> One of the things that we've seen in financial services in particular, and Jesse, I'd love to get your thoughts on this, is companies like Goldman or Capital One or Nasdaq taking their stack, their software, their tooling actually putting it on the cloud and facing it to their customers and selling that as a new monetization vector as part of their digital or business transformation. 
Are you seeing that, Jesse, at all in healthcare, or is it happening today, or do you see a day when that happens, or is healthcare just too scary to do that? >> No, we're seeing the early stages of this as well. And I think it's for some of the reasons we talked about earlier. You know, it's a much more secure way to work with a colleague if you don't have to copy your data and potentially expose it. And some of the reasons that people have historically copied that data is that they needed to leverage some sort of algorithm or application that a third party was providing. So maybe someone was predicting the ideal location to run a clinical trial for this particular rare disease category where there are only so many patients around the world that may actually be candidates for this disease. So you have to pick the ideal location. Well, sending the dataset to do so, you know, would involve a fairly complicated process similar to what Nick was mentioning earlier. If the company who was providing the logic or the algorithm to determine that location could bring that algorithm to you and you run it against your own data, that's a much more ideal and a much safer and more secure way for this industry to actually start to work with some of these partners and vendors. And that's one of the things that we're looking to enable going into this year is that, you know, the whole concept should be bring the logic to your data versus your data to the logic, and the underlying sharing mechanisms that we've spoken about are actually what are powering that today.
We saw the power there again of the platform to be able to say, well, could we, we were thinking in more of a data share, but could we share our data out to say an AI/ML vendor, have them do the analytics and then share the data, the results back to us. Now, you know, there's more powerful mechanisms to do that within the Snowflake ecosystem now, but you know, we probably wouldn't need to have onsite AI/ML people, right? Some of that stuff's very sophisticated, expensive resources, hard to find, you know, it's much better for us to find a company that would be able to build those analytics, maintain those analytics for us. And you know, we saw an opportunity to do that a couple years ago and we're kind of excited about the opportunity there that we can just basically do it with a no op, right? We share the data route, we have the analytics done, we get the result back and it's just fairly seamless. >> I mean, I could have a whole another Cube session on this, guys, but I mean, I just did a a session with Andy Thurai, a Constellation research about how difficult it's been for organization to get ROI because they don't have the expertise in house so they want to either outsource it or rely on vendor R&D companies to inject that AI and machine intelligence directly into applications. My follow-up question to you Nick is, when you think about, 'cause Jesse was talking about, you know, let the data basically stay where it is and you know bring the compute to that data. If that data lives on different clouds, and maybe it's not your group, but maybe it's other parts of Ionis or maybe it's your partners like AstraZeneca, or you know, the AI/ML partners and they're potentially on other clouds or that data is on other clouds. Do you see that, again, coming back to super-cloud, do you see it as an advantage to be able to have a consistent experience across those clouds? Or is that just kind of get in the way and make things more complex? What's your take on that, Nick? 
>> Well, from the vendors, so from the client side, it's kind of seamless with Snowflake for us. So we know for a fact that one of the datasets we have at the moment, Compile, which is a, the large multi terabyte dataset I was talking about. They're on AWS on the East Coast and we are on Azure on the West Coast. And they had to do a few tweaks in the background to make sure the data was pushed over from, but from my point of view, the data just exists, right? So for me, I think it's hugely beneficial that Snowflake supports this kind of infrastructure, right? We don't have to jump through hoops to like, okay, well, we'll download it here and then re-upload it here. They already have the mechanism in the background to do these multi-cloud shares. So it's not important for us internally at the moment. I could see potentially at some point where we start linking across different groups in the organization that do have maybe Amazon or Google Cloud, but certainly within our providers. We know for a fact that they're on different services at the moment and it just works. >> Yeah, and we learned from Benoit Dageville, who came into the studio on August 9th with first Supercloud in 2022 that Snowflake uses a single global instance across regions and across clouds, yeah, whether or not you can query across you know, big regions, it just depends, right? It depends on latency. You might have to make a copy or maybe do some tweaks in the background. But guys, we got to jump, I really appreciate your time. Really thoughtful discussion on the future of data and cloud, specifically within healthcare and pharma. Thank you for your time. >> Thanks- >> Thanks for having us. >> All right, this is Dave Vellante for theCUBE team and my co-host, John Furrier. Keep it right there for more action at Supercloud 2. (upbeat music)
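The contrast Jesse draws between file-based copies and live sharing comes down to what the consumer reads: a snapshot extracted at a point in time goes stale, while a view over the producer's current tables always reflects the latest state. The toy sketch below illustrates that difference with an in-memory SQLite database; it is only a model of the concept, not Snowflake's actual sharing mechanism, and the table names and values are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (sku TEXT, qty INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('drug-a', 100)")

# File-style sharing: extract a snapshot copy at a point in time.
snapshot = conn.execute("SELECT sku, qty FROM inventory").fetchall()

# Live-style sharing: the consumer queries a view over the producer's table,
# so every read reflects the current state with no copy in between.
conn.execute("CREATE VIEW shared_inventory AS SELECT sku, qty FROM inventory")

# Producer updates its inventory after the snapshot was taken.
conn.execute("UPDATE inventory SET qty = 40 WHERE sku = 'drug-a'")

live = conn.execute("SELECT sku, qty FROM shared_inventory").fetchall()
print(snapshot)  # stale: [('drug-a', 100)]
print(live)      # current: [('drug-a', 40)]
```

In a real live share the consumer would be a separate account, possibly in another region or cloud, but the property is the same: no copy, no staleness.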
Kazuhiro Gomi & Yoshihisa Yamamoto | Upgrade 2020 The NTT Research Summit
>> Announcer: From around the globe, it's theCUBE. Covering UPGRADE 2020, the NTT Research Summit. Presented by NTT Research. >> Hey, welcome back everybody. Jeff Frick here with theCUBE. Welcome back to our ongoing coverage of UPGRADE 2020. It's the NTT Research Labs Summit, and it's all about upgrading reality. Heavy duty basic research around a bunch of very smart topics. And we're really excited to have our next guest to kind of dive in. I promise you, it'll be the deepest conversation you have today, unless you watch a few more of these segments. So our first guest, we're welcoming back Kazuhiro Gomi. He's the president and CEO of NTT Research. Kazu, great to see you. >> Good to see you. >> And joining him is Yoshi Yamamoto. He is a fellow for NTT Research and also the director of the Physics and Informatics Lab. Yoshi, great to meet you as well. >> Nice to meet you. >> So I was teasing the crew earlier, Yoshi, when I was doing some background work on you, and I pulled up your Wikipedia page and I was like, okay guys, read this thing and tell me what Yoshi does. You have been knee deep in quantum computing and all of the supporting things around quantum, heavy duty kind of next gen computing. I wonder if you can kind of share a little bit, you know, your mission running this lab and really thinking so far in advance of what we, you know, kind of experience and what we work with today, and this new kind of basic research. >> NTT started the research on quantum computing back in 1986-87. So it is already more than 30 years that the company has invested in this field. We have accumulated a lot of sort of our ideas, knowledge, and technology in this field. And probably it is the right time to establish a close connection to US academia. And in this way, we will jointly sort of advance our research capabilities towards the future. The goal is still, I think, a long way to go.
But by collaborating with American universities and students, we can accelerate NTT's effort in this area. >> So, you've been working on quantum for 30 years. I had no idea that that research has been going on for such a long time. We hear about it in the news, and we hear about it at a place like IBM, and iSensor has a neat little demo in the new Salesforce tower. What is it that makes quantum so exciting and worth working so hard on for so long? And what is it going to eventually open up for us when we get it to commercial availability? >> The honest answer to that question is we don't know yet. Still, I think, after 30 years of hard work on quantum physics and computing, we still don't know the clean applications. Even now, I think we feel that all the current efforts are not necessarily practical from the engineering viewpoint. So, it is still a long way to go. But the reason why NTT has been continuously working on the subject is basically that at the very bottom, the fundamental side, of present day communication and computing technology, there is always a quantum principle, and it is very important for us to understand the quantum principles and the quantum limit for communication and computing first of all. And if we are lucky, maybe we can make a breakthrough for the next generation communication and computing technology based on quantum principles. >> Right. >> But the second is really, I think, just a guess and a hope, a researcher's hope, and nothing very solid yet. >> Right? Well, Kazu, I want to go to you 'cause it really highlights the difference between, you know, kind of basic hardcore fundamental research versus building new applications or building new products or building new, you know, things that are going to be, you know, commercially viable and you can build an ROI and you can figure out what the customers are going to buy. It really reflects that this is very different.
This is very, very basic with very, very long lead times and very difficult execution. So when, you know, for NTT to spend that money and invest that time and people for long, long periods of time with not necessarily a clean ROI at the end, that really, it's really an interesting statement in terms of this investment and thinking about something big like upgrading reality. >> Yeah, so that's, yeah, exactly what you talked about, what the basic research is. And from NTT's perspective, yeah, we feel like, as Dr. Yamamoto just mentioned, we've been investing 30 plus years of time in this field and, you know, well, I can talk about why this is important. And some of it is that, you know, the current computer that everybody uses, we are certainly, well, there might be some more areas of improvement, but we will someday, in, I don't know, four years, five years, 10 years down the road, there might be some big roadblock in terms of more capacity, more power and stuff. We may run into some issues. So we need to be prepared for those kinds of things. So, yes, we are in a way fortunate that we have a great team and a special expertise in this field. And, you know, we can spend some resources towards that. So why not? We should just do that in preparation for that big, big wall, so to speak, that I guess we are expecting to kind of run into five, 10 years down the road. So let's just look into it, invest some resources into it. So that's where we are here. And again, from my perspective, we are very fortunate that we have all the resources to do it. >> It's great, right, as they give it to you. Dr. Yamamoto, I wonder if you can share what it's like in terms of the industry and academia working together. You look at the presentations that are happening here at the event. All the great academic institutions are very well represented, very deep papers.
You're at NTT, you spent some time at Stanford, talk about how it is working on this joint development with great academic institutions, as well as a great company. >> Traditionally in the United States, there have always been two complementary opportunities for training the next generation of scientists and engineers. One opportunity is a junior faculty or postdoc position in academia, where the main emphasis is education. The other opportunity is a junior researcher position in an industrial lab, where the main emphasis is research. And eventually we need two types of intellectual leaders from these two different career paths. When they work together, with a strong educational background and a strong research background, maybe we can make wonderful breakthroughs, I think. So it is very important to connect the two institutions. However, in the recent past, particularly after Bell Labs disappeared, basic research activity in industrial labs decreased substantially. And we hope NTT Research can contribute to rebuilding fundamental science on the industry side. And for that purpose, cross-collaboration with research universities is very important. So the first task we have been working on so far is to build up this industry-academia connection. >> Huge compliment to NTT to continue to fund the basic research, because as you said, there's a lot of companies that were in it before and are not in it any more. And when you read the history of computing and a lot of different things, it goes back, a lot of times, to some basic research. And just for everyone to know what we're talking about, I want to read a couple of sessions that you could attend and learn from within Dr. Yamamoto's space. So it's Coherent nonlinear dynamics and combinatorial optimization. That's just one session. I love it. Physics successfully implements Lagrange multiplier optimization. I love it. Photonics accelerators for machine learning. 
I mean, it's so interesting to read basic research titles, because it's like a micro-focus on a subset. It's not quantum computing, it's all these smaller pieces of the quantum computing stack, and then obviously very deep and rich dives into those topics. And so, again, Kazu, this is the first lab that's going to run after today, the Physics and Informatics Lab. But then you've got the Cryptography and Information Security Lab, as well as the Medical and Health Informatics Lab. You started with physics and informatics. Is that the history? Is that the favorite child, that it gets to lead off on day two of the event? >> We drew straws and Dr. Yamamoto won it. Just kidding (all laugh) >> (indistinct), right? It's always fair. >> But certainly, well, all the topics certainly focus on basic research, that's definitely a commonality. But I think quantum physics is in a way very symbolic, to show what basic research is. Many people have many ideas associated with the term basic research, but I think quantum physics is certainly one of the strong candidates that many people may think of. So I think this is definitely a good place to start for this session, from my perspective. >> Right. >> Well, and it almost feels like that's kind of foundational even for the other sessions, right? So you talk about medical, or you talk about cryptography and information, still at the end of the day there's going to be compute happening to drive those processes. Whether it's looking at medical slides, trying to do diagnosis, or trying to run a bunch of analysis against huge data sets, it all goes back to algorithms and ultimately compute, and this opening up of an entirely different set of horsepower. But Dr. 
Yamamoto, I'm just curious, how did you get started down this path, this crazy 30-year journey on quantum computing? >> The first quantum algorithm was invented by David Deutsch back in 1985. That particular algorithm later turned out to be a complete failure, not useful at all. And he spent seven years, actually, to fix the loophole, and invented the first successful algorithm in 1992. Even though the first algorithm was a complete failure, that paper actually created a lot of excitement among the young scientists at NTT Basic Research Lab immediately after the paper appeared. And 1987 is actually, I think, a year or so after this paper appeared. And we sort of agreed that maybe one of the interesting future directions was quantum information processing. And that's how it started. It was a spontaneous activity, I think, among young scientists in their late twenties and early thirties at the time. >> And what do you think, Dr. Yamamoto, that people should think about? If, again, we're at a cocktail party, not with a bunch of people that intimately know the topic, how do you explain it to them? How should they think about this great opportunity around quantum that's kept you engaged for decades and decades? >> The quantum is everywhere. Namely, I think this world is fundamentally based on and created from a quantum substrate. The very bottom of our world consists of electrons and photons and atoms, and those fundamental particles behave according to quantum rules, which are very different from classical reality, namely the world where we are living every day. The relevant question, which is also interesting, is how our classical world, or classical reality, surfaces from the general or universal quantum substrate where our intuition never works. 
And that sort of fundamental question actually opens the possibility, I think, that by utilizing quantum principles, or quantum-classical crossover principles, we can revolutionize the current limitations in communication and computation. That's basically the starting point. We start from the quantum substrate. Our classical world surfaces on top of the quantum substrate as an exceptional case, and we have built our communication and computing machines in this exceptional world. But if we dig into the quantum substrate, new opportunities open up for us. That's the somewhat fundamental question. >> That's great. >> Well, we can't get too deep, because you'll lose me long before you get to the bottom of the story, but I really appreciate it. And of course, back to you, this is your guys' first event. It's a really bold statement, right? Upgrade reality. I just wonder, when you look at the registrants and you look at the participation, what do you anticipate? How much of the participation is kind of people in the business celebrating and catching up on the latest research, and how much of it is going to be really inspirational for those next early-20-somethings who are looking to grab an exciting field to hitch their wagon to, and to come away after this saying, wow, this is something that really hooked me, and I want to get down and advance this technology a little bit further, advance this research a little bit further? >> So, from my point of view, for this event, I'm expecting, and hoping, that quite a wide range of people are interested in this event. Like you mentioned, there are the business people who want to know what NTT does, and the wider spectrum of what NTT does. 
And then also, especially with today's events and onwards, we get very specific to each topic and go into a very deep dive. So for these sessions, I especially expect a lot of participants from the academic world for each subject, including students, professors, teachers, all those people as well. So those are my expectations. And from the program arrangement perspective, it's always something on my mind, how we address those different segments of people. And we welcome all of them, by the way. So to me, yesterday was the general session, where I was expecting more of the business people, and perhaps more general people who are just curious what NTT is doing. So instead of going into too much detail, it was to give you the idea of what our vision is, and also a little bit of, I don't know if flavor is a good word or not, but to give you some idea of what we are trying to do. And then from here, for the next three days, obviously for the academic people, and those who are the experts in each field, day one is probably not quite deep enough, not quite addressing what they want to know. So days two, three, and four are the days designed for those kinds of requirements and expectations. >> Right? And are most of the presentations built on academic research that's been submitted to journals and other formal peer-review and publication types of activities? So this is all very formal, very professional, and probably very accessible to people that know where to find this information. >> Mmh. >> Yeah, it's great. >> Yeah. 
>> Well, I have learned a ton about NTT, and a ton about this crazy basic research that you guys are doing, and a ton about the fact that I need to go back to school if I ever want to learn any of this stuff, because it's a fascinating tale. And it's great to know, as we've seen these other basic research companies, not necessarily academic but companies, kind of go away. We mentioned Xerox PARC and Bell Labs. You guys have really picked up that mantle. Not necessarily picked it up, you were already doing it yourselves, but you're really continuing to carry that mantle, so that we can make these fundamental, basic building-block breakthroughs to take us to the next generation. And as you say, upgrade the future. So again, congratulations. Thanks for sharing this story and good luck with all those presentations. >> Thank you very much. >> Thank you. >> Thank you. Alright, Yoshi, Kazu, I'm Jeff, NTT UPGRADE 2020. We're going to upgrade the future. Thanks for watching. See you next time. (soft music)
Kazuhiro Gomi, NTT | Upgrade 2020 The NTT Research Summit
>> Narrator: From around the globe, it's theCUBE, covering Upgrade 2020, the NTT Research Summit, presented by NTT Research. >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in our Palo Alto studio for our ongoing coverage of Upgrade 2020, the NTT Research conference. It's our first year covering the event, and it's actually the inaugural year for the event, so we're really, really excited to get into this. It's basic research that drives a whole lot of innovation, and we're really excited to have our next guest. He is Kazuhiro Gomi, the President and CEO of NTT Research. Kazu, great to see you. >> Hi, good to see you. >> Yeah, so let's jump into it. So this event, like many events, was originally scheduled for March at Berkeley. Clearly COVID came along and you guys had to make some changes. I wonder if you can just share a little bit about your thinking in terms of having this event, getting this great information out, but having to do it in a digital way and kind of rethinking the conference strategy. >> Sure, yeah. So NTT Research, we started our operations about a year ago, in July 2019, and I always wanted to show the world, to give an update on what we have done in the areas of basic and fundamental research. So we planned to do that in March, as you mentioned. However, the rest is to some extent history: we needed to cancel the event, and then decided to do it this time of the year, virtually. Something we learned, however: not everything is bad. By doing this virtually, we can reach out to so many people around the globe at the same time. So we're trying to get the best out of it. >> Right, right, so you've got a terrific lineup. So let's jump into it a little bit. 
So first, just about NTT Research: we're all familiar, if you've been around for a little while, with Bell Labs, and we're fortunate to have Xerox PARC up the street here in Palo Alto. These are famous institutions doing basic research. People probably aren't as familiar, at least in the States, with NTT's basic research. But when you think about real bottom-line basic research and how it ultimately contributes, it gets into products, and solutions, and health care, and all kinds of places. How should people think about basic research and its role in ultimately coming to market in products, and services, and all different things? You're getting way down into the weeds, into the really, really basic hardcore technology. >> Sure, yeah, so let me, from my perspective, define basic research versus other research and development. For us, basic research means that we don't necessarily have a product roadmap or commercialization roadmap; we just want to look at the fundamental core technology of things. And from the timescale perspective, we're not looking at something for next year or the next six months, that kind of thing. We are looking at five years, or sometimes longer than that, potentially 10 years down the road. But you mentioned Bell Labs and Xerox PARC. Yeah, well, there used to be such organizations in the United States; however, arguably those days have kind of gone. So that's what's going on in the United States. In Japan, NTT has done quite a bit of basic research over the years. And in a lot of cases we can talk about the end of Moore's law, and it's kind of a scary time for that, and about the energy consumption of IT. Some huge, big, fundamental change has to happen to sustain our long-term development of ideas, basically for the sake of human beings. >> Right, right. 
>> So NTT sees that, and also we've been doing quite a bit of basic research in Japan. So we recognized this is the time to expand these activities, and as part of doing so, we opened up the research lab in Silicon Valley, where we can really work better, work more easily, with the global talent in this field. So that's how we started this endeavor, like I said, last year. And so far, we have made tremendous progress, so that's where we are. >> That's great, so just a little bit more specific. So you guys are broken down into three labs as I understand: you've got the PHI lab, Physics and Informatics, the CIS lab, Cryptography and Information Security, and the MEI lab, Medical and Health Informatics. And the conference has really been laid out along those same tracks. Day one is a whole lot of stuff, or excuse me, day two is the Physics and Informatics day, the next day is Cryptography and Information Security, and then Medical and Health Informatics. So those are super interesting but very diverse buckets of fundamental research, and you guys are attacking all three of those pillars. >> Yup, so day one, the general session, is where we cover all the topics, but just as general topics. I think those who want to understand what NTT Research is all about, joining day one will be a great way to understand more holistically what we are doing. However, given the type of research topics that we are tackling, we need the deep-dive conversations, very specific to each topic, led by the specialists and the experts in each field. Therefore we have days two, three, and four for the specific topics that we're going to talk about. So that's the configuration of this conference. >> Right, right, and I love it. I just have to read a few of the session breakout titles, 'cause I think they're just amazing, and I always love learning new vocabulary words. 
Coherent nonlinear dynamics and combinatorial optimization, Lagrange multipliers, indistinguishability obfuscation from well-founded assumptions, fully deniable communications and computation. I mean, a brief history of quasi-adaptive NIZKs, which I don't even know what that stands for. (Gomi laughing) Really some interesting topics. But the other thing that jumps out when you go through the sessions is the representation of universities, and really the top-flight universities. So you've got people coming from MIT, CalTech, Stanford, Notre Dame, Michigan, the list goes on and on. Talk to us about the role of academic institutions and how NTT works in conjunction with them, and how, at this basic research level, the commercial and academic interests align and come together, and work together to really move this basic research down the road. >> Sure, so working with academia, especially the top-notch universities, is crucial for us. Obviously, that's where the experts in each field of basic research are doing their best work, and we definitely need to get connected, and then we need to accelerate our activities together with those researchers. So that has been one of the number-one priorities for us to jumpstart and get things going. So as you mentioned, Jeff, we have a lineup of professors and researchers from top-notch universities joining this event and talking at the general session and at the different breakout sessions. So I'm sure that those who are listening in to those sessions will learn what's going on in the minds of NTT researchers tackling each problem, but at the same time, you will get to hear from top-level researchers and professors in each field. So I believe this is going to be a pretty unique session to understand what it's like in the research fields of quantum computing, encryption, and medical informatics. >> Right. 
>> So I am sure it's going to be a pretty great lineup. >> Oh, absolutely, a lot of information exchange. And I'm not going to ask you to pick your favorite child, 'cause that would be unfair, but what I am going to do is note that you also write as a Forbes Technology Council member. So you're publishing on Forbes, and one of the articles that you published relatively recently was about biological digital twins. And this is a topic that I'm really interested in. We used to do a lot of stuff with GE, and there was always a lot of conversation about digital twins for turbines, and motors, and kind of all this big, heavy industrial equipment, so that you could get ahead of the curve in terms of anticipating maintenance, and basically run simulations over its lifetime. Neat concept. Now that's applied to people, to biology, whether that's your heart, or maybe a bigger system, your cardiovascular system, or the person as a whole. I mean, that just opens up so many interesting opportunities in terms of modeling people and being able to run simulations: if they do things differently, I would presume, eat differently, walk a little bit more, exercise a little bit more. And you wrote about it. I wonder if you could share your excitement about the potential for digital twins in the medical space. >> Sure, so I think the benefit is very clear for a lot of people. The hope is that, basically, a computer system can simulate or emulate your own body, not just a generic human body, but the body of Kazu Gomi at the age of whatever. (Jeff laughing) And so if you get that precise simulation of your body, you can do a lot of things. You, or I think a medical professional, can do a lot of things. You can predict what's going to happen to my body in the next year, six months, whatever. 
Or if I'm feeling sick, for whatever reasons, and the doctor wants to prescribe a few different medicines, you can really test out different kinds of medicines, not on you, but on the twin, the medical twin, and then obviously it is safer to try some specific medicine or whatever. So anyway, those are the kinds of visions that we have. And I have to admit that there are a lot of things, technically, we have to overcome, and it will take a lot of years to get there. But I think it's a pretty good goal to define, so we set it, and I talked with a couple of different experts, and I am definitely more convinced that this is a very nice goal to set. However, just talking about the goal, just talking about those kinds of futuristic things, you may just end up with science fiction. So we need to be more specific. So our researchers are breaking it down into different pieces, how to get there. Again, it's going to be a pretty long journey, but we're starting by trying to get the digital twin for the cardiovascular system, so basically to create your own heart. Again, the important part is that this model of my heart is very similar to your heart, Jeff, but it's not identical, it is somehow different. >> Right, right. >> So we are working on it, and we're certainly not the only ones thinking about something like this; there are definitely like-minded researchers in the world. So we are getting together with those folks, exchanging ideas and coming up with the plans. That's where we are. But like you said, this is really an exciting goal and an exciting project. >> Right, and I like the fact that, consistently, in all the background material that I picked up preparing for this today, there's this focus on tech for good, and tech for helping the human species do better down the road. 
In another blog post, you talked specifically about 15 amazing technologies contributing to the greater good, and you highlighted cryptography. So there's a lot of interesting conversation around encryption, and the pending commercialization of quantum computing, and how that can break all the existing kinds of encryption, and there's going to be this whole renaissance in cryptography. Why did you pick that, amongst the entire palette of technologies you could pick from? What's special about cryptography for helping people in the future? >> Okay, so encryption. I think for most people, when you hear about the study of encryption, you may think that the goal of these researchers is to make encryption more robust and more difficult to break. You can probably imagine that's the type of research that we are doing. >> Jeff: Right. >> And yes, we are doing that, but that's not the only direction that we are working on. Our researchers are working on different kinds of encryption, basically encryption that controls what you reveal, say part of the data being encrypted, or where, depending upon the attributes of whoever has the key, the information being revealed is slightly different. Those kinds of encryption, well, it's kind of hard to explain verbally, but what they call functional encryption is becoming a reality. And I believe having data that inherently has that protection mechanism, and also controls who has access to the information, is one of the keys to addressing the current status. Current status, what I mean by that is that we're going to have a more connected world, with more information created through IoT and all that kind of stuff, more sensors out there, I think. 
So it is great on the one side that we can do a lot of things, but at the same time there are tons of concerns from the perspective of privacy and security, and the question of how to make those things happen together, addressing the concerns while getting the leverage, the benefit. You can create super complex access-control systems, but those things, I hate to say it, inherently bring in some vulnerabilities and break at some point, which we don't want to see. >> Right. >> So I think having those security and privacy mechanisms in the file itself is one of the keys to addressing those issues: again, getting the benefit of this connectedness while maintaining privacy and security for the future. >> Right. >> And that, in the end, will be better for everyone and a better society. So I could have picked another (Gomi and Jeff laughing) technology, but I felt like this one was easier for me to explain to a lot of people. So that's mainly the reason I led with it. >> Well, you keep publishing, so I'm sure you'll work your way through most of the technologies over a period of time, but it's really good to hear, because there's a lot of talk about security, not enough about privacy. Usually the regs and the compliance laws lag what's happening in the marketplace. So it's good to hear that's really a piece of the conversation, because without the privacy, the other stuff is not as attractive. And we're seeing all types of issues that are coming up, and the regs are catching up. So privacy is a super important piece. But the other thing that is so neat is to be exposed, not being an academic, not being in this basic research every day, to the opportunity to really hear, at this level of detail, the amount of work that's being done by big-brained smart people to move these basic technologies along. We deal often in kind of higher-level applications, versus the stuff that's really going on under the covers. 
So it's really a great opportunity to learn more and hear from, and probably understand some, though not all, of these great baseline technologies, really good stuff. >> Yup. >> Yeah, so thank-you for inviting us for the first one. And we'll be excited to sit in on some sessions, and I'm going to learn. What's that one phrase that I've got to learn? The N-I-K-Z-T. NIZKs. (laughs) >> NIZKs. (laughs) >> Yeah, NIZKs, the brief history of quasi-adaptive NIZKs. >> Oh, all right, yeah, yeah. (Gomi and Jeff laughing) >> All right, Kazuhiro, I give you the final word- >> You will find out, yeah. >> You've been working on this thing for over a year, I'm sure you're excited to finally let it out to the world. I wonder if you have any final thoughts you want to share before we send people back off to their sessions. >> Well, let's see, I'm sure if you're watching this video, you are almost there for the actual summit. It's about to start, and I hope you enjoy the summit. I mentioned the benefits of this virtual format, that we can reach out to many people, but obviously there's also a flip side of the coin as well. With a physical event we can have more spontaneous conversations and more in-depth discussion; certainly we can do that, though perhaps not today, it's more difficult to do. But yeah, I encourage you, and I encourage my researchers on the NTT side as well, to communicate with all of you, and hopefully to have more in-depth, meaningful conversations starting from here. So just feel comfortable to reach out to me and all the other NTT folks. And also, the researchers from other organizations, I'm sure they're looking for this type of interaction moving forward as well, yeah. >> Terrific, well, thank-you for that open invitation, and you heard it everybody: reach out, touch base, and communicate, and engage. 
And it's not quite the same as being physically in the halls, but you can talk to a whole lot more people. So Kazu, again, thanks for inviting us. Congratulations on the event, and we're really glad to be here covering it. >> Yeah, thank-you very much, Jeff, appreciate it. >> All right, thank-you. He's Kazu, I'm Jeff, we are at Upgrade 2020, the NTT Research Summit. Thanks for watching, we'll see you next time. (upbeat music)
Leicester Clinical Data Science Initiative
>>Hello. I'm Professor Toru Suzuki, chair of cardiovascular medicine and associate dean of the College of Life Sciences at the University of Leicester in the United Kingdom, where I'm also director of the Leicester Life Sciences Accelerator. I'm also an honorary consultant cardiologist within our university hospitals, part of the national health system, the NHS Trust. Today I'd like to talk to you about our Leicester Clinical Data Science Initiative. Now, a brief background on Leicester, its university and hospitals. Leicester is in the center of England. The national health system is divided by country: the United Kingdom comprises England, Scotland to the north, Wales to the west, and Northern Ireland, which is on a different island. But it is the national health system of England, which now has a history of about 70 years, that will predominantly be discussed today. Owing to the fact that we're basically in the center of England, although only about one hour north of London, we have a catchment of about 100 miles, which takes us from the eastern coast of England, bordering Birmingham to the west, north to just south of Liverpool and Manchester, and south to the tip of London. We have one of the busiest national health system trusts in the United Kingdom, with a catchment of about 100 miles and one million patients a year. Our main hospital, the general hospital, which is actually called the Royal Infirmary, has an accident and emergency, which means emergency department, that is one of the busiest in the nation. I work at Glenfield Hospital, which is one of the main cardiovascular hospitals of the United Kingdom and Europe. Academically, the Medical School of the University of Leicester is ranked 20th in the world, behind only Cambridge, Oxford, Imperial College and University College London for the UK. This is a research-weighted ranking.
We are therefore a very research-focused university. As for the cardiovascular research groups, based mainly within Glenfield Hospital, we are ranked as the 29th independent research institution in the world, which places us well within our group. As you can see, the top-ranked institutions in this field, regardless of cardiology, include institutes like the Broad Institute and the Whitehead Institute at MIT, the Wellcome Trust Sanger Institute, the Howard Hughes Medical Institute, Kemble, and Cold Spring Harbor, and as a hospital we rank in this field in a relatively competitive manner as well. Therefore we are a very research-focused hospital too. Now, to give you the unique selling points of Leicester: we're the largest and busiest national health system trust in the United Kingdom, but we also have a very large and stable, as well as ethnically diverse, population. The population often spans three generations, which allows us to do a lot of cohort-based studies across primary and secondary care cohorts, many of which are well characterized and focused on genomics. We also have a biomedical research centre focusing on chronic diseases, funded by the National Institute for Health Research, which funds clinical research in the hospitals of the United Kingdom, and we have a very rich regional life science cluster, including medtechs and small and medium-sized enterprises. Now, the bottom line is that I am the director of the Leicester Life Sciences Accelerator, which is tasked with industrial engagement in the local and national sectors, not excluding the international sector as well. Broadly, we have academics and clinicians with interests in healthcare, which includes science and engineering, as well as non-clinical researchers.
And prior to the COVID outbreak, the government announced a £450 million investment into our university hospitals, which I hope will be going forward. Now, to give you a brief background on where the scientific strategy of the United Kingdom lies: the industrial strategy was brought out as part of the process that involved exiting the European Union, and part of that was the life sciences sector deal. Among this, as you will see, there were four grand challenges that were put in place: AI and the data economy, the future of mobility, clean growth, and the aging society. As a medical research institute, a lot of the focus we have been transitioning to within my group, our projects, is on using data and analytics with artificial intelligence, but also on understanding how chronic diseases evolve as part of the aging society, and therefore we will be able to address these grand challenges for the country. Additionally, the national health system also has its long-term plan, which we align to. One of its priorities is digitally enabled care, and the hope is that this will go mainstream over the next 10 years. To do this, what is envisioned is that clinicians will be able to access and interact with patient records and care plans wherever they are, with ready access to decision support and artificial intelligence, and that this will enable predictive techniques, which include linking with clinical and genomic as well as other data supports, such as imaging, and new medical breakthroughs. There has been what's called the Topol Review, which discusses the future of healthcare in the United Kingdom and preparing the healthcare workforce for the delivery of the digital future. It clearly concludes that we will be using automated image interpretation and predictive analytics driven by artificial intelligence, as mentioned in the long-term plan.
We will also be engaging natural language processing, speech recognition, and reading the genome using genomics as well. We are in what is called the Midlands. As I mentioned previously, the Midlands comprises the East Midlands, where we are as Leicester, along with other places such as Nottingham, and the West Midlands, which involves Birmingham; collectively, we are the Midlands. Together we comprise what is called the Midlands Engine, which focuses on transport, accelerating innovation, trading with the world, and being an ultra-connected region. Therefore our work will also involve connectivity moving forward, and as part of our healthcare plans we hope to enable total digital connectivity, which will allow us to embrace digital, data, and connectivity. These three key words will link our healthcare systems for the future. Now, to give you a vision for the future of medicine: the vision is that there will be a very complex data set that we will need to work on, involving genomics, phenomics, and imaging, in what is called "omics" analysis. This simply means complex data sets that we need to work on. This will integrate with our clinical data platforms and bioinformatics, and we'll also get real-time information on physiology through interfaces and wearables. Important for this is that we have computing processes that now allow this kind of complex data analysis in real time, using artificial intelligence and machine learning based applications, to enable visualization and analytics that can be output through various user interfaces to the clinician and others. One of the characteristics of the United Kingdom and the NHS is that we embrace data and capture data on most citizens from when they are born, from the cradle, to when they die, to the grave.
And it's important that we are able to link this data up to understand the journey of that patient over time. When they come to hospital, which is secondary care data, we will get disease data; when they go to their primary care general practitioner, we will be able to get early check-up data as well as follow-up monitoring, but also social care data. If this can be linked, it will allow us to understand how aging and deterioration, as well as frailty, develop in these patients. And to do this, we have numerous data sets available, including clinical letters, blood tests, and more advanced tests, which include genetics and imaging, which we can integrate into a patient journey that allows us to understand the digital journey of that patient. I have called this the digital twin patient cohort: a digital simulation of patient health journeys using data integration and analytics. This is a technique that has often been used in industrial manufacturing to understand the maintenance and service points for hardware and instruments, but we would be using it to stratify and predict diseases. This would also be monitored and refined, using wearables and other types of complex data analysis, to allow, in the end, preemptive intervention, a paradigm shift in how we undertake medicine, which at this time is more reactive than proactive. As infrastructure, we are presently working on putting together what's called a data safe haven, or trusted research environment, one which sits within the clinical environment of the university hospitals, with curated data management, and which allows us to enable data mining of the databases, or I should say of the trusted research environment, within the clinical environment.
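The "digital twin" idea above, linking primary, secondary, and social care records into one chronological patient journey, can be sketched minimally. All records, field names, and the linking scheme below are hypothetical, invented only to illustrate linkage on a shared patient identifier:

```python
from datetime import date

# Hypothetical records from three care settings, keyed by a shared patient ID.
secondary_care = [
    {"patient_id": "p1", "date": date(2021, 3, 2), "event": "admission: heart failure"},
]
primary_care = [
    {"patient_id": "p1", "date": date(2020, 11, 5), "event": "GP check-up: raised BP"},
]
social_care = [
    {"patient_id": "p1", "date": date(2021, 6, 1), "event": "home care assessment"},
]

def patient_journey(patient_id, *sources):
    """Link one patient's records across care settings into a single
    chronological timeline -- the core of the digital twin cohort idea."""
    events = [r for src in sources for r in src if r["patient_id"] == patient_id]
    return sorted(events, key=lambda r: r["date"])

journey = patient_journey("p1", secondary_care, primary_care, social_care)
```

In a real trusted research environment the linkage would of course run on pseudonymized identifiers under governed access, not raw IDs as here.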
Hopefully, we will then be able to anonymize that data to allow its use by academics, and possibly also by partnering industry, for further data mining and tool development, which we could then field test using our real-world database of patients, which will be continually updated in our system. In the cardiovascular group, we have what's called the BRICCS cohort, for Biomedical Research Informatics Centre for Cardiovascular Science, which was started long before I joined, in 2010, and which today has captured almost 10,000 patients or more who come through to Glenfield Hospital for various treatments, and even those who have not. We ask for their consent to use their blood for genetics and blood tests, for genomics testing, but also for imaging as well as other consentable medical information. So far there are about 10,000 patients, and we've been trying to extract and curate their data accordingly. Again, as a reminder of the strengths of Leicester: we have one of the largest and busiest trusts, with a very large patient cohort and a focused drive at the university, which allows for work on chronic diseases such as heart disease. I just mentioned our efforts on heart disease, which cover about 10,000 patients and are ongoing right now, but we wish to include further chronic diseases, such as diabetes, respiratory diseases and renal disease, and further to understand the multi-morbidity between these diseases, so that we can understand how they interact as well. Finally, I'd like to talk about the Leicester Life Sciences Accelerator. This is a new project that was funded by the U, starting this January, for three years. I'm the director of this, and all the groups within the College of Life Sciences that are involved with healthcare and clinical work take part.
And through this we hope to support innovative industrial partnerships and collaborations in the region, as well as nationally, and further on internationally as well. I realize that today I am talking to a more business- and commercially-oriented audience, and we would welcome interest from your companies and partners: come to Leicester to work with us on clinical healthcare data and to drive our agenda forward, so that we can enable innovative research, but also product development in partnership with you, moving forward. Thank you for your time.
Neural Audio Captioning and Its Application to Stethoscopic Sounds
>> Hello, I'm Kunio Kashino from the Biomedical Informatics Research Center of NTT Basic Research Laboratories. I'd like to talk about neural audio captioning and its application to stethoscopic sounds. First, I'd like to think about what captioning is in comparison with classification. When there is a picture of a cat, you will recognize it as a cat. This is classification, or object recognition. Captioning, on the other hand, is describing what's going on in a more complex scene. This is an example of a visual case, but the same can be thought of for sound. When you hear a car on the street, you can recognize it as a car. You can also explain the sound when someone is hitting a toy tambourine. Of these, the generation of explanatory notes on sounds, or audio captioning, is a new field of research that has just emerged. (soft music) This is an experimental system that we proposed last year. It listens to sound for two seconds and provides an explanation of the sound in that section. (soft music) Moving the slider to the left produces a short, concise description. Moving it to the right produces a longer, more detailed description. (soft music) The descriptions are not always perfect, but you can see how it works. Here are some early works in this field of study. In 2017, Drossos conducted a study that gave sound a string of words, but there was still a lot of overlap with the classification task at that time. At around the same time, Ikala, who was my student at the University of Tokyo, proposed a system that could express sounds in onomatopoeic terms, as a sequence of phonemes. Recently more works have been reported, including those describing more complex scenes in normal sentences and using sentences for sound retrieval. Let's go over the differences between classification and captioning once again. Classification is the process of classifying, or quantizing, features into a fixed number of classes. Captioning, on the other hand, means converting the features.
For example, the time series of sound features is translated into a time series of words. Classification requires that classes be determined in advance, but captioning does not. In classification, relationships between classes are not usually considered, but in captioning, relationships between elements are important, not just what is there. In the medical context, classification corresponds to diagnosis, while with captioning we address explanation and inference rather than diagnosis. Of course, diagnosis is an important act in medical care, and neither classification nor captioning is necessarily better than the other. Captioning is useful to express comparisons, degree, time course and changes, and the relationship between cause and effect. For example, it would be difficult to prepare a class for the situation represented by the sentence "over the past few days, pneumonia has gradually spread and worsened." Therefore both of them should be utilized according to the purpose. Now let's consider the challenges of captioning. If you look at this picture, everyone will say it's a picture of a cat. Yes, it is. No one calls this a grey and white animal with two round eyes and triangular ears. Similarly, when a characteristic noise is heard from the lungs as a person breathes, you may just say "rhonchi present" rather than describe the noise in detail. That is, it's a good idea to use the label if it's appropriate, as long as the person you are talking to can understand it. Another challenge with captioning is that the exact same description may or may not be appropriate depending on the situation. When you are crossing an intersection and a car pops up, it's important to say it's a car, and it's inappropriate to discuss the engine sound quality. But when you bring a car to a repair shop and have it checked, you have to describe the engine sound in detail. Just saying that the engine is running is obviously not enough.
It is important to note that appropriate expressions vary, and a single best answer cannot be determined. With these issues in mind, we configured a neural audio captioning model. We call this system CSCG, or Conditional Sequence-to-sequence Caption Generator. The system extracts a time series of acoustic features from biological sounds, such as heart sounds, converts them into a series of words, and outputs them with class labels. The green parts are neural networks. They were trained so that the system outputs both captions and labels simultaneously. The behavior of the sentence decoder is controlled by conditioning it with an auxiliary input, in order to cope with the fact that the appropriate captions can vary. In the current experiments we employ a parameter called specificity. It is the amount of information contained in the words of the entire caption. In other words, the more words, and the more infrequent or more specific words are used, the higher the specificity. In our experiments the entire network was trained using a set of heart sounds. The sound samples were extracted from sound sources covering 55 difficult cases. For each case, the signal was about one minute in length, so we extracted sound samples by windowing the signal. In one case, four cycles' worth of signal were cut at timings synchronized with the heartbeats. In another case, signals of six seconds in length were cut out at regular time intervals of three seconds. Class labels and seven kinds of explanatory sentences were given manually for each case. This table shows the classification accuracy. We organized the categories as a general overview, a description of the sound, and the presence or absence of 12 difficult heart diseases. We then prepared two to six classes for each category.
As a result, we found that it is possible to classify with a fairly high accuracy of 94% or more in the case of beat-synchronous windowing, and 88% or more in the case of regular windowing. This graph shows the effect of the specificity control. The horizontal axis represents the specified specificity, or level of detail. The vertical axis represents the amount of information contained in the resulting output captions. As you can see, the data is distributed along a straight line with a slope of one, indicating that the specificity control is working correctly. Let's take a look at generated captions. This table shows examples with varying specificity input for three types of sound sources: normal, large split of second sounds, and coronary artery disease. If the specified specificity is small, the generated sentence is short. If the specificity value is greater, you can see that detailed, long sentences are generated. In this table, all captions were confirmed to be appropriate for the sound by human observation. However, the system does not always produce the correct output for now. Sometimes it may produce a wrong caption or a statement containing a linguistic error, but generally speaking we consider the results promising. In this talk, I first discussed the problem of audio captioning in comparison with classification. It is not just sound recognition, and is therefore a new topic in the research field. Then I proposed an automatic audio captioning system based on the conditional sequence-to-sequence model and tested it with heart sounds. The system features a multitasking configuration for classification and captioning, and allows us to adjust the level of detail in the description according to the purpose. The evaluation results are promising. In the future, we intend to enrich the learning data and improve the system configuration to make it a practical system in the near future. Thank you very much.
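The specificity control described here, where a caption's level of detail is measured as the amount of information its words carry, can be illustrated with a toy sketch. This is not the CSCG model itself: the vocabulary, its frequencies, and the greedy word selection are invented, with self-information in bits standing in for the talk's information measure:

```python
import math

# Toy vocabulary with made-up unigram frequencies; rarer (more specific)
# words carry more information, measured as -log2(frequency).
VOCAB = {
    "sound": 0.30, "heart": 0.20, "noise": 0.15, "present": 0.12,
    "systolic": 0.05, "murmur": 0.04, "split": 0.03, "diastolic": 0.02,
}

def info_bits(word):
    """Self-information of a word, in bits."""
    return -math.log2(VOCAB[word])

def caption_info(words):
    """Total information content of a caption -- the specificity metric."""
    return sum(info_bits(w) for w in words)

def generate(specificity_bits):
    """Greedily add words (common ones first) until the caption's total
    information reaches the requested specificity level."""
    chosen = []
    for word in sorted(VOCAB, key=info_bits):
        if caption_info(chosen) >= specificity_bits:
            break
        chosen.append(word)
    return chosen
```

A low target yields a couple of common words, while a high target pulls in rarer, more specific terms, mirroring how the demo's slider lengthens the description.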
Adam Mariano, Highpoint Solutions | Informatica World 2019
(upbeat music) >> Live, from Las Vegas it's theCUBE. Covering Informatica World 2019. Brought to you by Informatica. >> Welcome back everyone to theCUBE's live coverage of Informatica World 2019. I'm your host Rebecca Knight along with my co-host John Furrier. We are joined by Adam Mariano, he is the Vice President of Health Informatics at HighPoint Solutions. Thanks for coming on theCUBE! >> Thank you for having me. >> So tell our viewers a little bit about HighPoint Solutions, what the company does and what you do there. >> Sure, HighPoint is a consulting firm in the healthcare and life sciences spaces. If it's data and it moves, we can probably assist with it. We do a lot of data management, and we implement the full Informatica stack. We've been an Informatica partner for about 13 years; we were their North American partner of the year last year. We're part of a much larger organization, IQVIA, which came out of the merger of IMS Health and Quintiles: a large data asset holder and a big clinical research organization. So we're very much steeped in the healthcare data space. >> And what do you do there as Vice President of Health Informatics? >> I'm in an interesting role. Last year I was on the road 51 weeks. I was at over a hundred facilities, helping our customers, prospective customers, or just people we've met in the space get strategic about how they're going to leverage data as a corporate asset: figure out how they're going to use it for clinical insight, how they're going to use it for operational support in payer spaces, and really think about how they're going to execute on their next strategy for big data, cloud, and the digital reimagining of the healthcare space. >> So we know that healthcare is one of the industries that has always had so much data, similar to financial services. How are the organizations that you're working with beginning to wrap their brains around this explosion of data?
Well, it's been an interesting two years. Over the last two years there isn't a single conversation that hasn't started with governance, so it's been an interesting space for us. We're a big MDM proponent, we're a big quality proponent, and you're seeing folks come back to basics again: I need data quality, I need data management from a metadata perspective, I need to really get engaged from a master data management perspective, and they're really looking for an integrated metadata and governance process. Healthcare's been late to the game, about five or six years behind other industries. I think now that everybody's gone through meaningful use and digital transformation on some level, we're now arcing towards consumerism, which really requires a big deep dive into the data. >> Adam, data governance has been discussed at length in the industry; certainly recently, everyone knows GDPR's one-year anniversary, et cetera, et cetera. But the role of data is really critical for SaaS applications and new kinds of use cases, and the term "data provisioning as a service" has been kicked around. So I'd love to get your take on that. What is the definition, what does it mean, data provisioning as a service? >> The industry's changed. We've sort of gone through that boomerang: we started deep in the client-server, standard warehouse space. Everything was RDBMS. Then everybody moved to appliances, then everybody came back and decided Hadoop, which is now 15-year-old technology, was the way to go. Now everybody's drifting to cloud, and you're trying to figure out: how am I going to provision data to all these self-service users who are now in the bring-your-own-tools space? I'd like to use Tableau, I'd like to use Qlik, I like SAS. People want to write code to build their own data science. How can you provision to all those people, and do so in a standard fashion, with the same metadata, with the same process?
And there isn't a way to do that without some automation at this point; it's really just something you can't scale without having an integrated data flow. >> And what are the benefits of data provisioning as a service? What's the impact of that, what does it enable? >> The biggest impact is time to market. If you think about warehousing projects, historically six-month or year-long projects, I can now bring data to people in three weeks, in two days, in a couple of hours. Think about how ingestion works in the Informatica stack: something like EDC, the Enterprise Data Catalog, automatically ingests data; that gets pushed out into IDQ for quality, passed along to Axon for data governance and process, and then the Enterprise Data Lake handles actual self-service provisioning, allowing users to go in and look at their data assets like a store, pick things off the shelf, combine them, and then publish them to their favorite tools. That premise is going to have to show up everywhere. It's going to have to show up on AWS on Amazon, and on Azure. It's going to have to show up on Google. It's going to have to show up regardless of what tool you're using. And if you're going to scale data science in a real, meaningful way, without having to stack a bunch of people doing data munging, this is the way it's going to have to go. >> Now you are a former nurse, and you now-- >> I'm still a nurse, technically. >> You're still a nurse! >> Once a nurse, always a nurse. Don't upset the nurses. >> I've got an ear thing going on, can you help me out here? (laughter) >> So you have this really unique vantage point, in the sense that you are helping these organizations do a better job with their data, and you also have a deep understanding of what it's like to be the medical personnel on the other side, who have to actually implement these changes, changes that will really affect how they get their jobs done.
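The catalog-quality-publish flow Mariano outlines can be sketched in miniature. This is only an illustration of the pattern, not Informatica's API; the stage names merely stand in for tools like EDC and IDQ, and all datasets and fields are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    rows: list
    lineage: list = field(default_factory=list)  # metadata trail carried along

def ingest(name, raw_rows):
    """Catalog step: register the asset (stands in for a catalog like EDC)."""
    return Dataset(name, raw_rows, lineage=[f"ingested:{name}"])

def quality_check(ds, required):
    """Quality step: drop rows missing required fields (stands in for IDQ)."""
    ds.rows = [r for r in ds.rows if all(r.get(f) is not None for f in required)]
    ds.lineage.append(f"quality:required={required}")
    return ds

def publish(ds, consumer):
    """Provisioning step: hand the curated asset to a self-service consumer,
    with its metadata trail intact."""
    ds.lineage.append(f"published:{consumer}")
    return ds

ds = ingest("labs", [{"patient": "a1", "value": 7.2},
                     {"patient": None, "value": 3.1}])
ds = quality_check(ds, ["patient", "value"])
ds = publish(ds, "tableau")
```

The point of the sketch is that every consumer, whatever the tool, receives data that passed the same quality gate and carries the same lineage metadata.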
How would you say, how does that change the way you think about what you do? And then also, what would you say are the biggest differences for the nurses who are on the floor today, in the hospital serving patients? >> I think in America, when we talk about healthcare, we often talk about doctors; we only talk about nurses during nursing shortages. Nurses deliver all the care. At this point, the way that medicine is run, physicians see patients for an average of two to four minutes. Really think about what that translates to: if you're not doing a surgery on somebody, it's enough time to talk to them about their problem, look at their chart, and leave. So nursing care is the point of care, and we have a lot of opportunity to create deflection in how care is delivered. I can change quality outcomes, I can change safety problems, I can change length of stay, by impacting how long people keep IVs in after they're no longer being used. And so understanding the way nursing care is delivered, and the lack of transparency that exists with EMR systems and analytics, there's an opportunity for us to really create an open space for nursing quality. So we're talking a lot now to chief nursing officers, who have never been a target of analytics discussions. They don't necessarily have the budget to do a lot of these things, but they're the people who have the biggest point of control and change in the way care is delivered in a hospital system. >> Care is also driven by notifications and data. >> Absolutely. >> So you can't go into a hospital without hearing all kinds of beeps and things. With AI and all the things we've been hearing about, there are now so many signals; the question is, what do they pay attention to? >> Exactly. >> This becomes a really interesting thing, because you can get notifications; if everything's instrumented, this is where machine learning and understanding workflows and outcomes play a big part. This is the theme of the show.
It's not just the data and coding, it's what are you looking for? What's the problem statement, or what's the outcome or scenario where you want the right notification at the right time, or about a resource: is the operating room open? Maybe get someone in. These kinds of new dynamics are enabled by data, what's your take on all this? >> I think you've got some interesting things going on; there's a real signal-to-noise problem in healthcare. Everybody is trying to build an algorithm for something. Whether that's who's going to overstay their visit, who's going to be readmitted, what's the risk for somebody developing sepsis? Who's likely to follow up on a pharmacy refill for their medication? We're getting into the space where you're going to have to start to accept correlation as opposed to causation, right? We don't have time to wait around for a six-month study, or a three-year study where you enroll 15,000 patients. I've got three years of history, I've got a current census for the last year. I want to figure out, when do I have the biggest risk for falls in a hospital unit? Low staffing? Early-career physicians and nurses? High use of psychotropic meds? There are things that, if you've been in the space, you can pretty much figure out what should go into the algorithm. And then being pragmatic about what data hospitals can actually bring in to use as part of that process. >> So what you're getting at is really domain expertise is just as valuable as coding and wrangling data, and engineering data. >> In healthcare, if you don't have SMEs you're not going to get anything practical done. And so with a lot of these solutions, one of the interesting touch points of our organization, and I think it's where we shine, is bringing that subject matter expertise into a space where pure technology is not going to get it done. It's great if you know how to do MDM. But if you don't know how to do MDM in healthcare, you're going to miss all the critical use cases.
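The falls-risk idea discussed above amounts to a weighted combination of unit-level signals chosen by domain experts. The sketch below is purely illustrative: the features and weights are invented for the example and are not clinically validated.

```python
# Illustrative only: a toy falls-risk score built from the kinds of unit-level
# signals a nurse SME might nominate (staffing, clinician experience,
# psychotropic med use). Weights are made up for the sketch.

FEATURE_WEIGHTS = {
    "low_staffing": 0.4,           # 1.0 = badly understaffed shift
    "early_career_ratio": 0.3,     # share of early-career clinicians on the unit
    "psychotropic_med_rate": 0.3,  # share of patients on psychotropic meds
}

def falls_risk_score(unit):
    """Weighted sum of unit signals, each clipped to [0, 1], rounded to 3 places."""
    score = sum(
        FEATURE_WEIGHTS[k] * min(max(unit.get(k, 0.0), 0.0), 1.0)
        for k in FEATURE_WEIGHTS
    )
    return round(score, 3)

unit = {"low_staffing": 1.0, "early_career_ratio": 0.5, "psychotropic_med_rate": 0.2}
score = falls_risk_score(unit)  # 0.4*1.0 + 0.3*0.5 + 0.3*0.2 = 0.61
```

This is the pragmatic, correlation-driven style the speaker describes: start from features an SME can defend, score today's census, and refine against outcomes rather than waiting years for a causal study.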
So it really is being able to engage that user base, and the SMEs, and bring people like nurses to the forefront of the conversation around analytics and how data will be used, to your point, which signals to pay attention to. It's critical. >> Supply chains, another big one. >> Yeah. >> Impact there? >> Well, it's the new domain in MDM. It's the one that was ignored for a long time. I think people had a hard time seeing the value. It's funny, I spoke at 10 o'clock today about supply chain; that was the session that I had with Nathan Rayne from BJC. We've been helping them embark on their supply chain journey. And from all the studies you look at, it's one of the easiest places to find ROI with MDM. There's an unbelievable number of ways-- >> Low hanging fruit. >> $24.5 billion in waste a year in supply chain. It's just astronomical. And it's really easy things: it's about just-in-time supplies, am I overstocking, am I losing critical supplies for tissue samples, that sometimes cost $100,000, because a room has been delayed. And therefore that tissue sits out, it ends up expiring, it has to be thrown away. I'll bring up Nathan's name again, but he speaks to a use case that we talked about, which is they needed a supply at a hospital within the system; 30 miles away, another hospital had that supply. The supply costs $40,000. You can only buy them in packs of six. The hospital that needed the supply was unaware that one existed in the system, so they ordered a new pack of six. So you have a $240,000 expense that you could have avoided with a $100 Uber ride, right? And so the reality is that supply could have been shipped, could have been used, but because that wasn't automated and because there was no awareness, you couldn't leverage that. Those use cases abound. You can get into length of stay, you can get into quality and safety; there's a lot of great places to create wins with supply chain in the MDM space.
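The $240,000 story above boils down to a missing cross-facility inventory lookup. Here is a minimal sketch of the check that could have caught it, with a made-up data model and item names used only for illustration:

```python
# Hypothetical cross-facility supply check: before ordering a new pack,
# see whether any sister hospital already stocks the item.
# Sites, items, and quantities are invented for the example.

inventory = {
    "hospital_a": {"stent_kit": 0},
    "hospital_b": {"stent_kit": 1},  # 30 miles away, one on the shelf
}

def source_supply(item, needing_site):
    """Return a transfer plan if another site has stock, else a new order."""
    for site, stock in inventory.items():
        if site != needing_site and stock.get(item, 0) > 0:
            return {"action": "transfer", "from": site}
    return {"action": "order_new_pack"}

plan = source_supply("stent_kit", "hospital_a")
# With shared, mastered supply data this returns a transfer from hospital_b
# instead of triggering a $240,000 six-pack order.
```

The point of supply-chain MDM in this story is not the lookup itself, which is trivial, but having one mastered item record and one shared view of stock so the lookup is even possible across facilities.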
>> One of the conversations we're having a lot in theCUBE, and we're having here at Informatica World, it centers around the skills gap. And you have an interesting perspective on this, because you are also a civil rights attorney who is helping underserved people with their H1B visas. Can you talk a little bit about the visa situation, and what you're seeing, particularly as it relates to the skills gap? >> We're in an odd time. We'll leave it at that. I won't make a lot of commentary. >> Yes. >> I'm a civil rights and immigration attorney, and on the immigration side I do a lot of pro bono work with primarily communities of color, but communities at risk, looking to help adjust their immigration status. And what you've had is a lot of fear. And so you have, well, you might have an H1B holder here, you may have somebody who's on a provisional visa, or family members, and because those family members can no longer come over, people are going home. And you're getting people who are now returning. So we're seeing negative net migration from places like Mexico; you're seeing a lot of people take their money and their learnings and go back to India and start companies there and work remotely. So we're seeing a big up-tick in people who are looking for staffing again. I think the last quarter or so has been a pretty big ramp-up. And I think there's going to continue to be this hole; we're going to have to find new sources of talent if we can't bring people in to do the jobs. It also speaks, I think, to our STEM education, the fact that we're not teaching kids. I have a 28 year old daughter who loves technology, but I can tell you, her education when she was a kid was lacking in this technology space. I think it's really an opportunity for us to think about how do we train young people to be in the new data economy. There's certainly an opportunity there today. >> And what about the, I mean you said you were talking about your daughter's education.
What would you have directed her toward? What kinds of, when you look ahead to the jobs of the future, particularly having had various careers yourself, what would you say the kids today should be studying? >> That's two questions. So my daughter, I told her do what makes you happy. But I also made her learn SQL. >> Be happy, but learn SQL. >> But learn SQL. >> Okay! >> And for kids today I would say look, if you have an affinity and you think you enjoy the computer space, so you think about coding, you like HTML, you like social media. There are a plethora of jobs in that space and none of them require you to be an architect. You can be a BA, you can be a quality assurance person, you can be a PM. You can do analysis work. You can do data design, you can do interface design, there's a lot of space in there. I think we often reject kids who don't go to college, or don't have that opportunity. I think there's an opportunity for us to reach down into urban centers and really think about how we make alternate pathways for kids to get into the space. I think with all the academies out there, you're seeing the rise of Udemy and a lot of these other places that are offering academy-based programs that are three, six months long, and they're placing all of their students into jobs. So I don't think that the arc that we've always chased, which is you've got to come from a brand-name school to get into the space, I don't think it's that important. I think what's important is: can I get you the clinical skill, so that you've understood how to move data around, how to process it, how to do testing, how to do design, and then I can bring you into the space and bring you in as an entry-level employee. That premise I think is not part of the American dream, but it should be. >> Absolutely, looking for talent in these unexpected places. >> College is not the only entry point. We're back to needing, I think, vocational schools for the new data economy, which don't exist yet.
That's an opportunity for sure. >> And you said earlier, domain expertise, in healthcare as an example, points to what we've been hearing here at the conference, is that with data, understanding outcomes and the value of the data actually is just as important as standing up, wrangling data, because if you don't have the data-- >> You make a great point. The other thing I tell young people in my practice, young people I interact with, people who are new to the space is, okay, I hear you want to be a data scientist. Learn the business. So if you don't know healthcare, get a healthcare education. Come be on this project as a BA. I know you don't want to be a BA, that's fine. Get over it. But come be here and learn the business, learn the dialogue, learn the economy of the business, learn who the players are, learn how data moves through the space, learn what the actual business is about. What does delivering care actually look like? If you're on the payer side, what does claims processing look like from an end-to-end perspective? Once you understand that, I can put you in any role. >> And you know digital affords new non-linear ways to learn; we've got video, I see young kids on YouTube, you can learn anything now. >> Absolutely. >> And scale up your learning at a pace, and if you get stuck you can just keep getting through it no-- >> And there are free courses everywhere at this point. Google has a lot of free courses, Amazon will let you train for free on their platform. It's really an opportunity-- >> I think you're right that vocational specialism is actually a positive trend. You know, look at the college and university scandals these days, is it really worth it? (laughter) >> I got my nursing license through a vocational school originally. But the nursing school, they didn't have any technology at that point. >> But you're a great use case. (laughter) Excellent. Adam, thank you so much for coming on theCUBE, it's been a pleasure talking to you. >> Thank you.
>> I'm Rebecca Knight for John Furrier. You are watching theCUBE. (techno music)
Chhandomay Mandal, Dell EMC & Pat Harkins, RVH - Dell EMC World 2017
>> Announcer: Live from Las Vegas, it's The Cube, covering Dell EMC World 2017, brought to you by Dell EMC. (electronic music) >> Welcome back to The Cube's coverage of Dell EMC World here in Las Vegas. I'm your host, Rebecca Knight, along with my co-host John Walls. Today we are talking to Chhandomay Mandal. He is the Senior Consultant, Product Marketing, here at Dell EMC, as well as Pat Harkins, who is the CTO of Informatics and Technology Services at Royal Victoria Health Center. Thanks so much for joining us. >> Thanks for having us. >> Glad to be here. >> So, Pat, I want to start with you. Tell us a little bit about Royal Health. >> Sure. Well, Royal Victoria Regional Health Center in Barrie, Ontario. We're about an hour north of Toronto, Ontario. It's a regional health center, variety of services. We provide oncology, cardiac, child and youth mental health, and what we're doing up there is providing a regional role, regional services for Meditech. We host Meditech for a number of other hospitals in our area, and we're currently looking to expand that, and increase our volume, but also change platforms as well. >> So tell us about some of the biggest challenges that you see. >> One of the biggest challenges that we're seeing right now within Ontario is the actual funding model, of course. Everything's a little bit tighter. But from a technology perspective, it's actually keeping up with technology, with limited budgets and so forth, and staying with the latest and greatest, providing the best service to our customers, our physicians, our clinicians, which in turn is the best patient care. >> Chhandomay, you look at a client like Pat, who has very specific needs in health care. You've got time issues, you've got privacy issues.
How do you deal, or what do you see, as far as health care IT fitting in to what you're doing and the services you're providing to somebody like Pat, specifically knowing that these are very unique challenges and critically important challenges? >> Sure. We at Dell EMC look at what the problem is holistically. As Pat was mentioning, in health care IT, one of the challenges we see is providing consistent high performance with low latency so that the clinicians, physicians can access the patient data in a timely way, quickly; they do not spend more time entering the data or accessing the data, rather spending more time with the patients. Then there is another problem that Pat alluded to. Any EHR, electronic health record system, is actually a consolidation of many workloads. You have the EHR workload itself, then you have analytics that needs to be run on it. There are other virtualized applications, and then there is desktop virtualization, because all the physicians now say they need to access the patient data. So effectively, we need to have platforms, and in this particular case, essentially All-Flash platforms, that can offer very high performance, consistently low latency, high storage efficiency in terms of reduced footprint, so that Pat and other health providers can consume less rack space, less space in the data center, reduced power and cooling, all those things, and at the end of the day, ensuring the copy data that they have between all the databases is efficiently managed, kind of transforming the health care IT business workflow. That's where we at Dell EMC come in, with our All-Flash portfolio, for health providers like Royal Victoria Health. >> So Pat, on your side of the fence then, from your perspective, limited resources, right? You've got to be very, very protective of what you have, and obviously you have your own challenges. How do you balance all that out in today's environment, where speed matters?
Efficiency matters now more than ever. >> And that's, efficiency matters big time with our physicians, and what's happening is we look for partners like Dell EMC to help us with that. One thing that was happening in our experience with efficiency and with timely presentation of data, we weren't getting that with our previous vendor. And when we went to Dell EMC we work with them as a partner and said, "How can we improve on that? "What can we look for?" And we looked at Flash as being that solution, not only providing the performance that we were looking for but also providing built-in security that we were looking for, but also providing even more efficiency, so when the physician, the clinicians were getting that data, they get it in a timely manner, and that means that they're actually spending more time with the patient, they're not searching for the data, they're not searching for reports and so forth. >> Are you hearing any feedback from the patients themselves about how things have changed at the health center? >> Well, for me I'm still stuck in the dungeon. I'm in IT, so we're in the basement, right? so I don't necessarily-- >> John: Glad you could get out for the week. (laughing) >> Exactly. You know, we grow mushrooms in that area. So what's happening with, I don't necessarily talk with the patient, but we're getting the positive feedback from our clinicians and physicians who are then, if they're happy, that means they're providing usually, providing better patient care, and so that means the patients are happy. (audio cuts out) >> Is understanding the true, the point of patient health care from the point they're born to the point that their life ends, and what we're understanding is how getting that data and being able to provide that information to clinicians, see trends, be able to treat, be more proactive instead of a reactive in health care. 
That's the goal, and with technology and the storage and collecting the data and analytics, we'll actually be able to provide that in the future. >> Chhandomay, from your perspective here, what is it about XtremIO you think that makes this a good match? And now you've had X2, right, and sorry Pat. >> Pat: No, it's fine. >> You just deployed, what, six months ago, you said? But now you've got an X2 version to consider, perhaps for your next deployment. What's the fit? Why does it work? >> So you mention Dell EMC XtremIO. The core premise of XtremIO is that we will be able to provide high performance and consistently low latency no matter what workload you are running, no matter how many workloads you are consolidating on the same array. It is the same high performance and low latency, and we have inline, all-the-time data reduction technologies that are all working on in-memory metadata, which essentially boils down to: we are doing all those storage operations at the control plane level, without touching the data plane where the data actually lives. So that in turn helps us to consolidate a lot of the copies. You mentioned analytics, right? You have your production database for your patient data, then you need to load that data into an ETL system for running the analytics, then you possibly have your instant development copies, copies for back-up. Now with XtremIO, across all the copies we do not store anything that's not unique through that entire cluster, and all the metadata is stored in memory, so for us we can create copies that do not take any extra space, and you can run your workloads on the copies themselves with the same performance as the production volume, and with all those data reduction technologies and data services running.
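The copy mechanism described above, where only unique data is stored and a copy is just metadata pointing at existing blocks, can be illustrated with a toy content-addressed store. This is a simplified teaching model, not XtremIO's actual implementation.

```python
# Hedged illustration of inline deduplication plus metadata-only copies:
# blocks are stored once, keyed by content fingerprint (data plane);
# volumes and copies are just lists of fingerprints (control plane).
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}   # fingerprint -> block bytes (data plane)
        self.volumes = {}  # volume name -> list of fingerprints (metadata)

    def write(self, volume, data_blocks):
        fps = []
        for block in data_blocks:
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)  # store only if unique
            fps.append(fp)
        self.volumes[volume] = fps

    def snapshot(self, src, dst):
        """A copy is a metadata operation: share the same fingerprints."""
        self.volumes[dst] = list(self.volumes[src])

store = DedupStore()
store.write("prod_db", [b"patient-rows-1", b"patient-rows-2"])
store.snapshot("prod_db", "analytics_copy")
unique_blocks = len(store.blocks)  # still 2: the copy added no data
```

Because the snapshot never touches the data plane, an ETL or test/dev copy is effectively free in space and can be served with the same performance as the original, which is the property being described in the interview.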
What in turn makes Pat's life easier is that he can reduce the footprint, he can reduce or consolidate all the workloads on the array itself, and his application developers can bring the medical applications online much faster; he can run his analytics and reports faster, being proactive about the care, and in a nutshell, pretty much taking the storage maintenance, storage planning, storage operations out of the picture, so that they can spend time innovating in IT, helping patient care, as opposed to doing routine maintenance and planning. >> So it lets him focus on what he wants to do. You're not spending a majority of your time on mundane tasks, you're actually improving your operations. Give me a real-life example if you can. We talk about more efficiency and better speed; these are all good things and great terms to talk about, but in terms of actually improving patient care, or providing enhanced patient care, what does it mean? How does it translate? >> Well, how it translates, in a lot of cases with the physicians, and what we've seen already with them, is that because we actually improved performance, we're actually able to get more data in analytics, as we say, and then we're able to produce those reports and turn them around, in a lot of cases, a lot quicker than what we've been able to do before. An example is our decision support team, once we moved to XtremIO. It used to take 14 hours to run some of the reports that they were getting. They would start 'em at four o'clock in the evening, and they would run to six a.m. in the morning, roughly. When we put the XtremIO in and they ran the same reports, they started at 4 o'clock. By six p.m. that night they were completed. They actually called me because they thought they had something wrong. (laughing) It's never been that quick. >> John: Boss, this is too good. >> Exactly. >> John: I messed up.
>> And so they actually ran the report three times, and they ran QA against the report to confirm that yeah, it is that efficient now. Now that we've turned that around, we actually provide that to the clinicians. We're getting better patient care, and they're able to get their information and react to it quicker as well. Talking about the massive amounts of data that's being generated, that now needs to be analyzed in order to optimize performance: how much do your developers know about data, and are you doing more training for them so that they know what they're doing? >> Well, we always provide training. We're always working on that, but the thing is, we are providing more training, and we're providing it to the point that they actually have to be able to mine that data. There's so much data; it's how to manage the data, mine the data. For our analysts at RVH, we look to Meditech, our EHR vendor, to help us on that as well, but at the same time, we're increasing our data warehouses, we're increasing our repositories and registries, so that when we do have that data, we can get at it. >> I'm wondering too if using this kind of cutting-edge technology has had an impact on your recruitment. Michael Dell in his keynote mentioned how increasingly, employees are saying the kinds of technologies being used are having an impact. >> No, absolutely. I know our vendors, our staff are very excited about the technology. Where we were going before, they weren't, not that they weren't happy, but we were always dealing with mundane tasks. We had some issues that were repetitive, that we couldn't seem to get through. Now that we've actually upgraded to the Flash storage and moved through that, they're excited.
They love the management, the ease of use; they have a lot of great ideas now, they're becoming innovative in their thinking, because they know they have the performance and the technology in the back end to do the job for them. >> I hate to ask you what's next, because you're six months into your deployment, but this is a constantly evolving landscape, constantly improving. Obviously Dell EMC is responding really well to competitive pressures. What is your road map? If you look two, three years down the road, in terms of the kinds of improvements you want to get, the kinds of efficiencies that you can get gains in, and then realistically from a budgetary standpoint, how do you balance all that together? >> Budgetary, there's always the constant discussion with our CFO, and so he's been very supportive, but where we see it going is we want to be able to, maybe not even necessarily go to the Cloud, but become a private Cloud for our partners, and be able to provide a lot of these regional services that we couldn't before with the technology that we had, and be able to expand the services. In Ontario we're seeing some budget constraints, as I mentioned. A lot of these smaller sites, the patients, the customers as we would say, are expecting the service, but with technology and the dollars, they might not be able to do it on their budget; but as we bring stuff back into our data center and are able to provide the technology, we've been able to spread that out, not only on the storage and compute side, but also virtualization, VDI desktops and so forth. That's where I see we're going over the next little while. >> How much learning goes on between your colleague CTOs at other health centers, and even health centers and hospitals in the States? Do you talk a lot about-- >> You know what? We do talk a lot. We share stories.
Some good, some bad, but we try, we all have the same problems, and why re-create the wheel when you could actually learn from other people? So a lot of the CTOs, we do get together, informally and formally, and understand where we're going, and then we also reach out through our vendors and through some of our user groups and so forth to the US, and to some of our cohort CTOs down there, to understand what they're doing, because they look at it from a different lens at times. >> So speaking of a different lens, from the other side of the fence, Chhandomay if you would, where do you see this headed in terms of your assistance in health care IT, what X2 might be able to do? What kinds of realizations do you think are on the horizon here, and what's possible for a health care provider like RVH? >> So all the organizations, if you look across the industry, they are in the digital transformation journey. Health care providers are no exception, and what we are enabling is the IT transformation part, and Dell XtremIO, and with the XtremIO X2 that we just announced, we are enabling that IT transformation for all of our customers, including health care providers like Royal Victoria Health. Now, with X2 specifically, we continue to improve upon the high performance and the unmatched storage efficiencies that we offer, effectively, again, bringing down the cost of hosting different types of workloads, managing it on a single platform with a much lower total cost of ownership for health care providers like Pat, so that at the end of the day, they will be able to provide better patient care, be it a doctor or clinician trying to access the data from their endpoints, or the finance or billing department trying to turn over the bills in a much shorter span, as opposed to the typical 45-day turnover that we see.
So that's where we see not just XtremIO X2, but the Dell EMC All-Flash storage portfolio, helping customers in their digital transformation journey in health care, and helping the IT department with its own IT transformation journey. >> Chhandomay, Pat, thanks so much for joining us. >> Thank you. >> It was great, thank you. >> I'm Rebecca Knight for John Walls. We will have more from The Cube's coverage of Dell EMC World after this. (electronic music)