
DockerCon 2022 | Sudhindra Rao


 

>>And welcome back to theCUBE's coverage here on the DockerCon main stage. I'm joined by Sudhindra Rao, development manager at JFrog. Welcome to theCUBE. You guys have been on many times with JFrog, great product, you're doing great. Congratulations on all the success. Thanks for coming on. >>Thank you. Thank you for having me. >>So I'm really interested in talking about the supply chain: package management, the software supply chain, and software workflow. It's a huge discussion, one of the hottest issues being worked on in DevOps and DevSecOps on the planet. It's all over the news, a real challenge. Open source is growing so fast, and is so successful with cloud scale and automation, that as you guys know, you've got to know what's trusted. You've got to build trust into the product itself so developers don't have to do all the rework. Everyone kind of knows this right now, and this is a key problem you guys are solving. So I've got to ask you, what is the package management issue? Why is it such an important topic when you're talking about security? >>Yeah, so if you look at how software is built today, about 80 to 90% of it is open source. And currently, the way we pull those open source libraries, we just have blind trust in repositories that are central, and we rely on whatever mechanism they have built to establish trust with the developer who is building them. From our experience, we have learned that that is not sufficient: it doesn't tell us that that particular developer built that end product, or that the code they wrote is actually what comes out in the end product. So we need something to bridge that gap. We need a trustworthy mechanism there. And there are a few other elements to it.
>>All these central repositories are prone to single points of failure, and we have all experienced what happens when one of them goes down: it stops production and it stops software development, right? What we are working on is how to build a system where we can actually have liquid software as a reality and just continue building software, regardless of whether those central systems are live all the time, and also have an implicit mechanism to trust what is coming out of them. >>You know, we've talked with you guys in the past about the building blocks of software and what flows through the pipelines; all of that is part of what is automated these days, and it's important. And what I've got to ask you, because security these days is "don't trust anything": here you're trusting software to be, in essence, verified. I'm simplifying, obviously. So what is being done to solve this problem? Because states change: you've got data, you've got software injections, and we've got containers and Kubernetes right here helping. All of this is on the table now, but what is currently being done? Because it's really hard. >>Yeah, it is a really hard problem. Currently, when we develop software, we have a team we work with, and we trust whatever comes out of that team. We have a certified production mechanism to build that software and actually release it to our customers. When it is done in house, it is easy, because we control all the pieces. Now, what happens when we are doing this with open source? We don't have that chain. We need that chain, and it needs to be independent of where the software was produced versus where it is going to be used.
We need a way to have provenance for how it was built and which parts actually went into making the end product, plus continuing evidence that the software can still be used. So if a vulnerability is discovered now and released in some database, we need corrective action that says this vulnerability is associated with this version, and today there is no automated mechanism for that. So we are working on an automated mechanism where you can run a command that tells you what has happened with this piece of software, this version of it, and whether it is production worthy or not. >>It's a great goal, I've got to say, but I can guarantee there are going to be a ton of skeptics among security people: "Oh no, I doubt it, there's always a back door." What's the relationship with Docker? How do you guys see this evolving? Obviously it's a super important mission, and it's not a trend that's going to go away; supply chain software security is here to stay. We saw this in hardware, and everyone kind of knows what happens when you see these vulnerabilities. You've got to have trusted software, right? So what's the relationship with Docker, and what are you doing here at DockerCon? >>When we started working on this project, both Docker and JFrog had similar ideas in mind: how do we make this trust mechanism available to anyone who wants it, whether they're interacting with Docker Hub or not, and how do we make it a mechanism that provides this kind of trust without the developer having to do anything extra.
So what we worked on with Docker is integrating our solution so that anywhere Docker is being used today, people don't have to change their behaviors or their code, because changing a single line of code in hundreds of CI systems is going to be really hard. We wanted a seamless integration between Docker and the solution we are building, so that you can continue to do docker pull and docker push but get all the benefits of the supply chain security solution that we have. >>Okay, so let's step back for a minute and discuss the project and where the commercial JFrog and Docker interests intersect. Break that apart for us: what are the intended goals, what is the project, where is it, how do people get involved, and how does it intersect with the commercial interests of JFrog and Docker? >>Yeah, my favorite topic to talk about. The project is called Pyrsia. Pyrsia is an open source project. It is an effort that started with JFrog and Docker, but it is by no means limited to them; we already have five companies contributing. We are actually building a working product, which we will demo during our talk, and there is more to come. It is being built iteratively, and the solution is basically to provide a decentralized mechanism, similar to how you do things with Git, so that the packages you are using are available at your nearest peer. There is also going to be a multi-node build verification mechanism, and all of the information about the packages you are going to use will be available in a provenance log.
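The decentralized, provenance-backed distribution Rao describes can be sketched in a few lines. This is purely illustrative (the record shape and function names are invented, not Pyrsia's actual API): the idea is that a transparent pull can check the artifact fetched from the nearest peer against a digest recorded when the package was built and verified.

```python
import hashlib

# Illustrative sketch only; Pyrsia's real interface differs. A transparent
# "docker pull" replacement fetches an artifact from the nearest peer, then
# checks it against the digest recorded in the provenance log at build time.

def verify_against_provenance(artifact: bytes, record: dict) -> bool:
    """True when the fetched artifact matches the recorded build digest."""
    return hashlib.sha256(artifact).hexdigest() == record.get("sha256")

# A provenance record as described in the interview: which version was
# built, plus evidence (here, a digest) tying the bytes to that build.
record = {
    "package": "example-lib",
    "version": "1.0.0",
    "sha256": hashlib.sha256(b"built-artifact").hexdigest(),
}

print(verify_against_provenance(b"built-artifact", record))   # matching bytes
print(verify_against_provenance(b"tampered-bytes", record))   # altered bytes
```

The point of the design is that this check runs without the developer doing anything: the pull either verifies or fails.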
>>So you can always query that and find out the latest state of affairs, what CVEs were discovered, and make quick decisions. You don't have to react after the fact, after it has been in the news for a while; you can react to your customers' needs as quickly as they happen. And we feel our emphasis on open source is key here because, given our experience, 80 to 90% of software that is packaged contains open source, and there are currently no engineering mechanisms that give us confidence that whatever we are building, and whatever dependencies we are pulling, is actually worth putting into production. >>I mean, it's a great service. You think about all that's coming out of open source; open source has become very social, too. People start projects just to code, get into the community, hang out, get in the fray, and just do stuff. And then you see venture capital coming in and funding those projects; it's a new economic system as well, not just code. So I can see this pipeline beautifully set up for scale. How do people get involved with this project? Because again, my questions are all going to be around integration and how frictionless it is; that's going to be the challenge. You mentioned that, so I can see people getting involved. How do people join? What do they do? What can they do here at DockerCon? >>Yeah, so we have a website, pyrsia.io, and you'll find all kinds of information there. We have a GitHub presence. We have community meetings that are open to the public. We are doing all of this under the umbrella of the Linux Foundation; we bootstrapped the project within the Linux Foundation. So people who have an interest in these areas can come in, attend those meetings, add comments, or just attend our stand-up.
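The "run a command and find out whether this version is production worthy" idea sketched above might look like the following. Every name and record shape here is hypothetical, invented for illustration; it is not Pyrsia's actual interface.

```python
# Hypothetical sketch: decide production-worthiness of a package version by
# consulting a provenance log and a CVE database. Names and shapes invented.

def production_worthy(package, version, provenance_log, known_cves):
    """Return (ok, reason) for a package version based on the log."""
    entry = next((e for e in provenance_log
                  if e["package"] == package and e["version"] == version),
                 None)
    if entry is None:
        return False, "no provenance record: build cannot be verified"
    cves = known_cves.get((package, version), [])
    if cves:
        return False, "known vulnerabilities: " + ", ".join(cves)
    return True, "verified build, no known vulnerabilities"

log = [{"package": "example-lib", "version": "1.0.0"}]
cves = {("example-lib", "1.0.0"): ["CVE-2022-0001"]}
print(production_worthy("example-lib", "1.0.0", log, cves))
print(production_worthy("example-lib", "9.9.9", log, cves))
```

The design choice worth noting is that the decision is made from recorded evidence at query time, so a newly published CVE changes the answer without anyone editing the package itself.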
So we are running it like an agile process: we do stand-ups, we do retrospectives, we do planning, and we are building this iteratively. What you'll see at DockerCon is just a little bit of a teaser of what we have built so far and what you can expect to see at future events. >>So thanks for coming on theCUBE. We've got 30 seconds left; put in a quick plug for swampUP, coming up. >>Yeah, so we will talk a lot more about Pyrsia and our open source efforts and how we would like you all to collaborate. We'll be at swampUP in San Diego, May 24th to 26th. I hope to see you there, to discuss more about Pyrsia, and to see what you all will do with this project. Thank you. >>All right, thanks for coming on. Back to the main stage. I'm John Furrier. Thanks for watching. >>Thank you.

Published Date : May 11 2022



DockerCon 2022 | Shubha Rao


 

(upbeat music) >> Hey, welcome back to theCUBE's coverage of the DockerCon main stage. I'm John Furrier, host of theCUBE. We're here with Shubha Rao, Senior Manager of Product Management at AWS, in container services. Shubha, thanks for coming on theCUBE. >> Hi, thank you very much for having me, excited to be here. >> So obviously we're doing a lot of coverage with AWS recently, on containers, cloud native, microservices, and we see you guys at all the events. But tell me, what is your role in the organization? >> Yeah, so I lead the product management and developer advocacy team in the AWS Container Services group, where we focus on elastic containers. And what I mean by elastic containers is all the AWS opinionated, out-of-the-box solutions that we have for you, like ECS and App Runner and Elastic Beanstalk: services that integrate with the AWS ecosystem. My team manages product management and speaks to customers and developers like you all, to understand how we can improve our services for you to use them more seamlessly. >> So I know AWS has a lot of services that have containers involved with them, and there's a lot of integration within the cloud; Amazon's as cloud native as you're going to get. If I were a new customer, where would I start with containers, if you had to give me advice? And then, where do I have a nice roadmap to grow within AWS? >> Yeah, that's a great question; a lot of customers ask us this. We recommend that customers choose whatever best fits their application needs and their operational flexibility. If you have an application that can be managed pretty much end to end by an AWS service, we recommend starting at the highest level of abstraction that works for your application. And that means something like App Runner, where you can bring in a web application and run it end to end.
And if there are things that you want to control and tweak, then we have services like ECS, where you get control and the flexibility to tweak it to your needs: integrations, running your own agents, running partner solutions, or even customizing how it scales and all the characteristics related to it. And of course, a lot of our customers also run Kubernetes; if that is a requirement for you, if your apps are already packaged to run with the Kubernetes ecosystem, then we have EKS for you. So it comes down to application needs, and operationally, how much of the operations you want us to handle versus how much you want to control. With all that, pick the highest level of abstraction, so that we can do the work on your behalf, which is the goal of AWS. >> Yeah, well, we always hear that you guys handle all that undifferentiated heavy lifting. Since you're in product management, I have to ask, because you guys have a bit of a longer view as you think about what's on the roadmap: what type of customer trends are you seeing in container services? >> We see a lot of customers who want pluggability for their services of choice, and our EKS offering actually helps with that. And we see customers who want an opinionated, out-of-the-box solution rather than building blocks, and ECS brings you that experience. The new trend we are seeing is that a lot of customer workloads are also in their data centers and on-prem environments, be it branch offices, data centers, or other areas. And so we've recently launched the Anywhere offerings for you.
So ECS Anywhere brings you an experience where your workloads run under management that you control, while we manage the scaling, the orchestration, and the whole monitoring and troubleshooting aspect of it. That is the new trend: something our customers use as a way to migrate their applications to the cloud in the long term, or just to get the same experience and the same constructs they're familiar with in their own data centers and environments. >> You know, Shubha, we hear a lot about containers. It's becoming standard in the enterprise now, mainstream. But customers, when we talk to them, have this evolution: they start with containers, realize how great it is, and become container-full, right? Then you start to see them trying to evolve to the next level, and EKS comes into the equation. We see that in cloud native. Is EKS a container? Or is it a service? How does that work with everything? >> So EKS is an Amazon managed container service, where we do the operational setup, upgrades, and other things for the customer on their behalf. Basically, you get the same Kubernetes APIs that you use for your application, but we handle the integrations and the operations of keeping it up and running with high availability, in a way that meets the needs of your applications. >> And more and more people are dipping their toe in the water, as we say, with containers. What are some of the things you've seen customers do when they jump in and start implementing that phase-one container adoption? Also, there's a lot of headroom beyond that, as you mentioned. What are the first couple of steps that they take? They jump in; is it a learning process? Is it serverless? Where do the connection points all come together?
>> Great. So I want to say that no one solution we have fits all needs; it's not the best thing for all your use cases or all of your applications. How it all comes together is that AWS gives you an ecosystem of tools and capabilities. Some customers want to build the castle themselves with each Lego block, and some customers want a ready-made thing. One of the things I speak to customers about is rethinking which of the knobs and controls they really need to have, because none of the services we have is a one-way door; there is always flexibility and the ability to move from one service to another. So my recommendation is to start with offerings where Amazon handles much of the heavy lifting for you. That means starting with something like our serverless offerings where, for example with Lambda and Fargate, we manage the host, we manage the patching, we manage the monitoring. That would be a great place to use our ECS offering and basically get an end-to-end experience in a couple of days. Over time, if you have more needs, if you want more control, if you want to bring in your own agents or whatever else, you have the option to use your own EC2 instances, or to take it to other parts of the AWS ecosystem where you can tweak it to your needs. >> Well, we're seeing a lot of great traction here at DockerCon, and all the momentum around containers. And then you're starting to get into trust and the security supply chain, as open source grows exponentially, which is a great thing. So what can we expect to see from your team in the coming months, as this rolls forward? It's not going away anytime soon; it's going to be integrated and keep on scaling.
What do we expect from the team in the next month or so? Couple of months? >> Security is our number one job, so you will continue to see more features, capabilities, and integrations to ensure that your workloads are secure. Availability and scaling are the things we do to keep the lights on, so you should expect to see all of our services growing to make availability and scaling to your needs more user friendly, easier, and simpler. And then, very specifically, I want to touch on a few services. With App Runner, today we have support for public-facing web services; you can expect the number of use cases you can meet with App Runner to increase over time. We want to invest in making it an AWS end-to-end workflow experience for our customers, because that's the easiest journey to the cloud, and we don't want you to wait months and years to leverage the benefits of what AWS provides. With ECS, we've already launched Fargate and Anywhere, to bring you more flexibility in terms of easier networking capabilities, more granular controls in deployment, and more controls to help you plug in your preferred solutions. And in EKS, we are going to continue to keep up with Kubernetes versions and bring simpler experiences for you.
You know, you can engage with us on the containers roadmap, which is on GitHub. Or you can find many of us at events like this one: AWS Summits, DockerCon, and many of the other meetups. Or find us on LinkedIn; we're always happy to chat. >> Yeah, always open, open source. Open source meets cloud scale, meets commercialization. All happening, all great stuff. Shubha, thank you for coming on theCUBE. Thanks for sharing. We'll send it back now to the DockerCon main stage. I'm John Furrier with theCUBE. Thanks for watching. (upbeat music)
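As a concrete footnote to the abstraction discussion in this interview: with Fargate, AWS manages the host, and your unit of configuration is the ECS task definition. Below is a minimal sketch of that shape; the values are illustrative, and in practice you would register the dictionary with the ECS API (for example via boto3's `register_task_definition`).

```python
# Minimal sketch of an ECS task definition for the Fargate launch type.
# Values are illustrative; AWS manages the underlying hosts for you.

def fargate_task_definition(family: str, image: str,
                            cpu: str = "256", memory: str = "512") -> dict:
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],  # serverless launch type
        "networkMode": "awsvpc",                 # required for Fargate tasks
        "cpu": cpu,                              # task-level CPU units
        "memory": memory,                        # task-level memory (MiB)
        "containerDefinitions": [
            {"name": family, "image": image, "essential": True},
        ],
    }

td = fargate_task_definition("web-app", "nginx:stable")
print(td["requiresCompatibilities"], td["networkMode"])
```

Because the host is managed, moving later to self-managed EC2 capacity is mostly a matter of changing the launch type and capacity settings rather than rewriting the container definitions, which is the "not a one-way door" point made above.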

Published Date : May 11 2022



Seth Rao, FirstEigen | AWS re:Invent 2021


 

(upbeat music) >> Hey, welcome back to Las Vegas. theCUBE is live at AWS re:Invent 2021. I'm Lisa Martin. We have two live sets; we are running one of the largest hybrid tech events, and most important events of the year, with AWS and its massive ecosystem of partners. Two live sets, two remote sets, over a hundred guests on the program talking about the next generation of cloud innovation. I'm pleased to welcome a first-timer to theCUBE: Seth Rao, the CEO of FirstEigen, joins me. Seth, nice to have you on the program. >> Thank you, nice to be here. >> Talk to me about FirstEigen. Also, explain the name to me. >> So FirstEigen is a startup company based out of Chicago. Eigen is a German word; it's a mathematical term. It comes from eigenvectors and eigenvalues, which are used in what's called principal component analysis, a technique for detecting anomalies, which is related to what we do. We look for errors in data, and hence our name, FirstEigen. >> Got it, that's excellent. So talk to me. One of the resounding themes of this year's re:Invent is that, especially in today's age, every company needs to be a data company. >> Yeah. >> It's one thing to say it; it's a whole other thing to put it into practice with reliable, trustworthy data. Talk to me about some of the challenges you help customers solve, because part of the theme is not just being a data company: if you're not a data company, you're probably not going to be around much longer. >> Yeah, absolutely. What we have seen across the board, across all the verticals and customers we work with, is that data governance and data management teams are constantly firefighting to find errors in data and fix them. So what we have done is create software, DataBuck, that autonomously looks at every data set and discovers errors that are hidden to the human eye; they're hard to find, hard to detect.
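The principal-component idea behind the company's name can be shown in a small sketch: project data onto its main eigenvector and flag points with a large residual. This is a from-scratch 2D toy, not FirstEigen's actual algorithm, and it assumes the covariance cross-term is nonzero.

```python
import math

# Toy PCA anomaly scoring in 2D: points far from the first principal axis
# (the top eigenvector of the covariance matrix) get large residual scores.
# Illustrative only; assumes the covariance cross-term sxy is nonzero.

def pca_anomaly_scores(points):
    """Residual distance of each 2D point from the first principal axis."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # Covariance matrix entries.
    sxx = sum(x * x for x, _ in centered) / n
    syy = sum(y * y for _, y in centered) / n
    sxy = sum(x * y for x, y in centered) / n
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]] and its eigenvector.
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    vx, vy = sxy, lam - sxx                  # unnormalized eigenvector
    norm = math.hypot(vx, vy) or 1.0
    vx, vy = vx / norm, vy / norm
    # Residual = distance from each point to its projection on the axis.
    return [math.hypot(x - (x * vx + y * vy) * vx,
                       y - (x * vx + y * vy) * vy) for x, y in centered]

data = [(float(i), float(i)) for i in range(10)] + [(5.0, -5.0)]
scores = pca_anomaly_scores(data)
print(scores.index(max(scores)))  # index of the off-axis outlier
```

The inliers lie along one direction, so the off-axis point stands out with the largest residual, which is the anomaly signal the interview alludes to.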
Our machine learning algorithms figure out those errors before they impact the business. The usual way things are done is very laborious, time-consuming, and expensive. We have taken a process that takes man-years, or at least man-months, and compressed it to a few hours. >> So dramatic time savings there. >> Absolutely. >> So six years ago, when you were founded, you realized this gap in the market and thought: it's taking way too long, we don't have this amount of time. Gosh, can you imagine if you guys hadn't been around the last 22 months, when time was certainly of the essence? >> Absolutely, yeah. Six years ago, when we founded the company, my co-founder, who is also the CTO, brought extensive experience in validating data and data quality, and my own background and experience is in AI and ML. What we saw was that people were spending an enormous amount of time, and yet errors were still getting through to the business side. At that point it comes back, and people are firefighting again. So it was a waste of time, a waste of money, a waste of effort. >> Right, but there's also the potential for brand damage, brand reputation. Whatever products and services you're producing, if your employees don't have the right data, if there are errors, and what's going out to the consumers is wrong, then you've got a big problem. >> Absolutely. Interesting you should mention that, because over the summer there was a very big-name Danish bank that had to send apology letters to its customers because it had overcharged them on their mortgages: the data in the backend had errors in it, and nobody realized; it was inadvertent. But somebody ultimately caught it and did the right thing. If the data is incorrect and you're doing analytics or reporting on it, or you're sending people a bill that they need to pay, it had better be very accurate. Otherwise it's serious brand damage.
It has real implications, and a whole bunch of other issues as well. >> It does, and those things can snowball very quickly. >> Yeah. >> So talk to me about this. One of the things we've seen in recent months and years is the explosion of data, and when the pandemic struck, we had this scattering of people and data sources. The edge is persistent; we've got this work-from-anywhere environment. What are some of the risks for organizations? They come to you saying, help us ensure that our data is trustworthy. Trust is key, but how do you help organizations that are somewhat in flux figure out how to solve that problem? >> Yeah, you're absolutely correct. There is an explosion of data, number one. Along with that, there is also an explosion of analytical tools to mine that data. As a consequence, there is exponential growth in microservices and in how people consume that data. In the old world, when there were a few consumers of data, it was a lot easier to validate: you had a few people who were the gatekeepers, the data stewards. But with an explosion of data consumers within a company, you have to take a completely different approach. You cannot have people manually inspecting data and writing rules to validate it; there has to be a change in the process. You start validating the data as soon as it comes into your system: you check whether the data is reliable at point zero. >> Okay. >> Then it goes downstream, and at every stage the data hops, there is a chance it can get corrupted. These are called systems risks: because data comes from multiple systems onto the cloud, errors creep in. So you validate the data from the beginning all the way to the end, and the kinds of checks you do increase in complexity as the data goes downstream. You don't want to boil the ocean upfront; you want to do the essential checks.
Is my water drinkable at this point, right? I'm not trying to cook with it as soon as it comes out of the tap; is it drinkable? >> Right. >> Good enough quality. If not, then we go back to the source and say, guys, send me better quality data. So: sequence the right process, and check every step along the way. >> How much of a cultural shift is FirstEigen helping to facilitate within organizations? Like we talked about, if an error gets in, there are so many downstream effects that can happen. How do you help organizations shift their mindset? Because that's a hard thing to change. >> Fantastic point. In fact, what we see is that the mindset change is the biggest wall between companies and good data. People have been living in the old world, where there is a team, far downstream, that is responsible for accurate data. But the volume and complexity of data have gone up so much that that team cannot handle it anymore; it's beyond their scope, and it's not fair to expect them to save the world. So the mind shift has to come from organizational leadership that says: the data engineers up front, who bring the data into the organization and take care of the data assets, have to start thinking about trustable data. If they do, everything downstream becomes easy; otherwise it's much, much more complex for everyone. And that's what we do. Our tool provides an autonomous solution to monitor the data. It produces a data trust score with zero human input; our software validates the data and gives an objective trust score. Right now it's a popularity contest: people vote, "yeah, I think I like this, I like that." That's okay, maybe it's acceptable, but the reason they do it is that there is no way to objectively say the data is trustable. If there is a small error somewhere, it's a needle in a haystack.
It's hard to find out, but we can. With machine learning algorithms our software can detect the errors, the minutest errors, and to give an objective score from zero to a hundred, trust or no trust. So along with a mindset, now they have the tool to implement that mindset and we can make it happen. >> Talk to me about some of the things that you've seen from a data governance perspective, as we've seen, the explosion, the edge, people working from anywhere. This hybrid environment that we're going to be in for quite some time. >> Yeah. >> From a data governance perspective and Dave Vellante did his residency. We're seeing so many more things pop up, you know different regulations. How do you help facilitate data governance for organizations as the data volume is just going to continue to proliferate? >> Absolutely correct. So data governance. So we are a key component of data governance and data quality and data trustworthiness, reliability is a key component of it. And one of the central, one of the central pillars of data governance is the data catalog. Just like a catalog in the library. It's cataloging every data asset. But right now the catalogs, which are the mainstay are not as good as they can be. A key information that is missing is I know where my data is what I don't know is how good is my data? How usable is it? If I'm using it for an accounts receivable or an accounts payable, for example, the data better be very, very accurate. So what our software will do is it'll help data governance by linking with any data governance tool and giving an important component which is data quality, reliability, trustability score, which is objective to every data asset. So imagine I open the catalog. I see where my book is in the library. I also know if there are pages missing in the book is the book readable? So it's not good enough to know that I have a book somewhere but it's how good is it? >> Right >> So DataBuck will make that happen. 
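The staged checks and the objective zero-to-hundred trust score described above can be sketched together in a few lines. This is a hypothetical illustration only: the check names, stages, and scoring rule are invented here, and DataBuck's actual ML-based scoring is far richer than this.

```python
# Hypothetical sketch, not FirstEigen's implementation: staged validation
# checks ("is the water drinkable?" at ingestion, stricter rules further
# downstream) rolled up into an objective 0-100 trust score per data set.

def check_required_fields(rows):
    # One pass/fail result per row: does the row carry the core fields?
    return [("id" in r and "amount" in r) for r in rows]

def check_amount_range(rows, low=0, high=1_000_000):
    # Stricter, later-stage rule: is the amount plausible?
    return [low <= r.get("amount", low - 1) <= high for r in rows]

# Later stages run the checks of earlier stages plus their own.
STAGES = {
    "ingestion": [check_required_fields],
    "warehouse": [check_required_fields, check_amount_range],
}

def trust_score(rows, stage):
    """Objective 0-100 score: the mean per-check pass rate at this stage."""
    if not rows:
        return 0
    rates = [sum(c(rows)) / len(rows) for c in STAGES[stage]]
    return round(100 * sum(rates) / len(rates))
```

A batch that scores 100 at ingestion can still score 75 at the warehouse stage once the stricter range rule kicks in, which is the "check every step along the way" sequencing described in the conversation.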
>> So when customers come to you, how do you help them start? 'Cause obviously the data, the volume, it's intimidating. >> Yeah. >> Where do they start? >> Great. This is, interestingly enough, a challenge that every customer has. >> Right. >> Everybody is ambitious enough to say, no, I want to make the change. But the previous point was, if you want to do such a big change, it's an organizational change management problem. So the way we recommend customers is start with a small problem. Get some early victories. And this software is very easy. Just bring it in, automate a small part. You have your sales data or transactional data, or operational data. Take a small portion of it, automate it. Get reliable data, get good analytics, get the results and start expanding to other places. Trying to do everything at one time, it's just too much inertia, organizations don't move. You don't get anywhere. Data initiatives will fail. >> Right. So you're helping customers identify where those quick wins are? >> Yes. >> And where are the landmines, that we need to be able to find out where they are so we can navigate around them? >> Yeah. We have enough experience over 20 years of working with different customers. And I know if something can go wrong, we know where it'll go wrong, and we can help steer them away from the landmines and take them to areas where they'll get quick wins. 'Cause we want the customer to win. We want them to go back and say, look, because of this, we were able to do better analytics. We are able to do better reporting and so on and so forth. We can help them navigate this area.
So we're working with a Fortune 50 company in the US, and it's a manufacturing company. Their CFO was a little concerned about whether the data she's reporting to Wall Street is acceptable, does it have any errors? And ultimately she's signing off on it. So she had a large team on the technology side that was supporting her, and they were doing their best. But in spite of that, she's a very sharp woman, she was able to look and find errors and say, "Something does not look right here, guys. Go back and check". Then it goes back to the IT team and they go, "Oh yeah, actually, there was an error". Some errors had slipped through. So they brought us in and we were able to automate the process. Where they could do a few checks within that audit window, we were able to do an enormous number of checks more, more detailed, more accurate. And we were able to reduce the number of errors that were slipping through by over 98%. >> Big number. >> So, absolutely. Really fast. Really good. Now that this has gone through, they feel a lot more comfortable, and the question is, okay, in addition to financial reporting, can I use it to iron out my supply chain data? 'Cause they have thousands of vendors. They have hundreds of distributors. They have products all over the globe. Now they want to validate all the data, because even if your data is off by one or 2%, if you're a hundred-plus billion dollar company, it has an enormous impact on your balance sheet and your income statement. >> Absolutely. Yeah. >> So we are slowly expanding as soon as they allow us. They like us, and now they're taking it to other areas beyond finance. >> Well it sounds like you have not only great technology, Seth, but a great plan for helping customers with those quick wins and then learning and expanding within, and really developing that trusted relationship between FirstEigen and your customers. Thank you so much for joining me on the program today.
Introducing the company and what you guys are doing, really cool stuff. Appreciate your time. >> Thank you very much. >> All right. >> Pleasure to be here. >> For Seth Rao, I'm Lisa Martin. You're watching theCUBE, the global leader in live tech coverage. (upbeat music)

Published Date : Dec 2 2021



Krishna Kottapalli and Sumant Rao, Abacus Insights | AWS Startup Showcase


 

(upbeat music) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase, the Next Big Thing in AI, Security & Life Sciences. Today we're joined by Abacus Insights for the Life Sciences track, I'm your host, Natalie Ehrlich. Now we're going to be speaking about creating an innovation-enabling data environment to accelerate your healthcare analytics journey, and we're now joined by our guests Krishna Kottapalli, chief commercial officer, as well as Sumant Rao, chief product officer, both working at Abacus Insights, thank you very much for joining us. >> Thank you for having us. >> Well let's kick off with our theme. Krishna, how can we create innovation-enabling data environments in order to facilitate healthcare analytics? >> Yeah, so I think if you sort of think about this, there is a lot of data proliferating inside the healthcare system, whether it's through internal sources, external sources, devices, patient monitoring platforms, and so on, and all of these essentially carry useful data and intelligence, right, and essentially the users are looking to get insights out of it to solve problems. And we're also seeing that the journey that our clients are going through is actually a transformation journey, right, so they are thinking about how do we seamlessly interact with our stakeholders, their stakeholders being members and providers, so that they don't get frustrated and feel like they're interacting with multiple parts of the health plan, right; typically when you call the health plan you feel like you're calling five different departments, so they want to have a seamless experience. And finally, you know, the data in the ecosystem within the patients, payers, and providers, being able to operate and interact, has intelligence.
So what we, what we think about this is how do we take all of this and help our clients, you know, digitize their, you know, path forward and create a way to deliver, you know, enable them to do meaningful analytics. >> Well Sumant, when you think about your customers, what are the key benefits that Abacus is providing? >> So that's a good question, so primarily speaking, we approach this as, you know, a framework that drives innovation that enables data and analytics. I mean, that's really what we're trying to do here. What Abacus does, though, is slightly different in how we think about this. So we firmly believe that data analytics is not a linear journey; I mean, you cannot say that, oh, I'll build my data foundation first and then, you know, have the data and then they shall come; that's not how it works. So for us, the way Abacus approaches this is, we focus really heavily on the data foundation part of it first. But along the way in the process, a big part of our value statement is we engage and make sure we are driving business value throughout this piece. So, so the general message is, you know, make sure innovation for the sake of innovation is not how you're approaching this, but think about your business users, get them engaged, have small, milestone-driven progress that you make along the way. So, so generally speaking, we're not trying to be just a platform who moves bits and bytes of information. The way we think about this is, you know, we'll help you along this journey, there are steps that happen that take you there. And because of which, the message to most of our customers is you focus on your core competence. You know your business, you have nuances in the data, you have nuances on needs that your customers have, you focus on that. The scale that Abacus brings, because this is what we do day in day out, is more along the area of re-usability. So if within our customers, they've got data assets, how do we reuse some of that?
How does Abacus re-use the fact that, because of what we do, we actually have data assets that, you know, we can bring data to life quickly. So, so general guidelines, right, so first is don't innovate for the sake of innovating. I mean, that's not going to get you far; respect the process, that this is not a linear path, there's always value that's happening throughout the process, and, you know, Abacus will work closely with you to make sure you recognize that value. The second part is within your organization, you have assets. There's like major data assets, there's IP, there's things that Abacus can leverage. And because we are a platform, what we focus on is configurability. We've done this for, I mean, a lot of us on the Abacus team come from the healthcare space, we have got big payer DNA, we get this, and what we also know is data rules change. I mean, you know, it's really hard when you build a system that's tightly built and you cannot change and you cannot adapt as data rules change, so we've made that part of it easier. We have, we understand data governance, so we work closely with our payers' data governance teams to make sure that part of it happens. And I think the last part of this, which is really important in the context of this conversation, is all of this is good stuff, I mean, you've got a massive data foundation, you've got, you know, healthcare expertise flowing in, you've got partnerships with data governance, all that is great. If you don't have best-in-class infrastructure supporting all of that, then you really, you will really have issues. I mean, that's just the way it works, and this is why, you know, we're built on the AWS stack, which kind of helps us, and also helps our clients along with their cloud journey.
So it's kind of an interesting set of events in terms of, you know, again, I'm going to repeat this because it's important: we don't innovate for the sake of innovating, re-use your assets, leverage your existing IP, make things configurable because data changes, and then leverage best-in-class infrastructure, so Abacus strategy progresses across those four dimensions. >> And I mean, that's an excellent point about healthcare data being really nuanced, and, you know, Krishna, would love to get your insights on what you see are the biggest opportunities in healthcare analytics now. >> Yeah, so the biggest opportunities are, you know, there are two, we think about it in two dimensions, right, one is really around sort of the analytics use cases, and second is around the operational use cases, right, so if you think about a payer they're trying to solve both, and we see, because of, you know, the way we think about data, which is close to near real time, we are able to essentially serve up our clients with, you know, helping them solve both their use cases. So think of this: when you're a patient, you go to, you know, a CVS to do something, and then you go to your doctor's office to do something, right, to be able to take a test. If all of these are known to your payer care management team, if you will, in close to near real time, they know, right, where you've been, what you can do, how to be able to sort of intervene and so on and so forth, so from a next best action and operational use cases we see a lot of them emerging, now thanks to the cloud as well as thanks to infrastructure, which can do sort of near real time. So those are our sort of operational use cases if you will.
When you think about the analytics, right, you know, all payers struggle with this, which is you have limited dollars to be able to intervene with, you know, a large set of population, right, so every piece of data that you have about your patient, about the specific provider, so on and so forth, is able to actually, you know, give you analytics to be able to intervene or engage, if you will, with the patient in a very one-to-one manner. And what we find is at the end of the day if the member or the patient is not engaged, you know, in the healthcare, you know, value chain, if you will, then your dollars go to waste, and we feel that essentially both of these types of use cases can be served up really well with a unified data platform, as well as with upstack analytics. >> And now Sumant, I'd love to hear from you, you know you're really involved with the product, how do you see the competitive landscape? How do you make sure that your product is the best out there? >> So I think, I think a lot of that is we think about ourselves across three, three vectors. We talk about it as the core platform, which, at a very minimal level of description, is really moving bits and bytes from point A to point B. That's one part of it, right, and I think it's a pretty crowded space, there's a whole bunch of folks out there trying to, you know, demonstrate that they can successfully land data from one point to the other. We do that too, we do that at scale. Where you'd start differentiating and pulling away from the pack is the second vector, which is enrichment. Now, this is where again, you have to understand healthcare data to really build a level of respect for how messy it can get. And you have to understand it and build it in a way where it's easy to keep up with the changes. We spend a lot of time, you know, in building out a platform to do that so that we can implement data quickly.
I mean, you know, for Abacus to bring a data source to life in less than 45 days, it's pretty straightforward. And you're talking on average 6 to 12 months across the rest. Because we get this, we've got a library of rules, we understand how to bring this piece, so we start pulling away from the competitors, if you will. More along the enrichment vector, because that's where we think getting high quality rules, getting these re-used, all of this is part of it, but then we bring another level of enrichment where we have, you know, we use public data sets, we use reference data sets, we tie this, we fill in the blanks in the data. All of this is the end state: let's make the data shovel-ready for analytics. So we do all of that along the way, so now applying our expertise, cleansing data, making sure the gaps are all filled out and getting this ready, and then comes the next part where we tie this data out. 'Cause it's one thing to bring in multiple sources quickly at scale, high speed and all that good stuff, which is hard work, but you know, it's, it's expected now; at the same time, how do you put all that together in a meaningful manner with which we can actually, you know, land it and keep it ready? So that's two parts. So first is the platform, the nuts and bolts, the pipes, all that is good stuff; the second is the enrichment. The third side, which is really where we start differentiating, is distribution. We have a philosophy that, you know, really the mission of the whole company was to get data available. To solve use cases like the one Krishna just talked about. So rather than make this a massive change management program that takes five years to implement, and really scares your end users away, our philosophy is let's have incremental use cases all along the way, but let's talk to the users, let them interact with data as easily as they can.
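Rao's earlier point that data rules change, so a tightly built system breaks while a configurable one adapts, can be illustrated with a small rules-as-data sketch. This is an invented example, not Abacus's actual rule format or API:

```python
# Hypothetical sketch of configurable data rules: the rules live as data,
# so they can change without code changes. Not Abacus's actual rule format.

RULES = [
    {"field": "member_id", "op": "required"},
    {"field": "claim_amount", "op": "min", "value": 0},
]

OPS = {
    "required": lambda v, _: v is not None,
    "min": lambda v, threshold: v is not None and v >= threshold,
}

def apply_rules(record, rules=RULES):
    """Return the list of rule descriptions the record violates."""
    violations = []
    for rule in rules:
        value = record.get(rule["field"])
        if not OPS[rule["op"]](value, rule.get("value")):
            violations.append(f'{rule["field"]}:{rule["op"]}')
    return violations
```

Changing a threshold or adding a rule is then a data change rather than a code change, which is the kind of configurability being described.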
So we've built our partnerships on our distribution hub, which makes it easy, so an example is if you have someone in the marketing team who really wants to analyze a particular population to reach out to them, and all they know is Tableau, that is great. It should be as simple as saying, look, what's the sliver of data you need to get your job done, how do you interact? So our distribution hub is really the part where users come in and interact with the data; you know, "we will meet you where you are" is the underlying principle, and that's how it operates. So, so I think on the first level of platform, yeah, a crowded space, everyone's fighting for that piece; the second part of it is enrichment, where we really start pulling away using our expertise; and then at the end of it you've got the distribution part, where, you know, you just want to make it available to users, and, you know, a lot of work has gone into getting this done, but that's how we work.
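The enrichment step described above, cleansing records and filling in the blanks from reference data sets so the data is shovel-ready for analytics, might be sketched like this. The field names and the reference table are invented for illustration and are not Abacus's API:

```python
# Hypothetical enrichment sketch: fill gaps in incoming records from a
# reference data set, making them "shovel-ready" for analytics.
# The fields and the reference table are invented for illustration.

REFERENCE_PROVIDERS = {
    "NPI-1001": {"specialty": "cardiology", "state": "MA"},
    "NPI-1002": {"specialty": "oncology", "state": "IL"},
}

def enrich(record, reference=REFERENCE_PROVIDERS):
    """Return a copy of record with missing fields filled from reference data."""
    enriched = dict(record)
    ref = reference.get(record.get("provider_id"), {})
    for field, value in ref.items():
        # Only fill blanks; never overwrite values the source already has.
        if enriched.get(field) is None:
            enriched[field] = value
    return enriched
```

In a real pipeline the same pattern would be repeated against multiple public and licensed reference sets, which is where a re-usable rules library starts to pay off.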
>> Well, just really quick in 1-2 sentences, would love to get your insight on Abacus's defining contribution to the future of cloud scale. >> Go ahead, Sumant. >> So as I see it, I think so part of it is we've got some of our clients who are payers and we've got them along their cloud journey trusting one of their key assets which is data, and letting us drive it. And this is really driven by domain expertise, a good understanding of data governance, and a great understanding of security, I mean, combining all of this, we've actually got our clients sitting and operating on, you know pretty significant cloud infrastructure successfully day in day out. So I think we've done our part as far as, you know helping folks along that journey. >> Yeah and just to close it out I would say it is speed, right, it is speed to deployment, you don't have to wait. You know, we have set up the infrastructure, set up the cloud and the ability to get things up and running is literally we think about it in weeks, and not months. >> Terrific, well, thank you both very much for insights, fantastic to have you on the show, really fascinating to hear about how Abacus is leveraging healthcare data expertise on its platform , to drive robust analytics, and of course, here we were joined by Abacus Insights, Krishna Kottapalli, the chief commercial officer, as well as Sumant Rao, the chief product officer, thank you again very much for your insights on this program and this session of the AWS Startup Showcase. (upbeat music)

Published Date : Jun 24 2021



Dr. Tim Wagner & Shruthi Rao | Cloud Native Insights


 

(upbeat electronic music) >> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation! >> Hi, I'm Stu Miniman, your host for Cloud Native Insights. When we launched this series, one of the things we wanted to talk about was that we're not just using cloud as a destination, but really enabling new ways of thinking, being able to use the innovations underneath the cloud, and that if you use services in the cloud, you're not necessarily locked into a solution or unable to move forward. And that's why I'm really excited to welcome to the program the co-founders of Vendia. First we have Dr. Tim Wagner, he is the co-founder and CEO of the company, as well as generally known in the industry as the father of Serverless from his work on AWS Lambda, and his co-founder, Shruthi Rao, she is the chief business officer at Vendia, and also came from AWS where she worked on blockchain solutions. Tim, Shruthi, thanks so much for joining us. >> Thanks for having us in here, Stu. Great to join the show. >> All right, so Shruthi, actually if we could start with you because before we get into Vendia, coming out of stealth, you know, really interesting technology space, you and Tim both learned a lot from working with customers in your previous jobs, why don't we start from you. Blockchain of course had a lot of learnings, a lot of things that people don't understand about what it is and what it isn't, so give us a little bit about what you've learned and how that led towards what you and Tim and the team are doing with Vendia. >> Yeah, absolutely, Stu! One, the most important thing that we've all heard of was this great gravitational pull towards blockchain in 2018 and 2019. Well, I was one of the founders and early adopters of blockchain from the Bitcoin and Ethereum space, all the way back from 2011 and onwards.
And at AWS I started the Amazon Managed Blockchain and launched Quantum Ledger Database, two services in the blockchain category. What I learned there was, no surprise, there was a gold rush to blockchain from many customers. I personally talked to over 1,092 customers when I ran Amazon Managed Blockchain for the last two years. And I found that customers were looking at solving this dispersed data problem. Most of my customers had invested in IoT and edge devices, and these devices were gathering massive amounts of data, and on the flip side they also had invested quite a bit of effort in AI and ML and analytics to crunch this data, give them intelligence. But guess what, this data existed in multiple parties, in multiple clouds, in multiple technology stacks, and they needed a mechanism to get this data from wherever it was into one place so they could use the AI, ML, analytics investment, and they wanted all of this to be done in real time, and they gravitated towards blockchain. But blockchain had quite a few limitations: it was not scalable, and it didn't work with the existing stack that you had. It forced enterprises to adopt this new technology and an entirely new type of infrastructure. It didn't work cross-cloud unless you hired expensive consultants or did it yourself, and it required these specialized developers. For all of these reasons, we've seen the majority of POCs just dying on the vine and not ever reaching their production potential. So, that is when I realized that the problem to be solved was not a trust problem; the problem was dispersed data in multiple clouds and multiple stacks, and sometimes even multiple parties. And that's when Tim and I started talking about how we could bring all of the nascent qualities of Lambda and Serverless and use all of the features of blockchain to build something together. And he has an interesting story on his own, right. >> Yeah.
Yeah, Shruthi, if I could, I'd like to get a little bit of that. So, first of all for our audience, if you're watching this on the minute, probably want to hit pause, you know, go search Tim, go watch a video, read his Medium post, about the past, present, and future of Serverless. But Tim, I'm excited. You and I have talked in the past, but finally getting you on theCUBE program. >> Yeah! >> You know, I've looked through my career, and my background is infrastructure, and the role of infrastructure we know is always just to support the applications and the data that run business, that's what is important! Even when you talk about cloud, it is the applications, you know, the code, and the data that are important. So, it's not that, you know, okay I've got near infinite compute capacity, it's the new things that I can do with it. That's a comment I heard in one of your sessions. You talked about one of the most fascinating things about Serverless was just the new creativity that it inspired people to do, and I loved it wasn't just unlocking developers to say, okay I have new ways to write things, but even people that weren't traditional coders, like lots of people in marketing that were like, "I can start with this and build something new." So, I guess the question I have for you is, you know we had this idea of Platform as a Service, or even when things like containers launched, it was, we were trying to get close to that atomic unit of the application, and often it was talked about, well, do I want it for portability? Is it for ease of use? So, you've been wrangling and looking at this (Tim laughing) from a lot of different ways. So, is that as a starting point, you know, what did you see the last few years with Lambda, and you know, help connect this up to where Shruthi just left off her bit of the story. >> Absolutely. 
You know, the great story, the great success of the cloud is this elimination of undifferentiated heavy lifting, you know, from getting rid of having to build out a data center, to all the complexity of managing hardware. And that first wave of cloud adoption was just phenomenally successful at that. But as you say, the real thing businesses wrestle with are applications, right? It's ultimately about the business solution, not the hardware and software on which it runs. So, the very first time I sat down with Andy Jassy to talk about what eventually become Lambda, you know, one of the things I said was, look, if we want to get 10x the number of people to come and, you know, and be in the cloud and be successful it has to be 10 times simpler than it is today. You know, if step one is hire an amazing team of distributed engineers to turn a server into a full tolerance, scalable, reliable business solution, now that's going to be fundamentally limiting. We have to find a way to put that in a box, give that capability, you know, to people, without having them go hire that and build that out in the first place. And so that kind of started this journey for, for compute, we're trying to solve the problem of making compute as easy to use as possible. You know, take some code, as you said, even if you're not a diehard programmer or backend engineer, maybe you're just a full-stack engineer who loves working on the front-end, but the backend isn't your focus, turn that into something that is as scalable, as robust, as secure as somebody who has spent their entire career working on that. And that was the promise of Serverless, you know, outside of the specifics of any one cloud. Now, the challenge of course when you talk to customers, you know, is that you always heard the same two considerations. One is, I love the idea of Lamdba, but it's AWS, maybe I have multiple departments or business partners, or need to kind of work on multiple clouds. 
The other challenge is: fantastic for compute, but what about data? You know, you've kind of left me with, you're giving me sort of half the solution, you've made my compute super easy to use, can you make my data equally easy to use? And so you know, obviously part of the genesis of Vendia is going and tackling those pieces of this, giving all that promise and ease of use of Serverless, now with a model for replicated state and data, and one that can cross accounts, machines, departments, clouds, companies, as easily as it scales on a single cloud today. >> Okay, so you covered quite a bit of ground there Tim, if you could just unpack that a little bit, because you're talking about state, cutting across environments. What is it that Vendia is bringing, how does that tie into solutions like, you know, Lambda as you mentioned, but other clouds or even potentially on-premises solutions? So, what is, you know, the IP, the code, the solution that Vendia's offering? >> Happy to! So, let's start with the customer problem here. The thing that every enterprise, every company, frankly, wrestles with is in the modern world they're producing more data than ever, IoT, digital journeys, you know, mobile, edge devices. More data coming in than ever before, at the same time, more data getting consumed than ever before with deep analytics, supply chain optimization, AI, ML. So, even more consumers of ever more data. The challenge, of course, is that data isn't always inside a company's four walls. In fact, we've heard 80% or more of that data actually lives outside of a company's control. So, step one to doing something like AI, ML, isn't even just picking a product or selecting a technology, it's getting all of your data back together again, so that's the problem that we set out to solve with Vendia, and, you know, that's kind of part of the genesis for the name here: Vendia comes from Venn diagram.
So, part of that need to bring code and data together across companies, across tech stacks, means the ability to solve some of these long-standing challenges. And we looked at the two sort of big movements out there, both of which, you know, we've obviously been involved in. One of them was Serverless, which has an amazing ability to scale, but is single account, single cloud, single company. The other one is blockchain and distributed ledgers, which manage to run across parties, across clouds, across tech stacks, but don't have a great mechanism for scalability; it's really a single-box deployment model, and obviously there are a lot of limitations with that. So, our technology, and kind of our insight and breakthrough here, was bringing those two things together by solving the problems in each of them with the best parts of the other. So, reimagine the blockchain as a cloud data implementation built entirely out of Serverless components that have all of the scale, the cost efficiencies, the high utilization, like all of the ease of deployment that something like Lambda has today, and at the same time, you know, bring state to Serverless. Give things like Lambda and the equivalents on other clouds a simple, easy, built-in model so that applications can have multicloud, multi-account state at all times, rather than turning that into a complicated DIY project. So, that was our insight here, you know, and frankly where a lot of the interesting technology for us is, in turning those centralized services, a centralized version of Serverless Compute or Serverless Database, into a multi-account, multicloud experience. And so that's where we spent a lot of time and energy trying to build something that gives customers a great experience. >> Yeah, so I've got plenty of background in customers that, you know, have the "information silos", if you will, so we know, when the unstructured data, you know so much of it is not searchable, I can't leverage it.
Shruthi, maybe it might make sense, you know, what would you say are some of the top things some of your early customers are saying? You know, I have this pain point that's pointing me in your direction, what was leading them to you? And how does the solution help them solve that problem? >> Yeah, absolutely! One of our design partners, our lead design partner, is this premier automotive company, and their end goal is to track car parts for warranty recall issues. So, they want to track every single part that goes into a particular car, there are about 30,000 to 35,000 parts in each of these cars, all the way from the manufacturing floor to when the car is sold, and when that particular part is eventually replaced, towards the end of the lifecycle of that part. So for this, they have put together a small test group of their partners: a couple of the parts manufacturers, their second-tier partners, the National Highway Safety Administration is part of this group, also a couple of dealers and service centers. Now, if you just look at this group of partners, you will see some of these parties have high-technology backgrounds, just like the auto manufacturers themselves or the part manufacturers, and low-IT-competency partners such as the service centers, for whom desktop PCs are literally the extent of their IT. Now, the majority of these are on multiple clouds. This particular auto customer is on AWS, a manufacturer is on Azure, another one is on GCP. Now, they all have to share these large files between each other, making sure that there is transparency and business rules are applied. For example, two partners who make the same parts or similar parts cannot see each other's data. Most of the participants cannot see PII data that is not applicable to them, only the service center can see that. The National Highway Safety Administration has read access, not write access.
A lot of that needed to be done, and their alternatives before they started using Vendia were either point-to-point APIs, which were very expensive, very cumbersome; they work for a finite, small set of parties, but they do not scale as and when you add more participants into this particular network. And the second option for them was blockchain, which they did use; they used Hyperledger Fabric, they used private Ethereum to see how this works, but the scalability, with private Ethereum, is about 14 to 15 transactions per second, and with Hyperledger Fabric it taps out at 100, or 150 on a good day, transaction throughput, so it's just not usable. All of these are always-on systems, they're not Serverless, so just provisioning capacity, our customers said, took them two to three weeks per participant. So, it's just not a scalable solution. With Vendia, what we delivered to them was this virtual data lake, where the sources of this data are on multiple clouds, on multiple accounts owned by multiple parties, but all of that data is shared in a virtual data lake with all of the permissions, all of the logging, all of the security, PII, and compliance. Now, this particular auto manufacturer and the National Highway Safety Administration can run their ML algorithms on it to gain intelligence, and start to understand patterns, so when certain parts go bad, or what's the propensity of a certain manufacturing unit to produce faulty parts, and so on, and so forth. This really shows you this concept of unstructured data being shared between parties that are not, you know, connected with each other, when there are data silos. But I'd love to follow this up with another example of, you know, democratization; democratization is very important to Vendia.
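To put the throughput ceilings Shruthi cites in perspective, here is a quick back-of-the-envelope sketch in Python. The one-million-event workload below is an invented illustration, not a figure from the interview; only the transactions-per-second numbers come from her remarks.

```python
# Rough wall-clock time to record part-update events at the throughput
# ceilings mentioned above: ~15 TPS for private Ethereum, up to ~150 TPS
# for Hyperledger Fabric on a good day. The event count is hypothetical.
EVENTS = 1_000_000

def hours_to_process(events: int, tps: float) -> float:
    """Hours needed to push `events` through a chain capped at `tps`."""
    return events / tps / 3600

print(f"private Ethereum @ 15 TPS:    {hours_to_process(EVENTS, 15):.1f} h")
print(f"Hyperledger Fabric @ 150 TPS: {hours_to_process(EVENTS, 150):.1f} h")
```

Even the optimistic 150 TPS ceiling leaves a million events taking hours of serialized writing, which is the scaling wall the interview is describing.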
When Tim launched Lambda and founded the Serverless movement as a whole at AWS, one very important thing happened: it lowered the barrier to entry for a new wave of businesses that could just experiment, try out new things, and if it failed, they could scrap it, if it worked, they could scale it out. And that was possible because of the entry point, because of the pay-per-use model, and the architecture itself, and our vision and mission for Vendia is that Vendia fuels the next generation of multi-party, connected, distributed applications. My second design partner is actually a non-profit in the animal welfare industry. Their mission is to make the United States no-kill for dogs and cats. And the number one reason for overpopulation of dogs and cats in the shelters is dogs and cats lost during natural disasters, like the hurricane season. And when that happens, let's say your dog gets lost and you want to find it, the ID or the chip-reading is not reliable, so they want to search through pictures. But we also know that if you look at a picture of a dog, four people can come up with four different breed names, and this particular non-profit has 2,500-plus partners across the U.S., and they're mostly low-to-no-IT-modality partners, some of them have higher IT competency, and a huge turnover because of volunteer employees. So, what we did for them was come up with a mechanism where they could connect all 2,500 of these participants very easily, in a very cost-effective way, and get all of the pictures of all of the dogs in all these repositories into one data lake, so they can run some kind of dog facial recognition algorithm on it and identify where my lost dog is in minutes, as opposed to the days it used to take before. So, you see a very large customer with very sophisticated IT competency use this, and also a non-profit being able to use this.
And they were both able to get to this outcome in days, not the months or years it would have taken with blockchain, just a few days, so we're very excited about that. >> Thank you so much for the examples. All right, Tim, before we get to the end, I wonder if you could take us under the hood a little bit here. My understanding, the solution that you talk about, it's universal apps, or what you call "unis" -- >> Tim: Unis? (laughs) >> I believe, so if I saw that right, give me a little bit of compare and contrast, if you will. Obviously there's been a lot of interest in what Kubernetes has been doing. We've been watching closely, you know there's connections between what Kubernetes is doing and Serverless with the Knative project. When I saw the first video talking about Vendia, you said, "We're serverless, and we're containerless underneath." So, help us understand, because at, you know, a super high level, some of the multicloud and making things very flexible sound very similar. So you know, how is Vendia different, and why do you feel your architecture helps solve this really challenging problem? >> Sure, sure, awesome! You know, look, one of the tenets that we had here was that things have to be as easy as possible for customers, and if you think about the way somebody walks up today to an existing database system, right? They say, "Look, I've got a schema, I know the shape of my data," and a few minutes later they can get a production database. Now it's single user, single cloud, single consumer there, but it's a very fast, simple process that doesn't require writing code, hiring a team, et cetera, and we wanted Vendia to work the same way. Somebody can walk up with a JSON schema, hand it to us, five minutes later they have a database, only now it's a multiparty database that's decentralized, so it runs across multiple platforms, multiple clouds, you know, multiple technology stacks instead of being single user.
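As a sketch of the "walk up with a JSON schema" flow Wagner describes: the schema below is purely hypothetical, loosely modeled on the car-parts example from earlier, and the tiny `validate` helper is a stand-in, not Vendia's actual API. The point is only how little it takes to describe a shared data model.

```python
import json

# Hypothetical data model for the car-parts tracking example: each record
# describes one part installed in one vehicle. Field names are illustrative.
part_schema = {
    "type": "object",
    "properties": {
        "partId":      {"type": "string"},
        "vin":         {"type": "string"},  # vehicle the part went into
        "supplier":    {"type": "string"},
        "installedAt": {"type": "string", "format": "date-time"},
    },
    "required": ["partId", "vin", "supplier"],
}

def validate(record: dict, schema: dict) -> bool:
    """Tiny hand-rolled check: required keys present with string values."""
    return all(
        key in record and isinstance(record[key], str)
        for key in schema["required"]
    )

record = {"partId": "BRK-1042", "vin": "1HGCM82633A004352",
          "supplier": "Acme Brakes"}
print(json.dumps(part_schema, indent=2))  # the whole "hand it to us" payload
print(validate(record, part_schema))      # True
```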
So, that's kind of goal one, is like make that as easy to use as possible. The other key tenet though is we don't want to be the least common denominator of the cloud. One of the challenges with saying everyone's going to deploy their own servers, they're going to run all their own software, they're going to build, you know, they're all going to co-deploy a Kubernetes cluster, one of the challenges with that is that, as Shruthi was saying, first, anyone for whom that's a challenge, if you don't have a whole IT department wrapped around you, that's a difficult proposition to get started on no matter how amazing that technology might be. The other challenge with it though is that it locks you out; sort of the inverse of a lock-in process, right, is the lock-out process. It locks you out of some of the best and brightest things the public cloud providers have come up with, and we wanted to empower customers, you know, to pick best of breed. Maybe they want to go use IBM Watson, maybe they want to use a database on Google, and at the same time they want to ingest IoT on AWS, and they want it all to work together, and want all of that to be seamless, not something where they have to recreate an experience over, and over, and over again on three different clouds. So, that was our goal here in producing this. What we designed as an architecture was decentralized data storage at the core of it. So, think about all the precepts you hear with blockchain, they're all there, they all just look different. So, we use a NoSQL database to store data so that we can scale that easily. We still have a consensus algorithm, only now it's a high-speed, serverless, cloud-function-based mechanism. You know, instead of smart contracts, you write things in a cloud function like Lambda instead, so no more learning Solidity, now you can use any language you want.
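Wagner's point that business logic moves from Solidity smart contracts into ordinary cloud functions can be illustrated with a Lambda-style handler. Only the `handler(event, context)` calling convention mirrors real AWS Lambda; the event shape and the validation rule are hypothetical, invented to echo the car-parts example, not Vendia's actual interface.

```python
# A Lambda-style handler standing in for what a smart contract would do on
# a blockchain: validate a proposed record before it joins the shared data
# set. The supplier allow-list and event fields are made-up illustrations.
APPROVED_SUPPLIERS = {"Acme Brakes", "Bolt & Co"}

def handler(event: dict, context=None) -> dict:
    part = event.get("part", {})
    accepted = (
        part.get("supplier") in APPROVED_SUPPLIERS
        and bool(part.get("partId"))
    )
    return {"accepted": accepted}

print(handler({"part": {"partId": "BRK-1042", "supplier": "Acme Brakes"}}))
# {'accepted': True}
```

Because this is just a function in an ordinary language, the "contract" can be unit-tested, versioned, and written in whatever language the team already uses, which is the contrast with Solidity being drawn above.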
So, we changed how we think about that architecture, but many of those ideas that people are really excited about with blockchain, its capabilities and the vision for the future, are still alive and well, they've just been implemented in a way that's far more practical and effective for the enterprise. >> All right, so what environments can I use today for your solution, Shruthi talked about customers spanning across some of the clouds, so what's available kind of today, what's on the roadmap in the future? Will this include beyond, you know, maybe the top five or six hyperscalers? Does it just require Serverless underneath? So, will things that are in a customer's own data center eventually be supported? >> Absolutely. So, what we're doing right now is having people sign up for our preview release, so in the next few weeks, we're going to start turning that on for early access to developers. The early access program will be multi-account, focused on AWS, and then end of summer, we'll be doing our GA release, which will be multicloud, so we'll actually be able to operate across multiple clouds, multiple cloud services, on different platforms. But even from day one, we'll have API support in there. So, if you've got a service, it could even be running on a mainframe, could be on-prem, if it's API-based you can still interact with the data, and still get the benefits of the system. So, developers, please start signing up, you can go find more information on vendia.net, and we're really looking forward to getting some of that early feedback and hearing more from the people that we're the most excited to have start building these projects. >> Excellent, what a great call to action to get the developers and users in there.
Shruthi, if you could just give us the last bit, you know, the thing that's been fascinating, Tim, when I look at the Serverless movement, you know, I've talked to some amazing companies that were two or three people (Tim laughing) out of their basement, and they created a business, and they're like, "Oh my gosh, I got VC funding," and it's usually sub-$10,000,000. So, I look at your team, I'd heard, Tim, you're the primary coder on the team. (Tim laughing) And when it comes to the seed funding, you know, compared to many startups, it's a small number. So, Shruthi, give us, if you could, a little bit of the speeds and feeds of the company, your funding, and any places that you're hiring. >> Yeah, we are definitely hiring, let me start from there! (Tim laughing) We're hiring for developers, and we are also hiring for solution architects, so please go to vendia.net, we have all the roles listed there, we would love to hear from you! And the second one, funding, yes. Tim is our main developer and solutions architect here, and look, the Serverless movement really helped quite a few companies, including us, to build this and bring this to market at record speed, and we're very thankful that Tim and AWS took that stand, you know, back in 2013, 2014, to bring this to market and democratize this. I think when we brought this new concept to our investors, they saw what this could be. It's not an easy concept to understand in the first wave, but when you understand the problem space, you see that the opportunity is pretty endless. And I'll say this for our investors, on behalf of our investors, that they saw a real founder-market fit between us.
We're literally the two people who have launched and run businesses for both Serverless and blockchain at scale, so that's what they thought was very attractive to them, and then look, it's Tim and I, and we're looking to hire 8 to 10 folks, and I think we have gotten to a space where we're making a meaningful difference to the world, and we would love for more people to join us, join this movement, and democratize this big dispersed-data problem and solve for this. And help us create more meaning from the data that our customers and companies worldwide are creating. We're very excited, and we're very thankful for all of our investors for being deeply committed to us and having conviction in us. >> Well, Shruthi and Tim, first of all, congratulations -- >> Thank you, thank you. >> Absolutely looking forward to, you know, watching the progress going forward. Thanks so much for joining us. >> Thank you, Stu, thank you. >> Thanks, Stu! >> All right, and definitely tune in to our regular conversations on Cloud Native Insights. I'm your host Stu Miniman, and looking forward to hearing more about your Cloud Native Insights! (upbeat electronic music)

Published Date : Jul 2 2020

Naveen Rao, Intel | AWS re:Invent 2019


 

>> Announcer: Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Welcome back to the Sands Convention Center in Las Vegas everybody, you're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, I'm here with my cohost Justin Warren, this is day one of our coverage of AWS re:Invent 2019, Naveen Rao here, he's the corporate vice president and general manager of artificial intelligence, AI products group at Intel, good to see you again, thanks for coming to theCUBE. >> Thanks for having me. >> Dave: You're very welcome, so what's going on with Intel and AI, give us the big picture. >> Yeah, I mean actually the very big picture is I think the world of computing is really shifting. The purpose of what a computer is made for is actually shifting, and I think from its very conception, from Alan Turing, the machine was really meant to be something that recapitulated intelligence, and we took sort of a divergent path where we built applications for productivity, but now we're actually coming back to that original intent, and I think that hits everything that Intel does, because we're a computing company, we supply computing to the world, so everything we do is actually impacted by AI, and will be in service of building better AI platforms, for intelligence at the edge, intelligence in the cloud, and everything in between. >> It's really come full circle, I mean, when I first started this industry, AI was the big hot topic, and really, Intel's ascendancy was around personal productivity, but now we're seeing machines replacing cognitive functions for humans, that has implications for society. But there's a whole new set of workloads that are emerging, and that's driving, presumably, different requirements, so what do you see as the sort of infrastructure requirements for those new workloads, what's Intel's point of view on that? 
Well, so maybe let's focus that on the cloud first. Any kind of machine learning algorithm typically has two phases to it, one is called training or learning, where we're really iterating over large data sets to fit model parameters. And once that's been done to the satisfaction of whatever performance metrics are relevant to your application, it's rolled out and deployed, and that phase is called inference. So these two are actually quite different in their requirements, in that inference is all about the best performance per watt, how much processing can I shove into a particular time and power budget? On the training side, it's much more about what kind of flexibility do I have for exploring different types of models, and training them very, very fast, because when this field kind of started taking off in 2013, 2014, typically training a model back then would take a month or so. Those models now take minutes to train, and the models have grown substantially in size, so we've still kind of gone back to a couple of weeks of training time, so anything we can do to reduce that is very important. >> And why the compression, is that because of just so much data? >> It's data, the sheer amount of data, the complexity of data, and the complexity of the models. So, a very broad or rough categorization of the complexity can be the number of parameters in a model. So, back in 2013, there were, call it 10 million, 20 million parameters, which was very large for a machine learning model. Now they're in the billions, one or two billion is sort of the state of the art. To give you bearings on that, the human brain is about a 300-to-500-trillion-parameter model, so we're still pretty far away from that. So we've got a long way to go.
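The two phases Rao describes can be seen in miniature with a toy model. The sketch below is a generic illustration in plain NumPy (nothing Intel- or framework-specific is assumed): training iterates over a data set to fit a parameter, while inference is a single cheap forward pass with that parameter frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x + noise. "Training" fits the one parameter w.
x = rng.normal(size=200)
y = 3.0 * x + 0.1 * rng.normal(size=200)

# --- Training phase: iterate over the data set to fit model parameters ---
w = 0.0
lr = 0.1
for _ in range(100):
    grad = np.mean(2 * (w * x - y) * x)  # d/dw of mean squared error
    w -= lr * grad

# --- Inference phase: parameters are frozen; just run the forward pass ---
def predict(x_new: float) -> float:
    return w * x_new

print(round(w, 2))  # w has converged near 3.0
```

Even in this tiny example the asymmetry Rao points at is visible: training loops over all the data many times, while inference is one multiply, which is why the two phases end up with such different hardware requirements.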
>> Yeah, so one of the things about these models is that once you've trained them, that then they do things, but understanding how they work, these are incredibly complex mathematical models, so are we at a point where we just don't understand how these machines actually work, or do we have a pretty good idea of, "No no no, when this model's trained to do this thing, "this is how it behaves"? >> Well, it really depends on what you mean by how much understanding we have, so I'll say at one extreme, we trust humans to do certain things, and we don't really understand what's happening in their brain. We trust that there's a process in place that has tested them enough. A neurosurgeon's cutting into your head, you say you know what, there's a system where that neurosurgeon probably had to go through a ton of training, be tested over and over again, and now we trust that he or she is doing the right thing. I think the same thing is happening in AI, some aspects we can bound and say, I have analytical methods on how I can measure performance. In other ways, other places, it's actually not so easy to measure the performance analytically, we have to actually do it empirically, which means we have data sets that we say, "Does it stand up to all the different tests?" One area we're seeing that in is autonomous driving. Autonomous driving, it's a bit of a black box, and the amount of situations one can incur on the road are almost limitless, so what we say is, for a 16 year old, we say "Go out and drive," and eventually you sort of learn it. Same thing is happening now for autonomous systems, we have these training data sets where we say, "Do you do the right thing in these scenarios?" And we say "Okay, we trust that you'll probably "do the right thing in the real world." 
>> But we know that Intel has partnered with AWS around autonomous driving with their DeepRacer project, and I believe on Thursday is the grand final, it's been running for, I think it was announced on theCUBE last year, and there's been a whole bunch of competitions running all year, basically training models that run on this Intel chip inside a little model car that drives around a race track, so speaking of empirical testing of whether or not it works, lap times give you a pretty good idea, so what have you learned from that experience, of having all of these people go out and learn how to use these ML models on a real live race car and race around a track? >> I think there's several things, I mean one thing is, when you turn loose a number of developers on a competitive thing, you get really interesting results, where people find creative ways to use the tools to try to win, so I always love that process, I think competition is how you push technology forward. On the tool side, it's actually more interesting to me, is that we had to come up with something that was adequately simple, so that a large number of people could get going on it quickly. You can't have somebody who spends a year just getting the basic infrastructure to work, so we had to put that in place. And really, I think that's still an iterative process, we're still learning what we can expose as knobs, what kind of areas of innovation we allow the user to explore, and where we sort of lock it down to make it easy to use. So I think that's the biggest learning we get from this, is how I can deploy AI in the real world, and what's really needed from a tool chain standpoint. >> Can you talk more specifically about what you guys each bring to the table with your collaboration with AWS? >> Yeah, AWS has been a great partner.
Obviously AWS has a huge ecosystem of developers, all kinds of different developers, I mean web developers are one sort of developer, database developers are another, AI developers are yet another, and we're kind of partnering together to empower that AI base. What we bring from a technological standpoint is of course the hardware, our CPUs, which are AI-ready now with a lot of software that we've been putting out in the open source. And then other tools like OpenVINO, which make it very easy to start using AI models on our hardware, and so we tie that into the infrastructure that AWS is building for something like DeepRacer, and then help build a community around it, an ecosystem around it of developers. >> I want to go back to the point you were making about the black box, AI, people are concerned about that, they're concerned about explainability. Do you feel like that's a function of just the newness that we'll eventually get over, and I mean I can think of so many examples in my life where I can't really explain how I know something, but I know it, and I trust it. Do you feel like it's sort of a tempest in a teapot? >> Yeah, I think it depends on what you're talking about, if you're talking about the traceability of a financial transaction, we kind of need that maybe for legal reasons, so even for humans we do that. You got to write down everything you did, why did you do this, why'd you do that, so we actually want traceability for humans, even. In other places, I think it is really about the newness. Do I really trust this thing, I don't know what it's doing. Trust comes with use, after a while it becomes pretty straightforward, I mean I think that's probably true for a cell phone, I remember the first smartphones coming out in the early 2000s, I didn't trust how they worked, I would never do a credit card transaction on 'em, these kind of things, now it's taken for granted. I've done it a million times, and I never had any problems, right?
>> It's the opposite in social media, most people. >> Maybe that's the opposite, let's not go down that path. >> I quite like Dr. Kate Darling's analogy from MIT lab, which is we already have AI, and we're quite used to them, they're called dogs. We don't fully understand how a dog makes a decision, and yet we use 'em every day. In a collaboration with humans, so a dog sort of replaces a particular job, but then again they don't, I don't particularly want to go and sniff things all day long. So having AI systems that can actually replace some of those jobs, actually, that's kind of great. >> Exactly, and think about it like this, if we can build systems that are tireless, and we can basically give 'em more power and they keep going, that's a big win for us. And actually, the dog analogy is great, because I think, at least my eventual goal as an AI researcher is to make the interface for intelligent agents to be like a dog, to train it like a dog, reinforce it for the behaviors you want and keep pushing it in new directions that way, as opposed to having to write code that's kind of esoteric. >> Can you talk about GANs, what are GANs, what's it stand for, what does it mean? >> Generative Adversarial Networks. What this means is that, you can kind of think of it as, two competing sides of solving a problem. So if I'm trying to make a fake picture of you, that makes it look like you have no hair, like me, you can see a Photoshop job, and you can kind of tell, that's not so great. So, one side is trying to make the picture, and the other side is trying to guess whether it's fake or not. We have two neural networks that are kind of working against each other, one's generating stuff, and the other one's saying, is it fake or not, and then eventually you keep improving each other, this one tells that one "No, I can tell," this one goes and tries something else, this one says "No, I can still tell."
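The back-and-forth Rao describes can be written down directly as the two standard GAN losses; this is the generic textbook form (with the common non-saturating generator loss), not anything specific to this conversation. Here `d_fake` is the discriminator's "this looks real" score for a generated sample, and the two losses pull on that same number in opposite directions.

```python
import math

def discriminator_loss(d_real: float, d_fake: float) -> float:
    """Discriminator wants real samples scored near 1 and fakes near 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake: float) -> float:
    """Generator (non-saturating form) wants its fakes scored near 1."""
    return -math.log(d_fake)

# As the generator gets better at fooling the discriminator (d_fake rises
# from 0.1 to 0.9), the generator's loss falls and the discriminator's rises:
print(generator_loss(0.1) > generator_loss(0.9))                    # True
print(discriminator_loss(0.9, 0.9) > discriminator_loss(0.9, 0.1))  # True
```

Training alternates gradient steps on these two losses, which is exactly the "two things kind of fighting each other" dynamic described here: improvement on one side raises the other side's loss until the discriminator can no longer tell.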
Once the one that's trying to tell, the discerning network, can't tell anymore, you've kind of built something that's really good, that's sort of the general principle here. So we basically have two things kind of fighting each other to get better and better at a particular task. >> Like deepfakes. >> I use that because it is relevant in this case, and that's kind of where it came from, is from GANs. >> All right, okay, and so wow, obviously relevant with 2020 coming up. I'm going to ask you, how far do you think we can take AI, two-part question: how far can we take AI in the near to mid term, let's talk in our lifetimes, and how far should we take it? Maybe you can address some of those thoughts. >> So how far can we take it, well, I think we often have the sci-fi narrative out there of building killer machines and this and that, I don't know that that's actually going to happen anytime soon, for several reasons, one is, we build machines for a purpose, they don't come from an embattled evolutionary past like we do, so their motivations are a little bit different, say. So that's one piece, they're really purpose-driven. Also, building something that's as general as a human or a dog is very hard, and we're not anywhere close to that. When I talked about the trillions of parameters that a human brain has, we might be able to get close to that from an engineering standpoint, but we're not really close to making those trillions of parameters work together in as coherent a way as a human brain does, or as efficiently; the human brain does that in 20 watts, to do it today would be multiple megawatts, so it's not really something that's easily found, just lying around. Now how far should we take it, I look at AI as a way to push humanity to the next level. Let me explain what that means a little bit. A simple equation I always sort of write down is, people are like "Radiologists aren't going to have a job."
No no no, what it means is one radiologist plus AI equals 100 radiologists. I can take that person's capabilities and scale it almost freely to millions of other people. It basically increases the accessibility of expertise, we can scale expertise, that's a good thing. It solves problems like the ones we have in healthcare today. All right, that's where we should be going with this. >> Well a good example would be, when, and probably part of the answer's today, when will machines make better diagnoses than doctors? I mean in some cases it probably exists today, but not broadly, but that's a good example, right? >> It is, it's a tool, though, so I look at it as more, giving a human doctor more data to make a better decision on. So, what AI really does for us is it doesn't limit the amount of data on which we can make decisions, as a human, all I can do is read so much, or hear so much, or touch so much, that's my limit of input. If I have an AI system out there listening to billions of observations, and actually presenting data in a form that I can make better decisions on, that's a win. It allows us to actually move science forward, to move accessibility of technologies forward. >> So keeping the context of that timeframe I said, someday in our lifetimes, however you want to define that, when do you think that, or do you think that driving your own car will become obsolete? >> I don't know that it'll ever be obsolete, and I'm a little bit biased on this, so I actually race cars. >> Me too, and I drive a stick, so. >> I kind of race them semi-professionally, so I don't want that to go away, but it's the same thing, we don't need to ride horses anymore, but we still do for fun, so I don't think it'll completely go away. Now, what I think will happen is that commutes will be changed, we will now use autonomous systems for that, and I think five, seven years from now, we will be using autonomy much more on prescribed routes.
It won't be that it completely replaces a human driver, even in that timeframe, because it's a very hard problem to solve, in a completely general sense. So, it's going to be a kind of gentle evolution over the next 20 to 30 years. >> Do you think that AI will change the manufacturing pendulum, and perhaps some of that would swing back to, in this country, anyway, on-shore manufacturing? >> Yeah, perhaps, I was in Taiwan a couple of months ago, and we're actually seeing that already, you're seeing things that maybe were much more labor-intensive before, because of economic constraints are becoming more mechanized using AI. AI as inspection, did this machine install this thing right, so you have an inspector tool and you have an AI machine building it, it's a little bit like a GAN, you can think of, right? So this is happening already, and I think that's one of the good parts of AI, is that it takes away those harsh conditions that humans had to be in before to build devices. >> Do you think AI will eventually make large retail stores go away? >> Well, I think as long as there are humans who want immediate satisfaction, I don't know that it'll completely go away. >> Some humans enjoy shopping. >> Naveen: Some people like browsing, yeah. >> Depends how fast you need to get it. And then, my last AI question, do you think banks, traditional banks will lose control of the payment systems as a result of things like machine intelligence? >> Yeah, I do think there are going to be some significant shifts there, we're already seeing many payment companies out there automate several aspects of this, and reducing the friction of moving money. 
Moving money between people, moving money between different types of assets, like stocks and Bitcoins and things like that, and I think AI, it's a critical component that people don't see, because it actually allows you to make sure that first you're doing a transaction that makes sense, when I move from this currency to that one, I have some sense of what's a real number. It's much harder to defraud, and that's a critical element to making these technologies work. So you need AI to actually make that happen. >> All right, we'll give you the last word, just maybe you want to talk a little bit about what we can expect, AI futures, or anything else you'd like to share. >> I think it's, we're at a really critical inflection point where we have something that works, basically, and we're going to scale it, scale it, scale it to bring on new capabilities. It's going to be really expensive for the next few years, but we're going to then throw more engineering at it and start bringing it down, so I start seeing this look a lot more like a brain, something where we can start having intelligence everywhere, at various levels, very low power, ubiquitous compute, and then very high power compute in the cloud, but bringing these intelligent capabilities everywhere. >> Naveen, great guest, thanks so much for coming on theCUBE. >> Thank you, thanks for having me. >> You're really welcome, all right, keep it right there everybody, we'll be back with our next guest, Dave Vellante for Justin Warren, you're watching theCUBE live from AWS re:Invent 2019. We'll be right back. (techno music)

Published Date : Dec 3 2019


Gou Rao, Portworx & Julio Tapia, Red Hat | KubeCon + CloudNativeCon 2019


 

>> Announcer: Live from San Diego, California, it's theCUBE. Covering KubeCon and CloudNativeCon brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome back to theCUBE here in San Diego for KubeCon CloudNativeCon, with John Troyer, I'm Stu Miniman, and happy to welcome to the program two guests, first time guests, I believe. Julio Tapia, who's the director of Cloud BU partner and community with Red Hat and Gou Rao, who's the founder and CEO at Portworx. Gentlemen, thanks so much for joining us. >> Thank you, happy to be here. >> Thanks for having us. >> Alright, let's start with community, ecosystem, it's a big theme we have here at the show. Tell us your main focus, what the team's doing here. >> Sure, so I'm part of a product team, we're responsible for OpenShift, OpenStack and Red Hat virtualization. And my responsibility is to build a partner ecosystem and to do our community development. On the partner front, we work with a lot of different partners. We work with ISVs, we work with OEMs, SIs, COD providers, TelCo partners. And my role is to help evangelize, to help on integrations, a lot of joint solutions, and then do a little bit of go to market as well. And the community side, it's to evangelize with upstream projects or customers with developers, and so forth. >> Alright, so, Gou, actually, it's not luck, but I had a chance to catch up with the Red Hat storage team. Back when I was on the vendor side I partnered with them. Red Hat doesn't sell gear, they're a software company. Everything open-source, and when it comes to data and storage, obviously they're working with partners. So put Portworx into the mix and tell us about the relationship and what you both do together. >> Sure, yeah, we're a Red Hat OpenShift partner. We've been working with them for quite some time now, partner with IBM as well. But yeah, Portworx, we focus on enabling cloud native storage, right? 
So we complement the OpenShift ecosystem. Essentially we enable people to run stateful services in OpenShift with a lot of agility and we bring DR backup functionality to OpenShift. I'm sure you're familiar with this, but, people, when they deploy OpenShift, they're running fleets of OpenShift clusters. So, multi-cluster management and data accessibility across clusters is a big topic. >> Yeah, if you could, I hear the term cloud native storage, what does that really mean? You know, back a few years ago, containers were stateless, I didn't have my persistent storage, it was super challenging as to how we deal with this. And now we have some options, but what is the goal of what we're doing here? >> There really is no notion of a stateless application, right? Especially when it comes to enterprise applications. What cloud native storage means is, to us at least, it signifies a couple of things. First of all, the consumer of storage is not a machine anymore, right? Typical storage systems are designed to provide storage to either a virtual machine or a hardware server. The consumer of storage is now a container that's running inside of a machine. And in fact, an application is never just one container, it's many containers running on different systems so it's a distributed problem. So what cloud native storage means is the following things. Providing container granular data services, being application aware, meaning that you're providing services to many containers that are running on different systems, and facilitating the data life cycle management of those applications from a Kubernetes way, right? The user experience is now driven through Kubernetes as opposed to a storage admin driving that functionality so it's these three things that make a platform cloud native. >> I want to dig into the operator concept for a little bit here, as it applies to storage. So, first, Operators. 
I first heard of this a couple years back with the CoreOS folks, who are now part of Red Hat, and it's a piece of technology that came into the Kubernetes ecosystem, seems to be very well adopted, they talked about it today in the keynote. And I'd love to hear a little bit more about the ecosystem. But first I want to figure out what it is, and in my head I didn't quite understand it and I'm like, well, okay, automation and life cycle, I get it. There's a bunch of things, Puppet and Chef and Ansible and all sorts of things there. There's also things that know about cloud, like Terraform, or CloudFormation, or Pulumi, all these sort of things here. But this seems like this is a framework around life cycle, it might be a little higher in the semantic level or knows a little bit more about what's going on inside Kubernetes. >> I'll just touch on this, so Operators, it's a way to codify business logic into the application, so how to install, how to manage the life cycle of the application on top of the Kubernetes cluster. So it's a way of automating. >> Right, but- >> And just to add to that, you mentioned Ansible, Salt, right? So, as engineers, we're always trying to make our lives easier. And so, infrastructure automation certainly is a concept here. What Operators do is elevate those same needs to more of an application construct level, right? So it's a piece of intelligent software that is watching the entire run-time of an application, as opposed to provisioning infrastructure and stepping out of the way. Think of it as a living being, it is constantly running and reacting to what the application is doing and what its needs are. So, on one hand you have automation that sets things up and then the job is done. Here the job is never done, you're sort of right there as a sidecar along with the application. >> Nice, but for any sort of life cycle or for any sort of project like this, you have to have code sharing and contributing, right? And so, Julio, can you tell us a little about that? >> What we do is we're obviously all in on Operators. And so we've invested a great deal in terms of documentation and training and workshops. We have certification programs, we're really helping create the ecosystem and facilitate the whole process. You may be familiar, we announced the Operator Framework a year ago; it includes Operator SDKs. So we have an Operator SDK for Helm, for Ansible, for Go. We have also announced the Operator Lifecycle Manager, which does the install, the maintenance and the whole life cycle management process. And then earlier this year we also introduced Operatorhub.io, which is a community of our Operators; we have about 150 Operators as part of that. >> How does the Operator Framework relate to OpenShift versus upstream Kubernetes? Is it an OpenShift and Red Hat specific thing, or? >> Yes, so, Operatorhub.io is a listing of Operators that includes community Operators. And then we also have certified Operators. The community Operators run on any Kubernetes instance. The certified Operators make sure that we run on OpenShift specifically. So that's kind of the distinction between those two. >> I remember a Red Hat summit where you talked about some bits. So, give us a little walk around the show, some of the highlights from Operators, the ecosystem, obviously, we've got Portworx here but there's a broad ecosystem. >> Yeah, so we have a huge huge ecosystem. The ISVs play a big part of this. So we've got Operators from database partners, security partners, app monitoring partners, storage partners. Yesterday we had an OpenShift Commons event, we showcased five of our big Operator partnerships: with Couchbase, with MongoDB, with Portworx obviously, with StorageOS and with Dynatrace. But we have a lot of partners in a lot of different areas that are creating these Operators, certifying them, and they're starting to get a lot of use with customers, so it's pretty exciting stuff.
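(Editor's note: the "job is never done" behavior described above, a controller that keeps watching and reconciling observed state toward desired state, can be sketched as a toy loop. This is an illustrative sketch only, not the Operator SDK or Operator Framework API; the data structures and names are invented for the example.)

```python
def reconcile(desired, observed):
    """Compute the actions that move observed state toward desired state."""
    actions = []
    for name, spec in desired.items():
        if observed.get(name) != spec:
            actions.append(("apply", name, spec))
    for name in list(observed):
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

def control_loop(desired, cluster, iterations=3):
    # Unlike one-shot provisioning (run once, job done), this loop keeps
    # re-checking and correcting drift: the controller "never finishes".
    for _ in range(iterations):
        for verb, name, spec in reconcile(desired, cluster):
            if verb == "apply":
                cluster[name] = spec
            else:
                cluster.pop(name, None)
        # a real controller would watch events or sleep between passes
    return cluster

desired = {"db": {"replicas": 3}, "cache": {"replicas": 2}}
cluster = {"db": {"replicas": 1}, "old-job": {"replicas": 1}}
print(control_loop(desired, cluster))
```

A real Operator runs the same reconcile step against the Kubernetes API in response to watch events, plus the day-two logic (upgrades, backups, failure handling) that the maturity levels discussed later describe.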
>> Gou, I'd love your viewpoint on this because of course, Portworx, good Red Hat partner, but you need to work with all the Kubernetes options out there, so, what's the importance of Operators to your business? >> Yeah, you know, OpenShift, obviously, is one of the leading platforms for Kubernetes out there, and the reason for that is the expectations it sets for an enterprise customer. It's that Red Hat experience behind it, and so the notion of having an Operator that's certified by Red Hat, with Red Hat going through the vetting process and making sure that all of the components it is recommending from its ecosystem, that you're putting onto OpenShift, are vetted, that whole process gives a whole new level of enterprise experience, so, for us, that's been really good, right? Working with Red Hat, going through the process with them and making sure that they are actually double clicking on everything we submit; it's real, we iterate with them. So the quality of the product that's put out there within OpenShift is very high. So, we've deployed these Operators now, the Operator that Portworx just announced, right? We have it running in customers' hands, so these are real end users; you'll be talking to Ford later on today. Harvard, for example, and so the level of automation that it has provided to them in their platform, it's quite high. >> I was kind of curious to shift maybe to the conference here, that you all have a long history with, both your organizations and both of you personally, in the Kubernetes world and cloud native world. We're here at KubeCon CloudNativeCon, North America, 2019. It's pretty big. And I see a lot of folks here, a lot of vendors, a lot of engineers, huge conference, 12,000 people. I mean, any perspective? >> So I've been at Red Hat a little over six years and I was at the very first KubeCon many years ago in San Francisco, I think we had about 200 people there. So this show has really grown over the years.
And we're obviously big supporters, we've participated in KubeCon in Shanghai and Barcelona, we're obviously here. We're just super excited about seeing the ecosystem and the whole community grow and expand, so, very exciting. >> Gou? >> Yeah, I mean, like Julio mentioned, right? So, all the way from DockerCon to where we are today and I think last year was 8000 people in Seattle and I think there're probably I've heard numbers like 12? So it's also equally interesting to see the maturity of the products around Kubernetes. And that level of consistency and lack of fracture, right? From mainstream Kubernetes to how it's being adopted in OpenShift, there's consistency across the different Kubernetes platforms. Also, it's very interesting to see how on-prem and public cloud Kubernetes are coexisting. Four years ago we were kind of worried on how that would turn out, but I think it's enabling those hybrid-cloud workloads and I think today in this KubeCon we see a lot of people talking about that and having interest around it. >> That's a really great point there. Julio, want to give you the final word, for people that aren't yet engaged in the ecosystem of Operators, how can they learn more and get involved? >> Yeah, so we're excited to work with everybody, our ecosystem includes customers, partners, contributors, so as long as you're all in on Operators, we're ready to help. We've got tools, we've documentation, we have workshops, we have training, we have certification programs. And we also can help you with go to market. We're very fortunate to have a huge customer footprint, and so for those partners that have solutions, databases, storage solutions, there's a lot of joint opportunities out there that we can participate in. So, really excited to do that. >> Julio, Gou, thank you so much, you have a final word, Gou? 
>> I was just going to say, so, to follow up on the Operator comment on the certification that Julio mentioned earlier, so the Operator that we have, we were able to achieve level five certification. The level five signifies just the amount of automation that's built into it, so the concept of having Operators help people deploy these complex applications, that's a very important concept in Kubernetes itself. So, glad to be a Red Hat partner. >> That's actually a really good point, we have an Operator maturity model, level one, two, three, four, five. Level one and two are more your installations and upgrades. But the really highly capable ones, the fours and fives, are really to be commended. And Portworx is one of those partners. So we're excited to be here with them. >> That is a powerful statement, we talk about the complexity and how many pieces are in there. Everybody's looking to really help cross that chasm, get the vast majority of people. We need to allow environments to have more automation, more simplicity, a story I heard loud and clear at AnsibleFest earlier this year and through the partner ecosystem. It's good to see progress, so congratulations and thank you both for joining us. >> Thank you, thank you. >> Thank you. >> All right, for John Troyer, I'm Stu Miniman, back with lots more here from KubeCon CloudNativeCon 2019, thanks for watching theCUBE. (electronic music)

Published Date : Nov 19 2019


Ranga Rao & Thomas Scheibe, Cisco | Cisco Live US 2019


 

>> Announcer: Live from San Diego, California, it's theCUBE covering Cisco Live US 2019, brought to you by Cisco and its ecosystem partners. >> Good morning, welcome to theCUBE's second day of coverage of Cisco Live 2019 from San Diego. I'm Lisa Martin, my co-host is Stu Miniman, and Stu and I have a couple of guests from Cisco's Data Center Networking group with us. To my right, Thomas Scheibe, VP of Product Management and to his right, Ranga Rao, Senior Director of Product Management. Guys, welcome to theCUBE. Welcome back, Ranga. >> Thank you very much. >> Thomas, great to have you. >> Happy to be here, yeah. >> So, here we are in the DevNet Zone, this is, Ranga you were saying this is probably one of the busiest locations within all of Cisco Live. It's been jammed since this morning. >> Stu: You got the ACI Takeover going on right now. >> It is! >> Yes, yes. >> So with that said, Thomas, with ACI, application-centric infrastructure, all these changes to the network, what's going on on day two? >> It's fun, it's actually, yeah, as you say, it's a little busier. We have a good set of news coming this week and yeah, you're going to hear more but let me give a little bit of a hint here as to what we're doing. We talked about how do we extend ACI into the cloud, or, as they say, anywhere. We did this two or three months ago with AWS. We're following up with the same for Azure, as well as having extension into the IBM Cloud, so that's really really exciting, opens up a lot of capabilities not just for the networking teams, how to extend into the cloud, we also have some interesting things around how we actually can start with ACI in the cloud first for the app developers, and then come back if you want to deploy either in the cloud or on prem. So, really an extension of what is doable for the networking team, as well as actually for the app teams. >> So, any ACI, any platform, any location, any workload? Any hypervisor, any container platform you want, yeah. 
Flexibility, that's what this is about. So, Ranga, we spoke with you earlier this year at one of your partner shows, talking about all the latest, you know, AI, cognitive learning and all of those pieces there. Partners are obviously a big piece of the ACI story here. >> Thomas: Yeah. >> Thomas was giving a little talk about some of the cloud providers, what more can you share about what's happening with how your product integrates with a vast ecosystem? >> Absolutely, when we built ACI, we sort of anticipated this moment, this crowd in the DevNet area that we are seeing, right? Like, people shifting from a very CLI focused approach to a developer focused, and integrated solution-focused approach. So we built ACI as an open platform. We have 65 plus partners, with new integrations coming on like every so often. Just this week at Cisco Live, we are launching an integration with F5. F5 is building, has built, an app for the Cisco ACI App Center, and a whole number of tools and integrations for developers. We have essentially built integrations with Ansible, Terraform, and new Python modules, so these are all exciting new things coming at Cisco Live. >> So, when you guys talk with customers, being in product management, I know you talk to customers all the time. There's presumably a very bidirectional, symbiotic relationship. When you're talking with customers, Thomas, what are some of the values that they're looking for ACI to help them deliver, especially as it relates to being able to get more value out of the data that they've got? >> Right, so there are a couple of things that are probably standing out. One is, if you turn to the networking team, it's all around network automation and segmentation. Those are the two biggest things everybody's after. Particularly as the data is more distributed, it's become harder and harder to do those all manually.
You want to automate your day one activity as well as you want to make sure you can enforce segmentation of the data, in the cloud and on prem, all over the place. So these are the two big ones that really every network operations team is after. And then the second piece that we see obviously more and more is, what about day two? Give me better visibility, in particular, as they say, if the data is so distributed, get me better visibility. What is going on? And then be able to tie this back to the user, which is the application team. Is there an impact on the application or not? And so there are a lot of interesting tools that we have and we're going to demo those all here, that is available for the networking team. The other piece, really, and you asked for the values, as I said, is the app team, right? What happens today is, if an app team is developing, it's typically then handed over to production. This is where this friction happens. How long does it take to go from here to here? If I can shorten that one and just take the blueprints out of the app development process, and map it directly to the automation capability of ACI, I can shorten the cycle to deploy, and so that's a tremendous value that we do see from customers. >> Great, lots of discussion in the keynote about the ever changing architectures that are happening here. Give us the update, you know? We've been down this path for ACI for a few years now, but where are your customers at? You know, what are the new things that are causing them challenges and opportunities, Thomas? >> Yeah, instead of ever changing I would probably say ever expanding, but you're absolutely right, because what we saw when we started this off was all around how do I automate my data center? How do I get a cloud experience in my data center?
What we see changing, and what I find is driven by this whole app refactoring process, that customers want to deploy apps maybe in the cloud, maybe develop in the cloud and so they need an extension to their automated data center into the cloud, and so really what you see from us is an expansion of that ACI concept, to Ranga's point, we actually really didn't change. We're just extending it to container development platforms, to different cloud environments, but it's the same idea, how do I automate the end to end network reach as well as these segmentations? >> Ranga, maybe can you expand on some of the automated piece of it, even, you know, I look at one of the things that jumped out at me this week is there's some changes to the CCIE program. It's not just, okay, I've done it, and I do my test. It's, well, we understand that things are changing year to year, and therefor how I get my certification, How I keep up on these, it is going to change. Where does automation play in all of this? >> Absolutely, I mean, when you think about automation there are two key parts to it. One is the automation that happens within the fabric that the controller manages and there's a lot of that. To extend on what Thomas was saying, right, in terms of how quickly the fabric can be brought up, how quickly applications can be deployed on the fabric and so on. Beyond that, there's automation that we have built leveraging the modern devo apps and CICD tools that are very popular among the developer community. Like I said, we have built integrations with Ansible, Terraform and so on, but we have also made rich APIs available across our platform. Every piece of information that the controller or the switches have is very much accessible for developers. That's really a back breaking approach to networking where developers have access to everything and can program to the network. So I think that's where the world is going, and that's what we plan to support with automation. 
>> Let me comment on this, because there's an interesting piece where we did, right? We have this fabric controller called the APIC. It's actually an App posting platform as well and so where we're actually taking advantage of that, everything is code in ACI, and you can write as a partner customer, apps, right on top, and so like the F5 integration that we have done is literally an F5-written and app residing there, right? And so it's really, really really flexible to build workflows around what you want to do on any infrastructure, for customers themself, for any partner themself. >> Yeah, it's an interesting piece, cause when you think, you know, I think back at my career. How much did the network really, the network architect think about the applications? Like, oh, okay, how much throughput and sure, I need it to go from north to south to east, west. Or, oh, wait, this thing needed some extra buffer credits, but usually, you know, the business owner, application owner and the network people, they were throwing things over the wall between each other and tweaking some dials here. Now when I look around this show, it's we're talking about building applications at the core of it and it's happening together. Can you speak a little bit to some of the activities going around, and that trend? >> No, absolutely, it's actually exciting. I actually, because my background is I did programming long long time back, and it's actually- >> Back when you called it programming, not coding? >> Correct. (laughing) it was programming. It is actually exciting to see, and I can tell you over the fast four, five years, when we run these techtorials we ask, cause, like architecture and programming, how many people are interested programming? And it used to be, I dunno, 10%. Now it's literally 60 to 70% of the people in the room are saying we're using automation frameworks like Ansible, and they actually see what we're doing and the value and they want to learn more. 
So there's a significant shift in terms of what people expect, what they want to do as a network infrastructure, versus what it was in the past. It's just a reflection of, as I said, the agility that is needed out of the infrastructure, and how do we react to what the developers, the users, want to do, to put the apps on, so... >> In the spirit of Cisco's "bridge to possible," which was the Barcelona theme, is this a bridge to IT and business working better together? >> Absolutely. I mean, the way... I dunno whether I can say much, but absolutely: how do I bridge, we called it initially, how do I bridge between what you called out, the networking and application team? It's the bridge to possible. It's not like, oh, it's your problem, it's my problem. We can do it together, or these two teams can do it together, absolutely. That's actually a very good reference. >> To add to that, when we were in 2012 thinking about what should ACI be, everyone in the industry was somehow thinking that all the network engineers will magically become programmers, right? So programmability is a big part of what the network needs, but also being aware of the application, and being able to respond to the right needs of the application at the right moment, is a pretty big thing, and that's what we have built with ACI, with the first-class support for programmability. >> And the programmability that we're seeing and hearing about, Ranga, how is that a differentiator for Cisco? >> So, I think, first of all, the network, we have always believed, is the nervous system of the enterprise, so a lot of really interesting information goes through the network. So unlocking the value of the network for these different use cases is what's made possible with the programmability approaches that we have taken, right? The only reason why we have 65-plus partners programming to our platform is because we have these open APIs. 
We have a ton of channel partners using the open APIs to build apps and to, like, support various different use cases for our customers, with ad hoc automation or even using some of the automation frameworks. So it has really evolved the network from being CLI-centric to being solution- and programmability-centric. >> Maybe one point to make, since he said open APIs, and I can't overemphasize this: we're truly open APIs, right? Because sometimes there's, not naming names, but people saying, oh, you have to have a certification to use these APIs. That is not the case for Cisco APIs on ACI. They are open. Everybody, customers, partners, quite frankly even competitors, could use those to program. We're standing behind our APIs. They can be used as-is. >> So it is quite a big change. I mean, people know, historically, Cisco, it's like, well, Cisco solves customer problems, and then they would drive it through the standards. Here, you know, we've watched the ascendancy of the DevNet group, and, you know, hundreds of thousands of people now helping to build code. It's the API economy. So, you know, very much it's not the Cisco I thought about a generation ago. >> Thank you. You know, we take this as a compliment. We're actually really excited to see how much development is possible by opening this platform up with APIs, and I think, somebody else said this, so it's not all mine: the more APIs you have, the easier it is to integrate, the faster we jointly develop and actually achieve what we want. So that's the bridge. >> And the beautiful thing about APIs is customers and other developers learn things that we wouldn't have, and we learn. So that's another way of unlocking innovation for our customers, and we are seeing a lot of that, you know? >> So when we talk with any customer, any business, we always talk about speed, scale, speed to innovation. 
With the wave of connectivity, the expansion of 5G, Wi-Fi 6, the proliferation of mobile data that's going to be traversing the networks the next few years. It's going to be video. How is an application-centric infrastructure going to allow customers to take advantage of the demands on the network, that need for speed, so that your customers can be as competitive as they need to be? >> Let me try to kind of boil down to the essence there. What's really going on: you have all these different applications, as you point out, all of these users and endpoints, and what you want to do is have an ability to correlate between what the user wants to run on the infrastructure and how the infrastructure has to behave. And then also, you want to correlate back, as to infrastructure issues, who is impacted? And really, what ACI was about is not rebuilding applications. It was about providing this glue, this bridge, between what's going on in the infrastructure and what the user experience is. And if I can do this, it becomes so much more efficient, and it's so much easier to roll out all these new applications on an ACI infrastructure. >> Exciting stuff, guys. Thomas, Ranga, thank you for joining Stu and me on theCUBE today. Lots of exciting stuff. We'll be listening for those announcements that you said are coming out later today. >> Yeah, okay. You will see them. >> All right, excellent. Thanks, guys, we appreciate your time. >> Thanks, same here, appreciate it. >> Thank you. >> Our pleasure. For Stu Miniman, I am Lisa Martin. You're watching theCUBE from our second day of coverage of Cisco Live San Diego. Thanks for watching. (energetic theme music)

Published Date : Jun 11 2019


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Thomas | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Thomas Scheibe | PERSON | 0.99+
Stu | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
2012 | DATE | 0.99+
Ranga Rao | PERSON | 0.99+
Ranga | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
San Diego | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
60 | QUANTITY | 0.99+
10% | QUANTITY | 0.99+
second piece | QUANTITY | 0.99+
San Diego, California | LOCATION | 0.99+
65 plus partners | QUANTITY | 0.99+
two | DATE | 0.99+
Python | TITLE | 0.99+
theCUBE | ORGANIZATION | 0.99+
two teams | QUANTITY | 0.99+
two key parts | QUANTITY | 0.99+
F5 | TITLE | 0.99+
One | QUANTITY | 0.99+
today | DATE | 0.99+
ACI | ORGANIZATION | 0.99+
second day | QUANTITY | 0.99+
this week | DATE | 0.98+
four | QUANTITY | 0.98+
five years | QUANTITY | 0.98+
70% | QUANTITY | 0.98+
three months ago | DATE | 0.97+
Barcelona | LOCATION | 0.97+
this morning | DATE | 0.97+
DevNet | ORGANIZATION | 0.97+
one activity | QUANTITY | 0.95+
one point | QUANTITY | 0.95+
one | QUANTITY | 0.95+
day two | QUANTITY | 0.95+
Ansible | ORGANIZATION | 0.93+
Terraform | ORGANIZATION | 0.93+
earlier this year | DATE | 0.92+
two biggest things | QUANTITY | 0.92+
Azure | TITLE | 0.91+
hundreds of thousands of people | QUANTITY | 0.91+
ACI App Center | TITLE | 0.89+
Ranga | ORGANIZATION | 0.89+
Cisco Live | ORGANIZATION | 0.89+
later today | DATE | 0.87+
two big ones | QUANTITY | 0.86+
first | QUANTITY | 0.86+

Ranga Rao, Cisco & Dave Link, ScienceLogic | CUBEConversation, May 2019


 

>> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hello everyone, welcome to this CUBE Conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We are in the CUBE Studios. We're here with Ranga Rao, Senior Director of Product Management, Cisco Networking group, and Dave Link, the CEO of ScienceLogic. Guys, great to see you. Thanks for coming in. >> Glad to be here. >> So you guys had a great event, by the way, the symposium in DC. A lot of momentum. >> Yeah, it was fun to watch. >> I was there. Eight videos up on YouTube. But you guys are a classic partnership here with Cisco. Talk about the relationship. You guys are meeting in the channel, got a lot of joint customers, a lot of innovation. Talk about the relationship between ScienceLogic and Cisco. >> Cisco and ScienceLogic have been working together because our customers demand us to work together, right? So for more than 10 years ScienceLogic has been a strong partner for Cisco, working across various different business units. When we started working on ACI, which is application centric infrastructure, which is the product that my group works on, we thought ScienceLogic was a perfect partner to work with. And in fact some of our joint customers, including Cisco IT, which is a customer of ScienceLogic too, came to the table and said we need an integration with ScienceLogic. So customers are a huge part of the genesis of the partnership, and that's what keeps us going together. And the fact that we have such strong synergies from a technology perspective makes it really easy for us to collaborate, and the fact that we have both open platforms with strong APIs makes it really easy for us to collaborate. >> And this is where we're here to do the chalk tracks around, you know, these APIs. So this abstraction software layer, ACI, has kind of gone to the next level. We covered that at Cisco Live. Dave, we talked before about your value proposition. You're on the front lines. Cisco obviously has this new programmability model where their goal is to leverage the value of the network, abstract away the complexities, and allow people to get more value out of it. How is that working, and how do you guys tie into Cisco? >> It's really an ecosystem of technologies and partnerships that deliver outcomes for customers. Cisco's advancing technology so fast, we've seen an innovation sprint from Cisco. It's actually causing us to sprint with them on behalf of their customers. But ultimately we've had to introduce deep integrated monitoring across all the fabrics that they support for software-defined, and that includes visibility into the ACI fabrics, but it also goes through into the virtual machines, the storage layers, the operating systems, the application layer. So when you pull all of that together, that's a day two challenge for the enterprise, to make sure they're delivering outcomes to the customer that are above expectations. Recently we supported multi-site and multi-pod ACI fabric. We're working on ACI Anywhere in the cloud. Really, Cisco's extending the data center hyper-converged solutions to deliver value propositions no matter where the applications live. So that's a huge step forward, and then it causes operational initiatives to say, how are we going to solve that problem for our customers no matter where the application lives? So that's really where we're focused: helping solve problems on that day two side, making all these technologies come together to deliver a great outcome for the end user. >> And so they're enabling you, with ACI and hyper-converged, to go out and do your thing. >> Yes. So we instrument all those different, really abstracted components, because what we have with container management, with software-defined, is abstraction on top of core route, switch, server, and hypervisor technologies that bring it together in an intuitive way. Ultimately what we've seen from the enterprise and service providers is they really want infrastructure delivered as a service, and that's really where Cisco's headed, helping make that a reality with these products. We just help bring them together with the instrumentation analytics to operate them as one system. >> You know, Ranga, this is a great example of what we're seeing in this modern era with the data center: on-premises modernization, growth of the cloud, the advent of, you know, real hybrid cloud and private cloud as well as public cloud. You guys are in a good position, so I want to kind of dig into some talk tracks. One, you mentioned day two operations. I've heard that term kicked around before, because this kind of speaks to this modernization in IT operations, you know, enabling an environment for network, compute, storage to work seamlessly. This is the real deal. What is day two operations? Can you define what that is? >> Yeah. So our customers essentially go through the journey of building a network for some purpose: to deliver an application or a service to their end users. So we think of the process of them building the network as day zero; configuring the network for a particular operator, a particular service, as day one; and everything that they deal with after that, which is a lot of complexity, which is where they spend 80% of their time and 90% of their budget, as day two operations, which is a very complex domain. So this is the area that we have been focusing on within our business unit, to make our customers' lives easier with products that essentially solve some of these problems, and collaborating with partners like ScienceLogic to make the operations of our customers much easier. An important part of day two operations is making sure that we provide the right kind of abstractions for our customers. Initially, customers used to configure switches on a switch-by-switch basis using command line interfaces with obscure commands. What we have done within the data center, as of like 2014, is brought in the application centric abstraction, so customers can configure the network in the language of the application, which is the intent. >> Interesting. You know, I'm old enough to remember those days of standing those networks up, day zero and day one. But I think day two has become really the new environment, because day two operations was simply, you know, make sure the lights are on, provision the switches, top of racks, all that stuff that went on, and then you managed it. You had your storage administrator; all these things were kind of static, perimeter-based security, all that. Now that is kind of thrown away. So I think day two is almost like the reality of the situation, because you now have microservices, you've got apps, and DevOps demanding to have the agility and programmability of the network. So I've got to ask: if that's true, and you've got cloud over the top happening, this means that the software has to be really rock-solid, because it's not getting less complex, it's getting more complex. So where does ScienceLogic fit into day two? >> We've been focused on all these different technologies you bring together. So from an intent-based perspective, Cisco's been really focused on intent-based solutions, but that lines up to a business service. The business service is made up of a lot of different technologies that can come from many different locations to deliver an application to you where you're super satisfied with the outcome. It's delivering productivity to you, all the great things that you're hoping to experience when you interact with an application. But behind the scenes there's a whole myriad of technologies that we instrument from a fault, configuration, performance analytics, and really an analysis perspective, to see all these multivariate data streams coming together in a hub where we can analyze them and understand the relationships, the context of how all of those data feeds come together to enable a service. So if we know that service view, again, no matter where the service is coming from, and Cisco is now supporting ACI Anywhere, so that service could be sitting in a lot of different places today, and we're seeing more and more hybrid applications, and I think that's around for a long time to stay, for good reasons: security, compliance, and other reasons. You've got to bring all that together and understand, real time, the operational viewpoint of how it is now, and more so that proactive insight, to know, if you have an anomaly across any one of these performance variants, how that may impact the service. So is it going to impact the service, and really help operations stay proactive. >> I think that's where the DevOps focus is right now. Look at the evolution of DevOps. Gene Kim was on theCUBE, said 3% of enterprises have adopted it. Certainly there are the early adopters, we all know who they are, they're DevOps cloud native from day one, but really the adoption of DevOps is not yet mainstream. It's getting there. But you're speaking to day two operations as kind of like operations. You mentioned developers, the apps that need to be built, require all this infrastructure programmability. This is where, I guess, you need instrumentation, so you need IT ops, and you've got to have programmability of the network. But everyone's talking about automation, so to me it sounds like there's an automation story in here. If you've got to instrument everything, you've got to move beyond command line and configuring. How does the automation fit in? >> Yeah, absolutely. First of all, within the ACI fabric we have a controller-based approach, so there's a single place of managing the entire infrastructure. Today we have customers who use two-hundred-plus physical switches managed by a single point. That's a huge amount of automation for provisioning the network. From the perspective of managing the network, the controller continuously looks at what's going on, and essentially we have a product called Network Assurance Engine, which is a second pair of eyes, which will tell you proactively if there are problems in the network, right? But a broader automation is needed, where you can actually look at information from various different silos, because the network, as important as we at Cisco think it is, is one part of the whole puzzle, right? Information comes from many different places, so there's a platform that's needed where people can funnel in pieces of information from various different places and analyze those pieces of information, figure out trends, find the things that are of interest to them, and operate in a data-driven fashion. >> I want to get your thoughts on this next talk track, around the impact of the cloud. Because if this happens, and the automation is pretty much agreed upon in the industry, therefore we've got to automate things that are repetitive, mundane tasks, and certainly the network has a lot of command line stuff that can be automated away; value will shift to other places. But with the impact of cloud operations, the operational side of the data center is looking more and more cloud-like. So in a way, the debate of moving to the private cloud versus on-premises goes away, and it becomes more of a cloud operations story: on-premises, multi-cloud, and public clouds is kind of a new system. This is the operational shift. This is where all the action is. Talk about your perspective on this, because it's not as simple as saying lift and shift and moving into the cloud. It's: I want cloud-like economics, I want cloud-like elasticity, I want all that benefit on premises. That's day two, in my opinion. Would you agree? >> I do, and I think that's sometimes lost on the industry: that we have a lot of clouds that we have to serve, and for good purpose they're going to live in different places. But back to the earlier comment, you've got to then pull information into a data hub, I'll just call it, an architecture of data where you've got it from multiple sources, whether it's clouds or private hyper-converged, the wireless to the end user, all these different layers that are being abstracted. We've got to really understand how that relates to a user experience. So when we think about what end results we're trying to achieve, we're trying to be proactive. So among the things that we're working on from a vision perspective: instead of thinking about waiting for a system across any one of these tiers to have a fault, where it tells you "I've got a problem, here's a trap, here's a log, I've experienced this problem," we really want to do a lot more on the front end, the performance analytics, the anomaly detection, across all these multivariate metrics that mean this kind of performance health, a performance score. Cisco has been investing heavily here. We have as well, jointly, for some of our customers. What I'd like to see in the future, our vision, is that you rely less on fault management and more on the proactive analytics side, so that you understand anomalous behavior and how that could impact your experience as an end user, and fix it through automation before there is a problem. So that's a very different thought. I'd love to say our industry should, in the future, worry less about event correlation and more about predictive behavior. So that's where we're spending a lot. >> So wherever the faults are, look for the goodness too. >> But you have to have all these data streams, and you have to understand how they contextually relate to one another, to make those important decisions and recommendations. >> Well, you know, I've always said this on theCUBE: in this world of digital you can instrument everything, so soon it's going to be a matter of time before you're seeing everything that's happening. But knowing what to look at, that's kind of what you're getting at. Hey, Ranga, talk about your perspective, because again, one of the things that Dave Vellante and I debate all the time on theCUBE is this cloud conversation, because my opinion is it's one big distributed architecture, a second operating system. It's all the cloud. They're all edges, nodes, and arcs on a distributed system. The data center certainly isn't going away, but if everything is a network connection, well, that's the edge, your data center. You're in the business of networking, right? What's your take on all this? Because, you know, if it's cloud operations, it's a shift from the old IT to the new IT. What's your perspective on this? >> So the moniker that we have been using this year is that there's nothing centered about the data centers. Like you said, there are workloads that reside in many different environments, including the cloud. So customers are demanding consistent operations and consistent management capabilities across these many different environments, right? So you're right: the data center itself is turning out to be more like a cloud, and we have even seen large cloud providers offer solutions that sit within a customer's data center, right? So that's one area in which the world's evolving. Another area is in terms of all the tools that are coming together to solve some of the operational problems, to be more predictive and more proactive. >> Yeah, you know, I like to draw horns sometimes. We coined the term private cloud years ago, and everyone was throwing hate at it, you know: I don't know what this private cloud nonsense is. But that's what's happening: there's a private cloud, there's a hybrid cloud, multiple clouds. You have public cloud, and again you're going to have a multi-purpose, pick-the-right-cloud-for-the-workload kind of environment going on, kind of like the way the tools business would. But it's still platforms. So, guys, thanks for coming in and sharing your insights. I really appreciate that. Before we go, take a minute to get the plug in for what you guys are working on. Give the company update. What's going on? You're hiring, revenues are up, what's happening? Give us a state of ScienceLogic. >> So we had a great first quarter, the best first quarter in the history of the company. The health of the business is good. I think the underlying theme is the transformation of infrastructure is causing a lot of people to rethink the monitoring tooling, as in, how do we need to manage in this new operating environment? You mentioned DevOps. I think the real key there is the developer really wants to have the application be infrastructure-aware, and he needs good information coming not from 50 places but from a trusted place, where he can make sure the application knows how all the infrastructure that's supporting it, no matter where it is, is behaving. And that's really the wind behind the sails driving our business. We grew quarter-over-quarter sequentially, with our subscription over a hundred percent in Q1, so we're really thrilled with where the business is headed, excited about the momentum. And this is a really important partnership for us, because everybody uses Cisco. All of our enterprise, service provider, and government customers: Cisco is embedded in virtually every customer that we work with. So we have to have the best support, kind of that thought leadership of support, for our customers, for them to entrust management of those core applications to our platform. >> Ranga, give us a quick plug for the data center networking group. What's happening there? What are the hot items? Give the plug quickly. >> So, very quickly, I think the journey that we have been on is ACI Anywhere: to take ACI and its management and operations paradigm to many different environments. We introduced support for AWS earlier this year. We are working on support for Azure, and soon we'll have support for Google public clouds. In all these environments we want our customers to have a consistent experience, and the way we get that is through solutions, working with partners, where we offer consistent solutions across all these environments for our customers, and working with ScienceLogic as a very important partner to solve problems for our joint customers. >> And you guys have always had a great channel, great ecosystem. It's not new for you to partner. >> Yeah, we have open APIs, an open platform, 65-plus partners that we work with. So, all customer-focused. >> Well, let me put you on the spot with one last question, since I've got you here, because you're a guru in networking and you've been around the block, you've seen the different waves. What's the biggest wake-up call that customers are having with respect to the old way of doing networking and the new way? Because clearly everyone has come to the realization that the perimeter-based security model and static networking have to be more dynamic. What's the big wake-up call that you think customers are seeing now with this new modern era? >> I think customers are realizing more and more how important technology is as part of their business. Sometimes it even drives the business and helps customers make decisions on what's the right path to take for their business. So applications become really important, and the nerve center, which is the network that supports the applications, becomes really important. So customers are demanding us to build the best network possible to support this modern world that's continuously evolving. >> So, Dave, take a stab at that customer wake-up call. What's your perspective, from your experience over the years? >> You can't use tools that were built 20 years ago to continue operating global networks. So we see a lot of the industry, it's about a ten-billion-dollar total addressable market, changing over, because the market fit of the old tools that people have relied upon for many years isn't solving modern problems. >> Okay, guys, thanks for the insight. Appreciate it, and good to see the partnership doing well. Thanks for coming into the CUBE Studio. We have Ranga Rao, Senior Director of Product Management, Cisco Networking group, and David Link, CEO of ScienceLogic, here for a CUBE Conversation. I'm John Furrier. Thanks for watching. [Music]
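To make the "proactive analytics" idea Dave describes above concrete for readers: the toy sketch below flags anomalous samples in a single performance metric with a simple z-score test. Real monitoring platforms score many correlated metrics with far richer models; the latency numbers here are invented for illustration.

```python
from statistics import mean, stdev

def zscore_anomalies(samples, threshold=2.0):
    """Return the samples whose z-score magnitude exceeds the threshold."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []  # a flat series has no outliers
    return [x for x in samples if abs(x - mu) / sigma > threshold]

# Hypothetical request-latency samples in milliseconds, one clear outlier.
latency = [12, 11, 13, 12, 11, 12, 13, 12, 95]
print(zscore_anomalies(latency))  # → [95]
```

The inversion the interview argues for is that a detector like this runs continuously on streamed metrics and raises a flag before a fault trap ever fires, rather than operations waiting for the fault event after users already feel the impact.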

Published Date : May 22 2019

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

Entity | Category | Confidence
David Lynch | PERSON | 0.99+
John Fourier | PERSON | 0.99+
Dave Volante | PERSON | 0.99+
Sean Fourier | PERSON | 0.99+
Ranga Rao | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
80% | QUANTITY | 0.99+
May 2019 | DATE | 0.99+
David | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
90% | QUANTITY | 0.99+
Ranga Rao | PERSON | 0.99+
2014 | DATE | 0.99+
Silicon Valley | LOCATION | 0.99+
more than 10 years | QUANTITY | 0.99+
DC | LOCATION | 0.99+
eight videos | QUANTITY | 0.98+
Dave Link | PERSON | 0.98+
Dave | PERSON | 0.98+
Google | ORGANIZATION | 0.98+
3% | QUANTITY | 0.98+
over a hundred percent | QUANTITY | 0.98+
both | QUANTITY | 0.97+
Palo Alto California | LOCATION | 0.97+
DevOps | TITLE | 0.97+
q1 | DATE | 0.97+
two hundred-plus physicals | QUANTITY | 0.97+
cisco | ORGANIZATION | 0.97+
50 places | QUANTITY | 0.96+
Palo Alto California | LOCATION | 0.96+
today | DATE | 0.96+
Azure | TITLE | 0.95+
Rach | PERSON | 0.95+
one part | QUANTITY | 0.94+
20 years ago | DATE | 0.94+
one | QUANTITY | 0.93+
ten billion dollar | QUANTITY | 0.92+
ScienceLogic | ORGANIZATION | 0.92+
single point | QUANTITY | 0.92+
YouTube | ORGANIZATION | 0.91+
day two | QUANTITY | 0.91+
second pair of eyes | QUANTITY | 0.91+
Cisco IT | ORGANIZATION | 0.9+
two | QUANTITY | 0.9+
this year | DATE | 0.9+
day one | QUANTITY | 0.88+
first quarter | DATE | 0.87+
first quarter | DATE | 0.86+
day two | QUANTITY | 0.86+
ACI | ORGANIZATION | 0.86+
one last question | QUANTITY | 0.85+
ECI | ORGANIZATION | 0.83+
day one | QUANTITY | 0.83+
one area | QUANTITY | 0.81+
single place | QUANTITY | 0.8+
one big | QUANTITY | 0.79+
second operating | QUANTITY | 0.79+
Cisco Networking | ORGANIZATION | 0.78+
day 2 | QUANTITY | 0.78+
65 plus partners | QUANTITY | 0.77+
a lot | QUANTITY | 0.76+
Cisco Networking group | ORGANIZATION | 0.74+
years ago | DATE | 0.72+
one system | QUANTITY | 0.7+
King | PERSON | 0.7+
earlier in this year | DATE | 0.69+
Cisco live | ORGANIZATION | 0.69+
day two | QUANTITY | 0.63+
customer | QUANTITY | 0.62+
a day | QUANTITY | 0.61+

Mike DiPetrillo & Pratima Rao Gluckman, VMware | VMware Radio 2019


 

>> From San Francisco, it's theCUBE, covering VMware Radio 2019. Brought to you by VMware. >> Welcome to this special CUBE Conversation here in San Francisco for VMware's Radio event. Top engineers are here for the once-a-year get-together to show the best stuff on the road map. Two great guests here: Mike DiPetrillo, Senior Director of Blockchain at VMware, and Pratima Rao Gluckman, engineering leader, Blockchain engineering, both with VMware. Great to see you guys. Thanks for coming on. Appreciate you spending the time. My favorite topic, blockchain. >> Our favorite topic too. >> Thanks for joining me. So, Mike, let's start with you. Take us through where you guys are now, because we talked about a year ago, just putting the team together. You were here last year at Radio, kind of getting some core momentum. Where have you guys come from, and where are you now? >> Yeah, it's been quite a journey. You know, over the past five years we've been doing a lot of research. That research culminated in an open source project called Project Concord that we announced last year. Then we wrapped some commercial offerings around it, around really the operational side, how do you operate a blockchain at scale in an enterprise setting, and we introduced that as VMware Blockchain. It's very descriptive on the naming, and it really focuses on three core things. Enterprise-grade decentralized trust, not distributed trust but really decentralized trust, so being able to deploy it across multiple different cloud environments as well as on-prem. It concentrates on robust day two operations: how do you operate it at scale in an enterprise setting, how do you deal with stuff like GDPR, the right to be forgotten, you know, data sovereignty issues, things like that, which is much different than other blockchains. And then the third thing is really being developer-friendly. 
Last year we were fully Ethereum compatible. We had the Ethereum language sitting on top of our block chain. Since then, we've added support for Djamel from digital asset. So another language and we're adding more and more languages so the developers can develop in the framework that they're used to on the best scaleable. You know, Enterprise Supporter >> brings him Dev Ops Mojo Concepts to blockchain. Absolutely, absolutely somewhat. The demo you guys did you get on stage? I want to get too, because it's a really use case. So again, rnd concept jewel years of research started putting together making some progress on developer. So the solutions that you guys presented, we really take us through that. >> So the ocean plastics demo that we talked about a radio basically solves this problem or, you know, just the plastics polluted, polluting our oceans today. So if you look at the numbers are staggering and the BP that you actually a consume, you know, in fish it's pretty scary. The other thing is, you know, it's been predicted that we'll have mohr plastic in our oceans than fish by twenty fifty. And one of the things Dell is trying to do is clean up the environment, and they're building these reusable trays or packaging material for their Dell laptops. And so this use case was providing them that functionality. And if you look at del they you know, they have a massive supply chain. They've got hundreds and thousands of renders that you know, they use despite systems that don't actually talk to each other, they've got complicated work flows. There's also a lot off corruption in their supply chain. And one of the things that can really solve a lot of those problems is bmr blockchain. And they have an instance running on our service, which runs on VM CNW s and I walked through the devil of just going through an aggregator to getting to a manufacturer and then assembling these laptops with the trays and shipping them off to >> think this was something we covered. 
A del technology world. I just wanna point out for the folks watching Dell's taking recycled material from the ocean, using it as materials into their laptops as a part of sustainability. Great business. You know, Malcolm for del the supply chain pieces. Interesting. So you guys are using blockchain and track the acquisition of the plastic out of the ocean? >> Yes. To manufacturing and to end >> Yes. In tow? Yes. >> Using of'em were blocking. >> Yes. So who coded >> this up? It was actually >> del Del developers coded it up. So they quoted up. They took two weeks to code it. We had absolutely no support issues that came away, which really talks about these. A fuse of our platform. And, you know, just the applications that running. >> How does someone get involved real quick at the plug for they have someone joins the bm where? Blockchain initiative? Yes, Via more block >> change. That's ah, manage service offering from us. So it's a license, you know, product. If they're interested, they can go to the inn where dot com slash blockchain or emails? The blockchain dash info beyond where dotcom get it signed up on the beta. We have an active beta. We have lots of enterprise customers all around the planet using it today. You know, at scale >> is a free servicers are licensed paid license. >> There will >> be a paid license for it. You know, we're invader right now, so it is free, right? >> I get it. But we're gonna get you in the snow. So it is. It is a licence service. Yeah, It's Amanda's >> service offering on. That's you know, the beauty of it is that you don't have to worry about updating it, you know, keeping the nose live anything like that. We don't see the transactional data. We don't manage the nodes or anything like that. But we deploy and keep him updated. Keeping refresh. >> You know, one of the benefits Ray Farrell's on. Just how about the ape revolution? 
How I change the world with the Internet and the web that blockchain has that same kind of inflection point impact you mentioned GDP are that implies, Reed. You know, changing the the values on the block chain will dwell in the Ganges is immutable. How do you handle that? Because if it's already immutable on encrypted, how does Tootie pr work? >> Yeah, just doesn't take >> care of you. Why wait? If it's encrypted, no one can see it. >> Yeah, well, you know, block chains, not about encrypting the data, right? There are some block change that do encrypted data. People get confused because they associated the cryptography with it, which links the blocks together. But the data in there is still visible. Right? We're working on privacy solutions to make privacy per transaction. We're working on GDP our issues right now, because that is an issue. When you get into a regulated environment, which there isn't really a non regulated environment these days. You have to worry about these things. You know, Blockchain gives you a mutability and that gives you to trust. But really, Blockchain is about trust. It's about this decentralized trust. And when you think about it in that context, you say, Well, if I trust that I want to be able to delete that data and we reach consensus on it and we still maintain the order, right, the proper order of the bits, which is really what Blockchain is doing, is giving you trust on that order. Bits and I agree, is a consensus to delete one of those pieces out of the order bit, and we can still maintain trust of the order bits. And that's fine. Now I can't get into details on this engineering secret sauce. >> I. So one of the things I want to ask >> you guys just engineers, because I think this is one of the things that I see is that Blockchain is attractive. There's a lot of unknowns that coming down the pike, but we do know one thing. It's a distributed, decentralized kind of concept that people like it. 
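[Editor's note: Mike's point about "trust on the order of bits" can be sketched as a minimal hash-linked chain. This is a toy illustration only — the function names and structure are hypothetical, and this is not VMware Blockchain's actual implementation — but it shows why rewriting any earlier entry breaks the verifiable ordering he describes.]

```python
# Toy hash-linked chain: each block stores the hash of the previous block,
# so the ORDER of entries becomes tamper-evident. Illustrative only —
# not VMware Blockchain's implementation.
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form (sorted keys for determinism).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})

def verify(chain):
    # Every block must reference the hash of the block before it; editing
    # any earlier block changes its hash and breaks the next link.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
for event in ["plastic collected", "tray molded", "laptop packed"]:
    append_block(chain, event)

assert verify(chain)           # order of bits intact
chain[1]["data"] = "tampered"  # rewrite history...
assert not verify(chain)       # ...and the broken hash link exposes it
```

A consensus-approved deletion, as Mike hints, would need every node to agree on a replacement chain that preserves this verifiable ordering for the remaining blocks.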
I see a new generation attracted to blockchain — a new generation of entrepreneurs, a new generation of young people, engineers who see use cases that others from old-school industries might not. So you start to see — I won't say it's the hipster or cutting edge — it's just that it's attracting this new generation of developers and engineers. Why do you think that's the case, and is that the right assumption? Because, you know, when people knock blockchain — sure, it's not as fast as a database if I want to do something. Technically, that's like saying dial-up internet was bad. But look what happened after. So, you know, a lot of people make these arguments, but I see it definitely resonating with young people. Look at Facebook, which is looking at blockchain and moving their entire, in essence broken, system to blockchain to try to fix it. So you can feel these indicators. What are your thoughts? >> I think part of it — and I definitely saw this, you know, last Friday and Saturday; I was at an event up in New York City, and it was very much that hipster crowd — is really attached to the cryptocurrency phase. Cryptocurrency allowed individuals — you know, the kind of millennials — to make investments. They didn't have to go to an E-Trade, or they didn't have to go to some broker. It wasn't caught up in anything. They could, you know, make these bets. And now they can build applications that are directly attached to that currency. They can make up their own currency; they can make up their own value system. You've done some of that with theCube, right? You've launched an application that provides value around the content and tokenizes that value, and now it can transfer that value. So it opens up the transfer of that value, the trust of that value. And I think, you know, we're in a generation of trust and transparency.
That's what's powering the world right now — trust and transparency — and that's what blockchain gives you: trust in a system that no one person and no one government owns. >> I really like that. >> But one thing that's important — I mean, we just have to demystify this. We just have to say this is not about cryptocurrency. That's one thing, and what VMware is doing is enterprise blockchain. And, you know, Mike, you've talked about this — you always say, you know, blockchain's not going to, you know, save the world, or it's not going to get rid of poverty. But there are four use cases that we've drilled down to, in the supply chain realm and in financial services. And so those are some of the things we're tackling, and I think it's important to talk about that. And, you know, there are these hipsters — every time we go and talk about anything regarding blockchain, a big chunk of the people are there for cryptocurrency. And apparently at Consensus in New York they reduced their audience — I can't remember the numbers from last year — they kicked out all the cryptocurrency people, and so it's important to make that distinction. >> I think the crypto winter probably hurt them more than that, because last year there was a lot of hype, but I think the bubble had already burst around February last year. But this piece — and it's a good point, something we've been covering on SiliconANGLE and theCube — is that there's the infrastructure dynamic, the engineering goodness. I think that certainly is intoxicating, to think about blockchain as an engineering impact. The token conversation brings up utility, the decentralized cryptocurrency, the ICOs — initial coin offerings. The fraud part, or unregulated part, has caused a lot of problems. So to me — well, I tell people — the ICO kind of scams and fraud put a shadow on token economics.
And blockchain is a technology, so supply chain, no doubt, is great for blockchain. That's where you guys are focused; that's what the enterprise wants. Once we start getting into tokens — tokens are a form of measurement. >> Mm-hmm. >> And that's where, I think, the regulators — to your point earlier — it's caused a lot of problems. So, you know, the SEC says if you've got a utility token and you're selling it, it's an exchange issue, and that's what you're called. >> A lot of people would call those app developers. >> Yeah, but the app developers are still out there, right? And what's nice is these app developers that are on the side building these unique little applications — they still end up working for these larger companies and driving interesting solutions, like what we're doing with supply chain, like what we're doing in financial services, like what we're doing in telco and media. You look at the people we're dealing with in these companies — they came from building those applications. Heck, some of our own product managers came from building unique things — mining rigs and mining companies. So you still have that background; they still have that entrepreneurial, you know, asset. And that's what's changing these companies. They're driving change in these companies, saying, hey, look, we can use the blockchain for this really unique thing that opens up a brand-new business line, you know, for this large corporation. >> You know, I showed you our tech preview. We did a quick preview at VMworld last year with our blockchain, with a Cube coin token — kind of a total experimental thing. And it was an interesting time, because I think you hit the nail on the head. We, as entrepreneurial developers, had this great application we wanted to do for the Cube community, but we were stalled by, you know, the crypto winter — and, you know, we're app developers. So there are many use cases of scenarios like that, where people are kind of halfway between A and B.
What's your advice to us, or to folks like us who are out there, who want to get a project back on track? What should an application developer do? Should they focus on the infrastructure piece? What's your advice for the marketplace? >> It's so early — it's so early to really comment on that. I mean, I would say just keep at it, because you never know. I feel like we're so early in the game; there are so many use cases and applications that come out of it, and we just have to keep going. And I think the developer community is what's going to make this successful, you know — and even emerging standards. I think that's one thing: standards across, you know, these blockchains. We don't have that right now, and that's something we really need to do. >> And, you know, we program in Ethereum. So the question is, is that a bad choice? There's a lot of cognitive dissonance around what's the right tool. >> That's what I was just going to bring up. You know, you brought up the point of being an app developer who becomes stalled, you know, in your project. And we see exactly the same thing happening in the enterprise. We go into account after account where they've chosen some blockchain solution that's out there, and they become what I call stalled pioneers. They've gone ahead and developed that application, but then they either hit a scalability issue — you know, throughput or the number of nodes — or they hit an operations thing. You know, operations comes in and says, whoa, how are you going to do an audit on that thing? What about data sovereignty? What about GDPR? What about this one? My God, you're going to operate it inside of my environment? You know, what's the security side? So it's really around scalability, it's around operations, it's around security. Those are the three things we hit on over and over and over again with the stalled pioneers.
So those are the accounts we go into and rescue, essentially, right? We say we can provide you the scale, we can provide you the throughput, we can provide you the operations. For twenty years, VMware has been taking large, complex distributed systems and letting you operate them at scale in an enterprise setting — we're the experts at it. So we're doing that with blockchain now, and allowing your blockchain projects to succeed. >> And I really like that term. Yep. Yeah. Okay, so — Radio. What's the feedback here? Obviously you've got the demo. What's been some of the peer review? Give us the 411 on the peer review here. Are people liking what's going on? >> I think the demo really spoke to people. It was relatable — there was a social good to the demo. I think it really impacted them. But some of the cool stuff we're doing is also, like, on the financial services side, you know; we've got more interesting stuff in the supply chain. So the feedback's been great. A lot of focus is on VMware Blockchain, which is also cool — we didn't quite have that last year at Radio; we had everyone running off in different directions. So now it's VMware Blockchain. And what Mike talked about with stalled pioneers — you know, we're seeing scalability and throughput numbers. We talked about the numbers at VMworld Barcelona; they're looking really great. And, you know, we're optimizing and pushing our platform so we can get to, you know, perhaps the PayPal numbers range, and someday Visa. >> You have high availability, you guys know scale. Are you happy where you are right now? >> Very happy where we are right now.
I mean, we've got great customers, great feedback, you know, a great solution that's solving real-world problems. You know, engineers like doing two things: shipping code and solving stuff that's going to help the world. At least here at VMware, that's our culture, right? And we're able to do that day in and day out, and the entire blockchain effort is the cornerstone of that. That's what makes people happy. >> Mike, Pratima, we're following your journey. Great to check in; great to hear the progress. Congratulations on the great demo — real use cases in supply chain. We'll be following you guys; keep in touch. Thanks for coming on theCube. >> Absolutely. Thank you for the time. >> John Furrier here with Lisa Martin in San Francisco for theCube's coverage of Radio, the top engineering event, where they all come together internally at VMware. One of a few press outlets here, theCube, bringing exclusive coverage. Thanks for watching.

Published Date : May 16 2019



Pratima Rao Gluckman, VMware | Women Transforming Technology 2019


 

>> From Palo Alto, California, it's theCube, covering VMware Women Transforming Technology 2019. Brought to you by VMware. >> Hi, Lisa Martin with theCube, on the ground at VMware in Palo Alto, California, for the fourth annual Women Transforming Technology event, WT squared, an event that is near and dear to my heart. Excited to welcome back to theCube Pratima Rao Gluckman, engineering leader, blockchain, at VMware. Pratima, it's so great to have you back on theCube. >> Thank you, Lisa. It's amazing to be here, and I can't believe it's been a year. >> A year! And so last year, when Pratima was here, she launched her book, "Nevertheless, She Persisted" — love the title. You just did a session, which we'll get to in a second, but I'd love to get your experiences in the last year since the book launch. What's the feedback been? What are some of the things that have made you feel great and surprised you at the same time? >> It's been fantastic. I wasn't expecting that when I started to write this book — it was more like, I want to impact one woman's life. But what was interesting is I delivered around twenty to twenty-five talks last year, and my calendar's booked for this year. Every time I go give a talk, my LinkedIn goes crazy, and I'm connecting with all these women and men. And it's just fantastic, because they're basically resonating with everything I talk about in the book. I spoke at the Federal Reserve — >> Wow. >> I was like, this is a book on tech, and they were like, no, this impacts all of us. And I spoke to a group of lawyers — and actually, law firms are fifty-fifty when women get into law; I'm not that familiar with the field, but getting to partner is where they don't have equality or diversity — and it resonated. So now I'm like, maybe I should just take the word "tech" out of it. >> Wow. >> It's been impactful. And so last year was all about companies.
You know, I spoke at Uber, I spoke at VMware, I spoke at Nutanix — a lot of these companies. This year is all about schools — fantastic schools of all different types. So, you know, I've done a talk at San Jose State. I went to CMU — they invited me over to Carnegie Mellon. I supported the robotics team, which is an all-girls team. >> Nice. >> And it was fantastic, because these girls — high school kids — were designing robots. They were driving these robots, they were coding and programming these robots, and it was an all-girls team. And I asked them, I said, but you're excluding the men and the boys. And they said no — when it's a combined boys-and-girls team, the girls end up organizing, and the boys are actually writing the code. They're doing the drilling; they're doing all that. And so the girls don't get to do any of that. And I was looking at the competition and watching these teams — the boy-girl teams, where the girls were all organizing — and I thought, this is exactly what happens in the workforce. >> You're right. Yeah. >> We come into the workforce, we're busy organizing, coordinating and all that, and the men are driving the charge. And that's happening with these kids in high school. >> Yeah, thirteen to seventeen, where this is becoming part of their cultural upbringing. >> Exactly. >> Pretty ingrained. >> Yes, yes. And at a very young age. So that was fascinating. I think that surprised me — you know, you were asking me what surprised me. And what also surprised me was the confidence. These girls were doing all these things — I've never built a robot; I would love to — they were doing all these amazing things, and I thought, oh my God, they're like confident women. But they were not. And it was because they felt that there was too much to lose. They didn't want to take risks; they didn't want to fail.
>> And it was that imposter syndrome coming back — that conditioning happens way earlier. Imposter syndrome is something that I didn't even know what it was until maybe the last five or six years. Suddenly, even just seeing a very terse description of it, anyone goes, oh my goodness, it's not just me. And that's really a challenge. I think the more it's brought to light, the more people like yourself share stories. But also, what your book is doing — it's not just tech, as you were surprised to find out; this is every industry. Imposter syndrome is something that maybe people consider a mental health issue, which is so taboo to talk about. But I just think it's so important to go: you're not alone. The vast majority — men, women, whatever culture — probably have that. Let's talk about that; let's share stories. So, to your point about why you were surprised that these young girls had no confidence — maybe we can help. >> Yes — like opening up, you know, sharing it, being authentic. So I'm looking at my second book, which basically asks, what the *** happens in middle school? Because what happens is, somewhere in middle school, girls drop out. So I don't know what it is — I think it's Instagram, or Facebook, or boys, or sex. I don't know what it is, but something happens there. And so this year my focus is girls — you know, young girls in schools and colleges — and I'm trying to get as much research as I can in that space to see what is going on there, because that totally surprised me. >> So you're kind of casting a wide net, in terms of — as your "Nevertheless, She Persisted" feedback has shown you — obviously this is a pervasive issue, cross-industry; this is a global pandemic. But you're seeing how it starts really early. Tell me a little bit about some of the things that we can look forward to in that book. >> So one thing that's important is bravery. Reshma Saujani, who's the CEO of Girls Who Code,
has this beautiful quote. She says: we raise our boys to be brave, and we raise our girls to be perfect. >> Pretty telling. >> And so we want to be perfect. We want to have the perfect hair, the perfect bodies, we want a perfect partner — that never happens — but we want all that. And because we want to be perfect, we don't want to take risks, and we're afraid to fail. So I want to focus on that. I want to talk to parents, I want to talk to the kids, I want to talk to teachers, even professors, and find out what exactly it is — what is that conditioning that happens? Why do we raise our girls to be perfect? Because that impacts us at every step of our lives — not even just careers, it's our lives. >> Exactly. It impacts us because we just can't take that risk. That's so fascinating. So, you had a session here about persistent and inclusive leadership at WT squared. Tell me a little bit about that session today. What were some of the things that came up that made you say, yes, we're on the right track here? >> So I started off with a very depressing note, which is 2085. That's how long it's going to take for us to see equality. But I talked about what we can do to get there by 2025, because I'm impatient. I don't want to wait until 2085 — I'll be dead by then. >> We know you're persistent — book title. >> You know, my daughter will be in her seventies. I just don't want that for her. So, through my research, what I found is we need not only women to lean in — you know, we've had Sheryl Sandberg talking about how women need to lean in, and it's all about the women; the onus is on the woman, the burden is on the woman. But we actually need society to lean in. We need organizations to lean in, and we need to hold them accountable. And that's when we're going to start seeing that change. So if you take VMware — you know, I've been with VMware for ten years, and I always ask myself, why am I still here?
One of the things we're trying to do is take this seriously. Early this morning, Ray O'Farrell talked about it on the panel. He said our bonuses are now tied to, you know, diversity and inclusion — we have to hire, you know, not just on gender, right, but from underrepresented communities as well. We need to hire from there, and they're taking this seriously. So they're actually making this kind of mandatory, in some sense — which, you know, kind of sucks in some ways, that it has to be that way. >> But they're putting a stake in the ground and tying it to executive compensation. That's pretty bold. >> Yes. So organizations are leaning in, and we need more of that to happen. >> Yeah. So what are some of the things that you think could help some of the women in tech who are leaving at an alarming rate — for various reasons, whether it's family obligations, or they just find this is not an environment that's good for them mentally? What are some of the things that you would advise women in that particular situation? >> The first thing is there has to be equal partnership at home. A lot of women leave because they don't have that. It's having that conversation, or picking the right partner — and if you do pick the wrong partner, it's having that conversation. So if you have equal partnership at home, then both careers are important. You find that a lot of women leave tech — or leave any industry — because they go have babies, and that happens. But it's not even just that. Once they get past that and come back to work, it's not satisfying, because they don't get exciting projects to work on, they don't get strategic projects, they don't have sponsors — which is so important to their success — and, you know, people don't take a risk on them, so they don't take risks.
And so those are some of the things I would really advise women on. And, you know, my talk actually covered that — how to get male allies, how to get sponsors, what you need to actually get people to sponsor you. >> Talk to me a little bit more about that. We talk about mentors a lot, but I did talk this morning with one of our guests about the difference between a sponsor and a mentor. I'd love you to share some of your advice on how women can find those sponsors and actually activate that relationship. >> So mentors talk to you, and sponsors talk about you. And the way to get a sponsor is: A, you do great work — you do excellent work; whatever you do, do it well. And the second thing is B: brag about it. Talk about it. >> Humble bragging. >> Yeah, humble bragging — talk about it, showcase it, demo it, and do it with people who matter in organizations, people who can notice your work. >> Building that brand. >> Exactly. And you find that women are over-mentored and under-sponsored. >> Interesting. >> Yes. >> How do you advise that they change that? >> There was a Harvard study on this. They found that men tend to find mentors who are also sponsors. So what they do is — you know, if you take Pat Gelsinger, he says Andy Grove was his mentor, but Andy Grove was also his sponsor in many ways; for his career at Intel, he was a sponsor and a mentor. What women tend to do — even me — is find female mentors who are not in our organization, and they do not have the authority to advocate for us. They're not sitting in an important meeting and saying, oh, Pratima needs that project; Pratima needs to get promoted. And so we're not finding the right mentors who can also be our sponsors, or we're not finding the sponsors, right? And that happens to us all the time. And so the way we have to switch this is — you know, mentors are great, let's have mentors,
but let's laser-focus on sponsors. And I've always said this, all of last year: the key to your success is sponsorship. And I see that now. I am in an organization where my boss is my sponsor, which is amazing, because every time I go into a meeting with him, he says, this is Pratima's group. It's not me asking him — he's basically saying it's Pratima's group, which is amazing to hear, because I know he's my mentor and sponsor as well. And it's funny — when I gave him a copy of my book, I signed it, and since he's been my mentor and sponsor for, like, ten years, I said, thank you for being my sponsor. And he looked at me and said, oh, I never realized I was your sponsor. So that's another thing: men themselves don't know they're in this powerful position to have an impact; they don't know that they are sponsors as well. And so we need women to focus on sponsors. I always say: find sponsors. Mentorship is great, but focus on sponsors. >> I think it's an important message to get across, and something I imagine we might be reading about in your next book to come. >> I know. Yeah — well, we'll see. >> Pratima, thank you so much for stopping by theCube. It's great to talk to you and to hear some of the really interesting things that you've learned from "Nevertheless, She Persisted." Excited to hear about book number two when that comes out — you've got to come back to the studio. >> I'd love to. Thank you. >> Thank you. I'm Lisa Martin. You're watching theCube from VMware at the fourth annual Women Transforming Technology event. Thanks for watching.

Published Date : Apr 23 2019



Sreesha Rao, Niagara Bottling & Seth Dobrin, IBM | Change The Game: Winning With AI 2018


 

>> Live, from Times Square, in New York City, it's theCUBE covering IBM's Change the Game: Winning with AI. Brought to you by IBM. >> Welcome back to the Big Apple, everybody. I'm Dave Vellante, and you're watching theCUBE, the leader in live tech coverage, and we're here covering a special presentation of IBM's Change the Game: Winning with AI. IBM's got an analyst event going on here at the Westin today in the theater district. They've got 50-60 analysts here. They've got a partner summit going on, and then tonight, at Terminal 5 off the West Side Highway, they've got a customer event, a lot of customers there. We've talked earlier today about the hard news. Seth Dobrin is here. He's the Chief Data Officer of IBM Analytics, and he's joined by Shreesha Rao, who is the Senior Manager of IT Applications at California-based Niagara Bottling. Gentlemen, welcome to theCUBE. Thanks so much for coming on. >> Thank you, Dave. >> Well, thanks Dave for having us. >> Yes, always a pleasure, Seth. We've known each other for a while now. I think we met in the snowstorm in Boston that sparked something a couple years ago. >> Yep. When we were both trapped there. >> Yep, and at that time, we spent a lot of time talking about your internal role as the Chief Data Officer, working closely with Inderpal Bhandari, and what you guys are doing inside of IBM. I want to talk a little bit more about your other half, which is working with clients and the Data Science Elite Team, and we'll get into what you're doing with Niagara Bottling, but let's start there. In terms of that side of your role, give us the update. >> Yeah, like you said, we spent a lot of time talking about how IBM is implementing the CDO role.
While we were doing that internally, I spent quite a bit of time flying around the world, talking to our clients over the last 18 months since I joined IBM, and we found a consistent theme with all the clients, in that, they needed help learning how to implement data science, AI, machine learning, whatever you want to call it, in their enterprise. There's a fundamental difference between doing these things at a university or as part of a Kaggle competition than in an enterprise, so we felt really strongly that it was important for the future of IBM that all of our clients become successful at it because what we don't want to do is we don't want in two years for them to go "Oh my God, this whole data science thing was a scam. We haven't made any money from it." And it's not because the data science thing is a scam. It's because the way they're doing it is not conducive to business, and so we set up this team we call the Data Science Elite Team, and what this team does is we sit with clients around a specific use case for 30, 60, 90 days, it's really about 3 or 4 sprints, depending on the material, the client, and how long it takes, and we help them learn through this use case, how to use Python, R, Scala in our platform obviously, because we're here to make money too, to implement these projects in their enterprise. Now, because it's written in completely open-source, if they're not happy with what the product looks like, they can take their toys and go home afterwards. It's on us to prove the value as part of this, but there's a key point here. My team is not measured on sales. They're measured on adoption of AI in the enterprise, and so it creates a different behavior for them. So they're really about "Make the enterprise successful," right, not "Sell this software." >> Yeah, compensation drives behavior. >> Yeah, yeah. >> So, at this point, I ask, "Well, do you have any examples?" so Shreesha, let's turn to you. 
(laughing softly) Niagara Bottling -- >> As a matter of fact, Dave, we do. (laughing) >> Yeah, so you're not a bank with a trillion dollars in assets under management. Tell us about Niagara Bottling and your role. >> Well, Niagara Bottling is the biggest private-label bottled water manufacturing company in the U.S. We make bottled water for Costcos, Walmarts, major national grocery retailers. These are our customers whom we service, and as with all large customers, they're demanding, and we provide bottled water at relatively low cost and high quality. >> Yeah, so I used to have a CIO consultancy. We worked with every CIO up and down the East Coast and really got into a lot of organizations, and I always observed that it was really the heads of Applications that drove AI, because they were the glue between the business and IT, and that's really where you sit in the organization, right? >> Yes. My role is to support the business and business analytics, and I also support some of the distribution technologies and planning technologies at Niagara Bottling. >> So take us through the project if you will. What were the drivers? What were the outcomes you envisioned? And we can kind of go through the case study. >> So the current project that we leveraged IBM's help on was a stretch-wrapper project. Each pallet that we produce--- we produce obviously cases of bottled water. These are stacked into pallets and then shrink-wrapped or stretch-wrapped with a stretch wrapper, and this project is to be able to save money by trying to optimize the amount of stretch wrap that goes around a pallet. We need to be able to maintain the structural stability of the pallet while it's transported from the manufacturing location to our customer's location, where it's unwrapped and then the cases are used. >> And over breakfast we were talking. You guys produce 2833 bottles of water per second. >> Wow. (everyone laughs) >> It's enormous.
The manufacturing line is a high-speed manufacturing line, and we have a lights-out policy where everything runs in an automated fashion, with raw materials coming in from one end and the finished goods, pallets of water, going out. It's called pellets to pallets: pellets of plastic coming in through one end and pallets of water going out through the other end. >> Are you sitting on top of an aquifer? Or are you guys using sort of some other techniques? >> Yes, in fact, we do bore wells and extract water from the aquifer. >> Okay, so the goal was to minimize the amount of material that you used but maintain its stability? Is that right? >> Yes, during transportation, yes. So if we use too much plastic, we're not optimal, I mean, we're wasting material, and cost goes up. We produce almost 16 million pallets of water every single year, so that's a lot of shrink wrap that goes around those, so what we can save in terms of maybe 15-20% of shrink wrap costs will amount to quite a bit. >> So, how does machine learning fit into all of this? >> So, machine learning is a way to understand what kind of profile we are running, if we can measure what is happening as we wrap the pallets, whether we are wrapping it too tight or over-stretching it, and that results in either a conservative way of wrapping the pallets or an aggressive way of wrapping the pallets. >> I.e. too much material, right? >> Too much material is conservative, and aggressive is too little material, and so we can achieve some savings if we were to alternate between the profiles. >> So, too little material means you lose product, right? >> Yes, and there's a risk of breakage, so essentially, while the pallet is being wrapped, if you are stretching it too much there's a breakage, and then it interrupts production, so we want to try and avoid that. We want continuous production; at the same time, we want the pallet to be stable while saving material costs.
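The trade-off just described (too much film wastes material and cost; too little risks breakage and a line stop) is, at its core, a predict-then-choose problem. As a rough illustration only, the sketch below fits a one-variable logistic model of breakage probability against normalized wrap weight on synthetic data, then picks the lightest wrap whose predicted break risk stays under a threshold. Every number here, including the 5% risk threshold and the shape of the synthetic data, is invented for illustration and is not from Niagara's actual model.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(samples, lr=0.5, epochs=3000):
    """Fit P(break) = sigmoid(w*x + b) by gradient descent (one feature)."""
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in samples:
            err = sigmoid(w * x + b) - y
            gw += err * x
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Synthetic labeled data standing in for the plant's break/no-break history:
# x is wrap weight normalized around the nominal gram weight (-1 = lightest,
# +1 = heaviest); in this invented model, lighter wraps break more often.
random.seed(7)
samples = []
for _ in range(400):
    x = random.uniform(-1.0, 1.0)
    p_break_true = sigmoid(-6.0 * x)
    samples.append((x, 1 if random.random() < p_break_true else 0))

w, b = fit_logistic(samples)

# "Choose" step: the lightest wrap whose predicted break risk is under 5%.
candidates = [i / 100.0 for i in range(-100, 101)]
safe = [x for x in candidates if sigmoid(w * x + b) < 0.05]
best = min(safe)  # lightest acceptable wrap, i.e. the least plastic used
```

In the engagement described here the real inputs also included profile settings and containment-force measurements captured from the wrapper's instrumentation, and the feedback loop would retrain as new break/no-break labels arrive from production.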
>> Okay, so you're trying to find that ideal balance, and how much variability is in there? Is it a function of distance and how many touches it has? Maybe you can share that with us. >> Yes, so each pallet takes about 16-18 wraps of the stretch wrapper going around it, and that's how much material is laid out. About 250 grams of plastic goes on there. So we're trying to optimize the gram weight, which is the amount of plastic that goes around each of the pallets. >> So it's about predicting how much plastic is enough without having breakage and disrupting your line. So they had labeled data that was, "If we stretch it this much, it breaks. If we don't stretch it this much, it doesn't break," but then it was about predicting what's good enough, avoiding both of those extremes, right? >> Yes. >> So it's a truly predictive and iterative model that we've built with them. >> And, you're obviously injecting data in terms of the trip to the store as well, right? You're taking that into consideration in the model, right? >> Yeah, that's mainly to make sure that the pallets are stable during transportation. >> Right. >> And that has already determined how much containment force is required when you stretch and wrap each pallet. So that's one of the variables that is measured, but the inputs and outputs are-- the input is the amount of material that is being used in terms of gram weight. We are trying to minimize that. So that's what the whole machine learning exercise was. >> And the data comes from where? Is it observation, maybe instrumented? >> Yeah, the instruments. Our stretch-wrapper machines have an Ignition platform, which is a SCADA platform that allows us to measure all of these variables. We would be able to get machine variable information from those machines and then be able to hopefully, one day, automate that process, so the feedback loop that says, "On this profile, we've not had any breaks.
We can continue," or if there have been frequent breaks on a certain profile or machine setting, then we can change that dynamically as the product is moving through the manufacturing process. >> Yeah, so think of it as a traditional manufacturing production line optimization and prediction problem, right? It's minimizing waste while maximizing the output and throughput of the production line. When you optimize a production line, the first step is to predict what's going to go wrong, and then the next step would be to include prescriptive optimization to say, "Using the constraints that the predictive models give us, how do we maximize the output of the production line?" This is not a unique situation. It's a unique material that we haven't really worked with, but they had some really good data on this material and how it behaves, and that's key. As you know, Dave, and probably most of the people watching this know, labeled data is the hardest part of doing machine learning, and building those features from that labeled data, and they had some great data for us to start with. >> Okay, so you're collecting data at the edge essentially, then you're using that to feed the models, which is running, I don't know, where's it running, your data center? Your cloud? >> Yeah, in our data center, there's an instance of DSX Local. >> Okay. >> That we stood up. Most of the data is running through that. We build the models there. And then our goal is to be able to deploy to the edge where we can complete the loop in terms of the feedback that happens. >> And iterate. (Shreesha nods) >> And DSX Local is Data Science Experience Local? >> Yes. >> Slash Watson Studio, so they're the same thing. >> Okay now, what role did IBM and the Data Science Elite Team play? You could take us through that. >> So, as we discussed earlier, adopting data science is not that easy. It requires subject matter expertise.
It requires understanding of data science itself, the tools and techniques, and IBM brought that as a part of the Data Science Elite Team. They brought both the tools and the expertise so that we could get on that journey towards AI. >> And it's not a "do the work for them." It's a "teach to fish," and so my team sat side by side with the Niagara Bottling team, and we walked them through the process, so it's not a consulting engagement in the traditional sense. It's how do we help them learn how to do it? So it's side by side with their team. Our team sat there and walked them through it. >> For how many weeks? >> We've had about two sprints already, and we're entering the third sprint. It's been about 30-45 days between sprints. >> And you have your own data science team. >> Yes. Our team is coming up to speed using this project. They've been trained but they needed help with people who have done this, been there, and have handled some of the challenges of modeling and data science. >> So it accelerates that time to --- >> Value. >> Outcome and value and is a knowledge transfer component -- >> Yes, absolutely. >> It's occurring now, and I guess it's ongoing, right? >> Yes. The engagement is unique in the sense that IBM's team came to our factory, understood what that process, the stretch-wrap process looks like so they had an understanding of the physical process and how it's modeled with the help of the variables and understand the data science modeling piece as well. Once they know both side of the equation, they can help put the physical problem and the digital equivalent together, and then be able to correlate why things are happening with the appropriate data that supports the behavior. >> Yeah and then the constraints of the one use case and up to 90 days, there's no charge for those two. Like I said, it's paramount that our clients like Niagara know how to do this successfully in their enterprise. >> It's a freebie? >> No, it's no charge. 
Free makes it sound too cheap. (everybody laughs) >> But it's part of obviously a broader arrangement with buying hardware and software, or whatever it is. >> Yeah, it's a strategy for us to help make sure our clients are successful, and I want to minimize the activation energy to do that, so there's no charge, and the only requirements from the client are that it's a real use case, they at least match the resources I put on the ground, and they sit with us and do things like this and act as a reference and talk about the team and our offerings and their experiences. >> So you've got to have skin in the game obviously, an IBM customer. There's got to be some commitment for some kind of business relationship. How big was the collective team for each, if you will? >> So IBM had 2-3 data scientists. (Dave takes notes) Niagara matched that, 2-3 analysts. There were some working with the machines who were familiar with the machines and others who were more familiar with the data acquisition and data modeling. >> So each of these engagements, they cost us about $250,000 all in, so they're quite an investment we're making in our clients. >> I bet. I mean, 2-3 of these super geeks' time over many, many weeks. So you're bringing in hardcore data scientists, math wizzes, stat wizzes, data hackers, developers--- >> Data viz people, yeah, the whole stack. >> And the level of skills that Niagara has? >> We've got actual employees who are responsible for production, our manufacturing analysts who help aid in troubleshooting problems. If there are breakages, they go analyze why that's happening. Now they have data to tell them what to do about it, and that's the whole journey that we are in, in trying to quantify with the help of data, and be able to connect our systems with data, systems and models that help us analyze what happened and why it happened and what to do before it happens. >> Your team must love this because they're sort of elevating their skills.
They're working with rock star data scientists. >> Yes. >> And we've talked about this before. A point that was made here is that it's really important in these projects to have people acting as product owners if you will, subject matter experts, that are on the front line, that do this everyday, not just for the subject matter expertise. I'm sure there's executives that understand it, but when you're done with the model, bringing it to the floor, and talking to their peers about it, there's no better way to drive this cultural change of adopting these things and having one of your peers that you respect talk about it instead of some guy or lady sitting up in the ivory tower saying "thou shalt." >> Now you don't know the outcome yet. It's still early days, but you've got a model built that you've got confidence in, and then you can iterate that model. What's your expectation for the outcome? >> We're hoping that preliminary results help us get up the learning curve of data science and how to leverage data to be able to make decisions. So that's our idea. There are obviously optimal settings that we can use, but it's going to be a trial and error process. And through that, as we collect data, we can understand what settings are optimal and what should we be using in each of the plants. And if the plants decide, hey they have a subjective preference for one profile versus another with the data we are capturing we can measure when they deviated from what we specified. We have a lot of learning coming from the approach that we're taking. You can't control things if you don't measure it first. >> Well, your objectives are to transcend this one project and to do the same thing across. >> And to do the same thing across, yes. >> Essentially pay for it, with a quick return. That's the way to do things these days, right? >> Yes. 
>> You've got more narrow, small projects that'll give you a quick hit, and then leverage that expertise across the organization to drive more value. >> Yes. >> Love it. What a great story, guys. Thanks so much for coming to theCUBE and sharing. >> Thank you. >> Congratulations. You must be really excited. >> No, it's a fun project. I appreciate it. >> Thanks for having us, Dave. I appreciate it. >> Pleasure, Seth. Always great talking to you, and keep it right there everybody. You're watching theCUBE. We're live from New York City here at the Westin Hotel. #cubenyc. Check out ibm.com/winwithai and Change the Game: Winning with AI tonight. We'll be right back after a short break. (minimal upbeat music)
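As a back-of-the-envelope check on why this project matters at Niagara's scale, multiply the volumes quoted in the conversation (almost 16 million pallets a year at about 250 grams of film each) by the 15-20% savings target. The inputs below only restate the round numbers from the interview; the tonnage result is simple arithmetic on those figures, not a number Niagara reported.

```python
pallets_per_year = 16_000_000   # "almost 16 million pallets of water every single year"
grams_per_pallet = 250          # "About 250 grams of plastic that goes on there"

film_tonnes = pallets_per_year * grams_per_pallet / 1_000_000  # grams -> metric tonnes
savings_low = film_tonnes * 0.15    # low end of the quoted 15-20% savings target
savings_high = film_tonnes * 0.20   # high end

# Roughly 4,000 tonnes of stretch film a year, so a 15-20% cut
# works out to on the order of 600-800 tonnes of plastic saved.
```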

Published Date : Sep 13 2018



Santosh Rao, NetApp | Accelerate Your Journey to AI


 

>> From Sunnyvale, California, in the heart of Silicon Valley, it's theCUBE, covering Accelerate Your Journey to AI. Brought to you by NetApp. >> Hi, I'm Peter Burris. Welcome to another conversation here from the Data Visionary Center at NetApp's headquarters in beautiful Sunnyvale, California. I'm being joined today by Santosh Rao. Santosh is a Senior Technical Director at NetApp, and specifically, Santosh, we're going to talk about some of the challenges and opportunities associated with AI and how NetApp is making that possible. Welcome to theCUBE. >> Thank you, Peter, I'm excited to be here. Thank you for that. >> So, Santosh, what is your role at NetApp? Why don't we start there. >> Wonderful, glad to be here. My name is Santosh Rao, I'm a Senior Technical Director at NetApp, part of the Product Operations group, and I've been here 10 years. My role is to drive up new lines of opportunity for NetApp and build up new product businesses. The most recent one has been AI, so I've been focused on bootstrapping and incubating the AI effort at NetApp for the last nine months now. I've been excited to be part of this effort. >> So nine months of talking, both internally, but spending time with customers too. What are customers telling you that are NetApp's opportunities, and what does NetApp have to do to respond to those opportunities? >> That's a great question. We are seeing a lot of focus around expanding the digital transformation to really get value out of the data, and starting to look at AI, and deep learning in particular, as a way to prove the ROI on the opportunities that they've had. AI and deep learning require a tremendous amount of data. We're actually fascinated to see the amount of data sets that customers are starting to look at. A petabyte of data is sort of the minimum size of data set, so when you think about petabyte-scale data lakes, the first thing you want to think about is how you optimize the TCO for the solution.
NetApp is seen as a leader in that, just because of our rich heritage of storage efficiency. A lot of these are video, image, and audio files, so you're seeing a lot of unstructured data in general, and we're a leader in NFS as well. So a lot of that starts to come together from a NetApp perspective, and that's where customers see us as the leader in NFS, the leader in files, and the leader in storage efficiency, all coming together. >> And you want to join that together with some leadership, especially in GPUs, so that leads to NVIDIA. So you've announced an interesting partnership between NetApp and NVIDIA. How does that factor into your products, and where do you think that goes? >> It's kind of interesting how that came about, because when you look at the industry, it's a small place. Some of the folks driving the NVIDIA leadership have been working with us in the past, when we've bootstrapped converged infrastructures with other vendors. We're known to have been a converged infrastructure vendor for 10 years. The way this came about was, NVIDIA is clearly a leader in GPUs and AI acceleration from a compute perspective, but they also have a long history of GPU virtualization and GPU graphics acceleration. When they look at NetApp, what NetApp brings to NVIDIA is the converged infrastructure, the maturity of that solution, the depth that we have in the enterprise, and the rich partner ecosystem. All of that starts to come together, and some of the players in this particular case have aligned in the past working on virtualization-based converged infrastructures. It's an exciting time, and we're really looking forward to working closely with NVIDIA. >> So NVIDIA brings these lightning-fast machines, optimized for some of the new data types, data forms, and data structures associated with AI. But they've got to be fed; you've got to get the data to them.
What is NetApp doing from a standpoint of the underlying hardware to improve the overall performance and ensure that these solutions really scream for customers? >> Yeah, it's kind of interesting, because when you look at how customers are designing this, they're thinking about digital transformation as, "What is the flow of that data? What am I doing to create new sensors and endpoints that create data? How do I flow the data in? How do I forecast how much data I'm going to create quarter over quarter, year over year? How many endpoints? What is the resolution of the data?" And then as that starts to come into the data center, they've got to think about where the bottlenecks are. So you start looking at a wide range of bottlenecks. You look at the edge data aggregation, then you start looking at network bandwidth to push data into the core data centers. You've got to think smart about some of these things. For example, no matter how much network bandwidth you throw at it, you want to reduce the amount of data you're moving. Smart data movement technologies like SnapMirror, which NetApp brings to the table, are some things that we uniquely enable compared to others. The fact of the matter is, when you take a common operating system, like ONTAP, and you can layer it across the Edge, Core, and Cloud, that gives us some unnatural advantages. We can do things that you can't do in a silo, where you've got a commodity server trying to push data and having to do raw full copies of data into the data center. So we think smart data movement is a huge opportunity. When you look at the core, obviously it's a workhorse, and you've got the random sampling of data onto this hardware. And we think the A800 is a workhorse built for AI. It is a beast of a system in terms of performance; it does about 25 gigabytes per second just on a dual-controller pair.
You'll recall that we spent a number of years building out the foundation of Clustered ONTAP to allow us to scale to gigantic sizes, so a 24-node, or 12-controller-pair, A800 cluster gets us to over 300 gigabytes per second, and over 11 million IOPS if you think about that. That's about four to six times greater than anybody else in the industry. So when you think about the NVIDIA investment in DGX and the performance investments they've made there, we think only NetApp can keep up with that in terms of performance. >> So 11 million IOPS, phenomenal performance for today, but the future is going to demand ever more. Where do you think these trends go? >> Well, nobody really knows for sure. The most exciting part of this journey is that nobody knows where this is going. This is where you need to future-proof customers, and you need to enable the technology and the architecture to have sufficient legs, so that no matter how it evolves and where customers go, the vendors working with customers can go there with them. And actually, customers look at NetApp and say, "You guys are working with the Cloud partners, you're now working with NVIDIA, and in the past you worked with a variety of data source vendors. So we think we can work with NetApp because you're not affiliated to any one of them, and yet you're giving us that full range of solutions." So we think that performance is going to be key. Acceleration of compute workloads is going to demand orders of magnitude performance improvement. We think data set efficiencies and storage efficiencies are absolutely key. And we think you've got to really look at TCO, because customers want to build these great solutions for the business, but they can't afford it unless vendors give them viable options.
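The cluster figure quoted above is straightforward multiplication of the per-pair number. As a quick sanity check (the ~25 GB/s per dual-controller pair and the 24-node, 12-pair cluster size are the round numbers from the conversation; nothing else is implied):

```python
gbps_per_ha_pair = 25   # ~25 GB/s quoted for one dual-controller (HA) pair
ha_pairs = 12           # a 24-node cluster is 12 dual-controller pairs
cluster_gbps = gbps_per_ha_pair * ha_pairs
# 12 pairs x 25 GB/s = 300 GB/s, matching the cluster-level claim
```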
So it's really up to partners like NVIDIA and NetApp to work together to give customers best-of-breed solutions that reduce the TCO, accelerate compute, accelerate the data pipeline, and yet bring the cost of the overall solution down, and make it simple to deploy and pre-integrated. These are the things customers are looking for, and we think we have the best bet at getting there. >> So that leads to... Great summary, but that leads to some interesting observations on what customers should be basing their decisions on. What would you say are the two or three most crucial things that customers need to think about right now as they conceptualize where to go with their AI applications, their AI workloads, their AI projects and initiatives? >> So when customers are designing and building these solutions, they're thinking about the entire data lifecycle. "How am I getting this new type of data for digital transformation? What is the ingestion architecture? What are my data aggregation endpoints for ingestion? How am I going to build out my AI data sources? What are the types of data? Am I collecting sensor data? Is it a variety of images? Am I going to add in audio transcription? Are there video feeds that come in over time?" So customers are having to think about the entire digital experience, the types of data, because that leads to the selection of data sources. For example, if you're going to be learning sensor data, you want to be looking at maybe graph databases. If you want to be learning log data, you're going to be looking at log analytics over time, as well as AI. You're going to look at video, image, and audio accordingly. Architecting these solutions requires an understanding of: what is your digital experience? How does that evolve over time? What is the right and optimal data source to learn that data, so that you get the best experience from a search, indexing, tiering, analytics, and AI perspective? And then, what is the flow of that data?
And how do you architect it for a global experience? How do you build out these data centers so that you're not having to copy all the data into your global headquarters? If you're a global company with a presence across multiple geos, how do you architect for regional data centers to be self-contained? Because we're looking at exabyte-scale opportunities in some of these. I think that's pretty much the two or three things I'd say, across the entire gamut of the space here. >> Excellent. Turning that then into some simple observations about the fact that data still is physical: there are latency issues, there's the cost of bandwidth, there are other types of issues. This notion of Edge, Core, Cloud. How do you see the ONTAP operating system, the ONTAP product set, facilitating being able to put data where it needs to be, while at the same time creating the options that a customer needs to use data as they need to use it? >> The fact of the matter is, these things cannot be achieved overnight. It takes a certain amount of foundational work that, frankly, takes several years. The fact that ONTAP can run on small-form-factor hardware at the edge is a journey that we started several years ago. The fact that ONTAP can run on commodity white-box hardware has been a journey that we have run over the last three or four years. Same thing in the Cloud: we have virtualized ONTAP to the point that it can run on all hyperscalers, and now we are at the point of consuming ONTAP as a service, where you don't even know that it is an infrastructure product. So the process of building an Edge, Core, and Cloud data pipeline leverages the investments that we've made over time. When you think about the scale of compute, data, and performance needed, that's a five-to-six-year journey in Clustered ONTAP, if you look at NetApp's past. These are all elements that are coming together from a product and solution perspective.
But the reality is that this leverages years and years of investment that NetApp engineering has made, in a way that the rest of the industry really did not invest in the same areas. So when we compare and contrast what NetApp has done versus the rest of the industry: at a time when people were building monolithic engineered systems, we were building software-defined architectures. At a time when they were building tightly coupled systems for the traditional enterprise, we were building flexible, scale-out systems that assumed you would want to scale in modular increments. Now, as the world has shifted from enterprise into third platform and webscale, we're finding that all those investments NetApp made over the years are really starting to pay off for us. >> Including some of the investments in how AI can be used to handle how ONTAP operates at each of those different levels of scale. >> Absolutely, yes. >> Santosh Rao, Senior Technical Director at NetApp, talking about AI and some of the new changes in the relationships between AI and storage. Thanks very much for being on theCUBE. >> Thank you, appreciate it.

Published Date : Aug 1 2018


Pratima Rao Gluckman, VMware | Women Transforming Technology (wt2) 2018


 

(electronic music) >> Announcer: From the VMware campus in Palo Alto, California, it's theCUBE! Covering Women Transforming Technology. >> Hi, welcome to theCUBE. Lisa Martin on the ground at the 3rd Annual Women Transforming Technology event at VMware in Palo Alto, and I'm joined by an author and a senior VMware engineer, Pratima Rao Gluckman. Welcome to theCUBE, Pratima. >> Thank you, Lisa. It's great to be here. >> It's great to have you here. So you have been an engineer here for about ten years. You knew from when you were a kid, I love this, that you wanted to be an engineer; you fell in love with your first programming class. It was like a Jerry Maguire "you complete me" kind of moment, I'm imagining. Tell me a little bit about your career in engineering, and specifically as a female. >> Okay, so I was born and raised in India, and I grew up in an environment where I was gender-blind. You know, my oldest sister played cricket for the country. >> Lisa: Wow! >> And it was a man's game! You know, and a lot of people kind of talked about that, but it wasn't like she couldn't do it, right? So I always grew up with this notion that I could do anything, and I could be whoever I wanted to be. And then I came to the United States, and that whole narrative stayed with me, the meritocracy narrative: you work hard, and, you know, society, the world will take care of you, and good things will happen. But it wasn't until 2016 that I had this aha moment, and that's when I suddenly became aware of my gender. I was like, okay, I'm a female in tech, and there are lots of challenges for women in tech, and I didn't quite realize that before. It was just that aha moment. And VMware has been a great company; I've been with VMware for nine years, I started as an engineer, and I moved into engineering management.
We had Diane Greene, who founded the company, and the culture was always meritocratic, but I think something in 2016 kind of got me thinking about my career and about the careers of the women around me; I felt like we were stuck. But at the same time I focused on the women who were successful, for instance Yanbing Li, who's our senior VP and general manager of our storage business. And we were talking about her, and this is what I said: "There are some women who are successful despite everything that we're dealing with, and I just want to know their stories, and I'm going to write this book." The moment I said that, it just felt right. I felt like this was something I wanted to do, and the stories in this book are inspiring stories of these women. Just listening to Laila Ali this morning, her inspirational story; this book has around 19 stories of these executive women, and they're not just role models, I mean every story offers strategies for how to thrive in the tech world. >> So interesting that, first of all, I love the title of this book, Pratima: "Nevertheless, She Persisted." So simple, so articulate, and so inspiring. So interesting, though, that you were working as an engineer for quite a few years before you realized, kind of looked around, like, whoa, this is a challenge that I'm actually living in. Yanbing is a CUBE alumna; I love her Twitter handle. So you said, all right, I want to talk to some women who have been persistent and successful in their tech careers, as kind of the genesis of the book. Talk to us about, maybe, of those 19 interviews that range from, what, C-levels to VPs to directors: what are some of the stories that you found, what kind of blew your mind, of, wow, I didn't know you came from that kind of background? >> So when I started off I was very ambitious. I said I'd go interview CEO women, and I did a lot of research, and I found some very disturbing facts.
You know, Fortune Magazine lists the Fortune 500 companies and ranks them based on their prior year's fiscal revenues, and from that data there were 24 women CEOs in 2014. That number dropped to 21 in 2015, and it dropped again in 2016, but it went up slightly in 2017 to 32 women, which is promising; but now in 2018 we're back down to 24. So we have very, very few women CEOs, and when I started off I said I'll talk to the CEO women, and I couldn't find any CEO women in my network or my friends' networks. And so I dropped one level and said, let me go talk to SVPs, and when I looked at VMware and VMware's network, Yanbing was one of them, so she's in the book, and then I reached out to contacts outside of my network. So I have some women from LinkedIn, I have Google, I have Facebook, I have some women from startups. So I have around four CEOs in the book, and what's great about this book is that it's got a diverse set of women. Right? They have different titles; I've got directors, senior directors, VPs, senior VPs, GMs, and CEOs. And some of them have PhDs, some of them have a Master's degree, and some actually don't have formal training in computer science. I thought this would be interesting because a woman with any background can relate to it. Right? And so that was helpful. And so that's kind of how I went off and started to write this book. And when I interviewed these women, there was a common theme that just kept emerging, and that was persistence. They persisted against gender bias, stereotype threat, and just the negative messages from media and society. I mean, like Laila Ali was talking about, even the messages she got from her dad. >> Right. >> Right? Someone who was so close to her basically said, "Women can't box." And that didn't stop her; I mean, she persisted. When I was listening to her, she didn't use the word, but, you know, she talked about believing in herself and all that, and she persisted through all those negative messages, right?
And she said no one can tell her what to do. (laughs) >> Yeah, her confidence is very loud and clear, and I think that you do find women, and I imagine some of them are among the interviewees in your book, who have that natural confidence. And as you were saying, when Muhammad Ali was trying to talk her out of it, and trying to, as she said, "He tried to get me to think it was my idea," she just knew, well, no, this is what I want to do. And she had that confidence. Did you find that a lot of the women leaders in this book had that natural confidence? Like, you grew up in an environment where you just believed, "I can do this, my sister's playing cricket." Did you find that was a common thread, or did you find some great examples of women who wanted to do something but just thought, "Can I do this?" and "How do I do that?" What was the kind of confidence level that you saw? >> I was surprised, because I had a question on imposter syndrome, and I asked these women. Telle Whitney, who was the CEO, the ex-CEO >> Lisa: Grace Hopper >> Yes, the founder of Grace Hopper. I asked her about imposter syndrome, and this is what she told me. She said, "I feel like I'm not good enough," and that actually gave me goosebumps. I remember I was sitting in front of greatness, and this is what she was telling me. And then I asked her, "How do you overcome it?" and she said, "I just show up the next day." And that actually helped me with this book, because I am not an author. >> That's persistence. >> I mean, I am an author now, but two years ago when I started to write this, writing was not my forte. I'm a technologist: I build teams, I manage teams, I ship products, I ship technical products. But every day I woke up and said, "I'm feeling like an imposter," and it was just her voice, right? Yanbing also feels the same way; I mean, there are times where she feels like, "I'm lacking confidence here."
The majority of them, actually pretty much all the women, face it. This one woman, Patty Hatter, didn't feel like she had imposter syndrome, but the rest of them face it every day. Talia Malachi, who's a principal engineer at VMware, and it's very hard to be a PE, said that she fights it every day, and that was surprising to me, right? Because I was sitting in front of all these women; they were confident, they've achieved so much, but they struggle with that every day. But all they do is persist: they show up the next day. They take those little steps, and they have these goals, and they're very intentional and purposeful, I mean just like what Laila said, right? She said, "Everything that I've done in the last 20 years has been intentional and purposeful." And that's what these women did. And I learned so much from them, because 20 years ago I was a drifter (laughs), you know, I just kind of drifted, and I didn't realize that I could set a goal and reach it and do all these amazing things; I didn't think any of this was possible for me. But I'm hoping that some girl somewhere can read this book and say, "You know what, this is possible," right? This is possible. And you know, role models, I think we need lots of these role models. >> We do, I think. You know, imposter syndrome, I suffered from it for so long before I even knew what it was, and I'll be honest with you, even finding out that it was a legitimate issue was (exhales) okay, I'm not the only one. So I think it's important that these women, and you through your book, identified it. This is something I face every day, even though you may look at me on the outside and think, "She's so successful, she's got everything." And we're human. And Laila Ali talked about having to revisit that inner warrior, that sometimes she goes silent, sometimes the pilot light goes out and needs to be reignited or turned back up.
I think that is just giving people permission, especially women, and I felt that in the keynote: giving us permission to go, "Ah, you're not going to feel that every day, you're not going to feel it every day." Getting up the next day, to your point, and continuing to persist and pursue your purpose is in and of itself so incredibly empowering. >> Right, but imposter syndrome is also good for you, and I talk about that a little bit in the book. And you know why it's good for you? It's you getting out of your comfort zone; you're trying something different, and it's natural to feel that way, but once you get over it, you've mastered that. And Laila talked about it too today; she said, "You get uncomfortable to the point where you get comfortable." >> Lisa: Yes. >> So every time you find that you have this imposter syndrome, just remember that greatness is right around the corner. >> Yep. I always say, "Get uncomfortably uncomfortable." >> Pratima: Yes. >> And I loved how she said that today. So one of the big pieces of news of the day is VMware and Stanford announcing that they are investing $15,000,000 in a new Women's Leadership Innovation Lab at Stanford. Phenomenal. >> Pratima: Yes. >> And they're really going to start studying diversity, and there are so many different gaps that we face: the wage gap, the age gap, the gender gap, the motherhood gap. And one of the things that was really interesting, and I've heard this before, is that the press release actually cited a McKinsey report that says companies with diversity on their executive staff are 21% more profitable. >> Yes. >> And that just seems like a no-duh kind of thing to me, for organizations like VMware and your other partners in this consortium of Wt Squared to get on board and say, "Well, of course." Thought diversity is so important, and it actually is demonstrated to impact a company's profitability. >> Right, yeah.
And that's true. I just hope that more people listen to it and internalize it, and organizations internalize that, and what VMware's doing is fantastic. I mean, I'm so proud to be part of a company that's doing this. And you know, Shelly talked about change, right? She said, "I think, right now, the way I feel about this whole thing is we need to stop talking about diversity and inclusion; we just need to say enough is enough, this is important, let's just do it." >> Lisa: We should make this a part of our DNA. >> Exactly. Just make it. Why do we have to fight for all this, right? It's just pointless, and you know, men have wives and daughters and mothers; it impacts society as a whole, and organizations, and we have so much research on this. And what I like about what the Stanford research lab is doing is that they're actually working with women all the way from middle school to high school to the executive suite, and that's amazing, because research has now shown, there was a report in March 2014 by a senior fellow at the Center for American Progress, Judith Warner, and she documented, just with the rate of change, like I talked about with all the percentages and the number of women CEOs, just with that rate of change, that the equality of men and women at the top will not occur until 2085. >> Lisa: Oh my goodness. >> That's 63 years from now. That means all our daughters will be retired by then. My daughter was born in 2013, and so she won't live in a world of female leaders that's representative of the population. And that realization really, really broke my heart, and that made me want to write this book, to create these role models. And what Stanford is doing is they're going to work on this, and I'm hoping that they can make that transition happen sooner. We don't have to wait till 2085. I want this for my daughter. >> It has to be accelerated, yes.
>> It has to be accelerated, and I think all of us need to do that; our daughters should be in their 20s or 30s when this happens, not in their 70s. >> Lisa: And retired. >> And retired, I mean, we don't want that. And we don't know how that number's going to get pushed out further, right? Like, if we don't do anything now... (exhales) >> Lisa: Right. 2085 becomes what? >> I know! It's insane. >> In the spirit of being persistent, with the theme of this 3rd annual Wt Squared being Inclusion in Action: you're a manager in a people and hiring role. Tell me about the culture on your team, and with your awareness and your passion for creating change here, lasting change, how are you actually creating that inclusion through action in your role at VMware? >> So what I do, when I have to hire engineers on my team, is talk to my recruiter, have a conversation; I'm like, "I need more diversity." It's not just women, I want diversity among the men too. I want different races, different cultures, because I believe that if I have a diverse team, I'm going to be successful. So it's almost like I'm being selfish, but that is very important. So I have that conversation with my recruiters, so I kind of have expectations set. And then we go through the hiring process, and I'm very aware of the hiring panel, like who I put on it; I make sure to have at least one woman on the panel and have some diversity. My team right now is not really that diverse, and I'm working hard on that, because it is hard; you know, the pipeline has to get built at a certain point, and then you start getting those resumes. But I try to have at least one woman on the panel, and during the selection process the first thing I'll tell them is, let's get the elephant out of the room, age, gender, whatever, let's take that out, let's just talk about skills and how well this person has done in the interview.
And that's how I conduct it, and you know, I've had fairly good success hiring women onto the team. But I've also seen that it's hard to retain women, because they tend to drop out faster than the men, and so it's constant, it's just constant work to make that happen. >> Yeah. I wish we had more time to talk about retention, because it is a huge issue. So the book is Nevertheless, She Persisted. Where can people get a copy of the book? >> You can get it on Amazon; that's, I think, the best place to get it. You can also get it from my publisher's site, which is FriesenPress. >> Excellent. Well, Pratima, thank you so much for stopping by. >> Thank you. >> And for sharing your passion, how you're persisting, and how you're also helping more of us learn how to find that voice and pursue our passions. Thank you. >> Thank you. >> We want to thank you for watching. We are theCUBE, on the ground at VMware for the Third Annual Women Transforming Technology event. I'm Lisa Martin, thanks for watching. (upbeat music)

Published Date : May 24 2018


Dr. Naveen Rao | SXSW 2017


 

(bright music) >> Narrator: Live from Austin, Texas, it's theCUBE, covering South by Southwest 2017. Brought to you by Intel. Now here's John Furrier. >> We're here live at South by Southwest in Austin, Texas. SiliconANGLE, theCUBE, our broadcast: we go out and extract the signal from the noise. I'm John Furrier, and I'm here with Naveen Rao, the vice president and general manager of the Artificial Intelligence Solutions Group at Intel. Welcome to theCUBE. >> Thank you, yeah. >> So we're here, big crowd here at Intel, the Intel AI lounge. Okay, so that's your wheelhouse. You're the general manager of AI solutions. >> Naveen: That's right. >> What is AI? (laughs) I mean-- >> AI has been redefined a few times over time. Today AI generally means applied machine learning: basically, ways to find useful structure in data to do something with. It's a tool, really, more than anything else. >> So obviously AI is a mental model; people can understand kind of what's going on with the software. Machine learning and IoT are hot areas in the industry, but this really points to a future world where you're seeing software tackle new problems at scale. So cloud computing, and what you guys are doing with chips and software, has now created a scale dynamic, similar to what Moore's Law did for devices. You're starting to see software impact society. So what are some of those game-changing impacts that you see and that you're looking at at Intel? >> There are many different thought labors that many of us would characterize as drudgery. For instance, if I'm an insurance company and I want to assess the risk in 10 million pages of text, I can't do that very easily. I have to have a team of analysts run through it and write summaries. These are the kinds of problems we can start to attack. So the way I always look at it is: what a bulldozer was to physical labor, AI is to data.
To thought labor, we can really get through much more of it and use more data to make our decisions better. >> So what are the big game-changing things going on that people can relate to? Obviously autonomous vehicles is one we can all look at and say, "Wow, that's mind-blowing." Smart cities is one where you say, "Oh my god, I'm a resident of a community. Do they have to re-do the roads? Who writes the software? Is there a budget for that?" Smart home: you see Alexa with Amazon, you see Google with their home product, voice bots, voice interfaces. So the user interface is certainly changing. How is that impacting some of the things you guys are working on? >> Well, to the user interface changing: I think that has an entire dynamic around how people use tools. The easier something is, the more people use it, the more pervasive it becomes, and we start discovering these emergent dynamics. Like the iPod, for instance. Storing music in digital form on small devices was around before the iPod, but when it was made easy to use, that sort of gave rise to the smartphone. So I think we're going to start seeing some really interesting dynamics like that. >> One of the things that I liked about this past week in San Francisco: Google had their big cloud event, and they talked a lot about, and by the way, Intel was on stage with the new Xeon processor, up to 72 cores, amazing compute capabilities, but cloud computing does bring that scale together. But you start thinking about how data science has moved into using data, and now you have a tsunami of data, whether it's taking an analog view of the world and now having multiple datasets available. If you can connect the dots, okay, a lot of data, now you have a lot of data plus a lot of datasets, and you have almost unlimited compute capability. That starts to fill in the picture a little bit.
>> It does, but actually there's one thing missing from what you just described: our ability to scale data storage and data collection has outpaced our ability to compute on it. Computing on it is typically some sort of quadratic function, something that grows faster than linearly with the amount of data, and our compute has really not caught up with that. A lot of that has been about focus. Computers were really built to automate streams of tasks, and this idea of going highly parallel and distributed is somewhat new. It's been around a lot in academic circles, but the real use cases to drive it home and build technologies around it are relatively new. And so we're right now in the midst of transforming computer architecture into something that becomes a data inference machine, not just a way to automate compute tasks, but to actually do data inference and find useful inferences in data. >> And so machine learning is the hottest trend right now that kind of powers AI, but there's also some talk in leadership circles around learning machines, machines learning from the data they engage with, or however you want to call it, which brings out another question. How do you see that evolving? Because do we need algorithms to police the algorithms? Who teaches the algorithms? So you bring in this human aspect of it. How does the machine become a learning machine? Who teaches the machine? Is it... (laughs) I mean, it's crazy. >> Let me answer that with a question. Do you have kids? >> Yes, four. >> Does anyone police you on raising your kids? >> (laughs) Kind of, a little bit, but not much. They complain a lot. >> I would argue that it's not so dissimilar. As a parent, your job is to expose them to the right kinds of biases, or to unbiased data, as much as possible; experiences are exactly that. I think this idea of shepherding data is extremely important. And we've seen it in solutions that Google has brought out.
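Rao's point here, that the bias lives in the data an algorithm is shown rather than in the algorithm itself, can be shown with a toy sketch of our own (not an Intel example): the same nearest-centroid "model", trained once on a balanced sample and once on a skewed one, classifies the same borderline input differently.

```python
# Toy illustration: identical algorithm, different training data,
# different behavior. The "model" is a 1-D nearest-centroid classifier;
# any bias it exhibits comes entirely from what it was shown.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def train(samples):
    """samples: dict mapping class label -> list of observed feature values."""
    return {label: centroid(vals) for label, vals in samples.items()}

def predict(model, x):
    """Classify x by whichever class centroid is nearest."""
    return min(model, key=lambda label: abs(model[label] - x))

# Balanced exposure: both classes sampled across their true ranges.
balanced = train({"A": [0, 1, 2, 3, 4], "B": [6, 7, 8, 9, 10]})

# Skewed exposure: class B only ever seen at its extreme.
skewed = train({"A": [0, 1, 2, 3, 4], "B": [10, 10, 10]})

x = 5.5  # a borderline input near the true boundary
print(predict(balanced, x))  # -> B  (centroids A=2.0, B=8.0)
print(predict(skewed, x))    # -> A  (B's centroid drifted out to 10.0)
```

Nothing about `predict` changed between the two runs; curating balanced, representative training samples is what fixes the skewed answer, which is the "shepherding data" idea in miniature.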
There are these little unexpected biases, and a lot of those come from just what we have in the data. And AI is no different than a regular intelligence in that way: it's presented with certain data, it learns from that data, and its biases are formed that way. There's nothing inherent about the algorithm itself that causes that bias other than the data. >> So you're saying to me that exposing more data is actually probably a good thing? >> It is. Exposing different kinds of data, diverse data. To give you an example from the biological world, children who have never seen people of different races tend to react more; it's something new and unique and they'll tease it out. It's like, oh, that's something different. Whereas children who are raised with people of many diverse face types are perfectly okay seeing new diverse face types. So it's the same kind of thing in AI, right? It's going to home in on the trends that are common, and things that are outliers it's going to call out as such. So having good, balanced datasets, the way we collect that data, the way we sift through it and actually present it to an AI, is extremely important. >> So one of the most exciting things that I like, obviously autonomous vehicles, I geek out on because, not that I'm a gear head or car buff, but you just look at what it encapsulates technically. 5G overlay, essentially sensors all over the car, you have software powering it, you now have augmented reality, mixed reality coming into it, and you have an interface to consumers and their real world in a car. Some say it's a moving data center, some say it's also a human interface to the world, as they move around in transportation. So it kind of brings out the AI question, and I want to ask you specifically. Intel talks about this a lot in their super demos. What actually is Intel doing with the compute, and what are you guys doing to make that accelerate faster and create a good safe environment?
Is it just more chips, is it software? Can you explain, take a minute to explain what Intel's doing specifically? >> Intel is uniquely positioned in this space, 'cause it's a great example of a full end to end problem. We have in-car compute, we have software, we have interfaces, we have actuators. That's maybe not Intel's sweet spot. Then we have connectivity, and then we have cloud. Intel is in every one of those things, and so we're extremely well positioned to drive this field forward. Now you ask what are we doing in terms of hardware and software, yes, it's all of it. This is a big focus area for Intel now. We see autonomous vehicles as being one of the major ways that people interact with the world, with locality between cars and interaction through social networks and these kinds of things. This is a big focus area, we are working on the in-car compute actively, we're going to lead that, 5G is a huge focus for Intel, as you might've seen at Mobile World Congress and other places. And then the data center. And so we own the data center today, and we're going to continue to do that with new technologies and actually enable these solutions, not just from a pure hardware primitives perspective, but from the software-hardware interaction in full stack. >> So for those people who think of Intel as a chip company, obviously you guys abstract away complexities and put it into silicon, I obviously get that. At Google Next this week, one thing I was really impressed by was the TensorFlow machine learning algorithms in open source; you guys are optimizing the Xeon processor to offload, not offload, but kind of take on... Is this kind of the paradigm that Intel looks at, that you guys will optimize the highest performance in the chip where possible, and then let the software be more functional? Is that a guiding principle, is that a one-off? >> I would say that Intel is not just a chip company. We make chips, but we're a platform solutions company.
So we sell primitives at various levels, and so, in certain cases, yes, we do optimize for software that's out there because that drives adoption of our solutions, of course. But in new areas, like the car for instance, we are driving the whole stack; it's not just the chip, it's the entire package end to end. And so with TensorFlow, definitely. Google is a very strong partner of ours, and we continue to team up on activities like that. >> We are talking with Naveen Rao, vice president and general manager of Intel's AI solutions. Breaking it down for us. This end to end thing is really interesting to me. So I want to just double-click on that a little bit. It requires a community to do that, right? So it's not just Intel, right? Intel's always had a great rising-tide-floats-all-boats kind of concept over the life of the company, but now, more than ever, it's an API world, you see integration points between companies. This becomes an interesting part. Can you talk up to that point about how you guys are enabling partners to work with, and if people want to work with Intel, how do they work, from a developer to whoever? How do you guys view this community aspect? I mean, sure you'd agree with that, right? >> Yeah, absolutely. Working with Intel can take on many different forms. We're very active in the open source community. The Intel Nervana AI solutions are completely open source. We're very happy to enable people in the open source, help them develop their solutions on our hardware, but also, the open source is there to form that community and actually give us feedback on what to build. The next piece is kind of one click down: if you're actually trying to build an end to end solution, like you're saying, you've got a camera. We're not building cameras. But these interfaces are pretty well defined. Generally what we'll do is, we like to select some partners that we think are high value add.
And we work with them very closely, and we build stuff that our customers can rely on. Intel stands for quality. We're not going to put Intel branding on something unless it conforms to some really high standard. And so that's I think a big power here. It doesn't mean we're not going to enable the people that aren't our channel partners or whatever; they're going to have to be enabled through more of a standard set of interfaces, software or hardware. >> Naveen, I'll ask you, in the final couple minutes we have left, to kind of zoom out and look at the coolness of the industry right now. So you're exposed to a lot: your background, you've got your PhD, and now you're heading up the AI solutions. You probably see a lot of stuff. Go down the what's-cool-to-you list, share with the audience some of the cool things that you can point to that we should pay attention to, or even things that are cool that we should be aware of that we might not be aware of. What are some of the coolest things that are out there that you could share? >> To share new things, we'll get to that in a second. As for things I think are cool, one of my favorites is AlphaGo. I know this is, maybe it's hackneyed. But as an engineering student in CS in the mid-90s, studying artificial intelligence back then, or what we called artificial intelligence, Go was just off the table. That was less than 20 years ago. In that time, it looked like such an insurmountable problem, the brain is doing something so special that we're just not going to figure it out in my lifetime; to actually doing it is incredible. So to me, that represents a lot. So that's a big one. Interesting things that you may not be aware of are other use cases of AI, like we see it in farming. This is something we take for granted. We go to the grocery store, we pick up our food and we're happy, but the reality is, that's a whole economy in and of itself, and scaling it as our population scales is an extremely difficult thing to do.
And we're actually interacting with companies that are doing this at multiple levels. One is at the farming level itself, automating things, using AI to determine the state of different crops and actually taking action in the field automatically. That's huge, this is back-breaking work. Humans don't necessarily-- >> And it's important too, because people are worried about the farming industry in general. >> Absolutely. And what I love about that use case of applying AI to farming techniques is that, by doing that, we actually get more consistency and you get better yields. And you're doing it without any additional chemicals, no genetic engineering, nothing like that; you're just applying the same principles we know, better. And so I think that's where we see a lot of wonderful things happening. It's a solved problem, but just not at scale. How do I scale this problem up? I can't do that in many instances, like I talked about with the legal documents and trying to come up with a summary. You just can't scale it today. But with these techniques, we can. And so that's what I think is extremely exciting, any interaction there, where we start to see scale-- >> And new stuff, and new stuff? >> New stuff. Well, some of it I can't necessarily talk about. In the robot space, there's a lot happening there. I'm seeing a lot in the startup world right now. We have a convergence of the mechanical part of it becoming cheaper and easier to build with 3D printing, the Maker revolution, all these kinds of things happening, which our CEO is really big on. So that, combined with these techniques becoming mature, is going to come up with some really cool stuff. We're going to start seeing The Jetsons kind of thing. It's kind of neat to think about, really. I don't want to clean my room; hey robot, go clean my room. >> John: I'd love that. >> I'd love that too. Make me dinner, maybe like a gourmet dinner, that'd be really awesome.
So we're actually getting to a point where there's a line of sight. We're not there yet, but I can see it in the next 10 years. >> So the fog is lifting. All right, final question, just more of a personal note. Obviously, you have a neuroscience background, you mentioned that Go is cool. But the humanization factor's coming in. And we mentioned ethics; it came up, and we don't have time to talk about the ethics role, but as societal changes are happening, with these new impacts of technologies, there's real impact. Whether it's solving diseases and farming, or finding missing children, there's some serious stuff that's really being done. But the human aspects of converging with algorithms and software and scale. Your thoughts on that, how do you see that? A lot of people are trying to really put this in a framework to try to advance sociology thinking: how do I bring sociology into computer science in a way that's relevant? What are some of your thoughts here? Can you share any color commentary? >> I think it's a very difficult thing to comment on, especially because there are these emergent dynamics. But I think what we'll see is, just as social networks have in some ways interfered with and actually helped our interaction with each other, we're going to start seeing that more and more. We can have AIs that are filtering interactions for us. A positive of that is that we can actually understand more about what's going on around in our world, and we're more tightly interconnected. You can sort of think of it as a higher bandwidth communication between all of us. When we were in hunter-gatherer societies, we could only talk to so many people in a day. Now we can actually do more, and so we can gather more information. Bad things are maybe that things become more impersonal, or people have to start doing weird things to stand out in other people's view. There's all these weird interactions-- >> It's kind of like Twitter.
(laughs) >> A little bit like Twitter. You can say ridiculous things sometimes to get noticed. We're going to continue to see that; we're already starting to see that at this point. And so I think that's really where the social dynamic happens. It's just how it impacts our day to day communication. >> That was Naveen Rao, great conversation here inside the Intel AI lounge. These are the kind of conversations that are going to be on more and more kitchen tables across the world. I'm John Furrier with theCUBE. Be right back with more after this short break. >> Thanks, John. (bright music)

Published Date : Mar 10 2017


Platform for Photonic and Phononic Information Processing


 

>> Thank you for coming to this talk. My name is Amir Safavi-Naeini. I'm an Assistant Professor in Applied Physics at Stanford University, and today I'm going to talk about a platform that we've been developing here that allows for quantum and classical information processing using photons and phonons, or mechanical motion. So first I'd like to start off with a picture of the people who did the work. These are graduate students and postdocs in my group. In addition, I want to say that a lot of the work, especially on poling of the lithium niobate, was done in collaboration with Martin Fejer's group, and in particular Dr. Langrock, Jata Mishra, and Marc Jankowski. Now our goal is to realize a platform for quantum coherent information processing that enables functionality which currently does not exist in other platforms that are available. So in particular we want to have a very low loss non-linearity that is strong and can be dispersion engineered to be made broadband. We'd like to make circuits that are programmable and reconfigurable, and that necessitates having efficient modulation and switching. And we'd also really like to have a platform that can leverage some of the advances with superconducting circuits to enable sort of large scale programmable dynamics between many different oscillators on a chip. So, in the next few years what we're really hoping to demonstrate are few-photon optical nonlinear effects, by pushing the strength of these non-linearities and reducing the amount of loss. And we also want to demonstrate these coupled qubit-and-many-oscillator systems. Now the material system that we think will enable a lot of these advances is based on lithium niobate. Lithium niobate is a ferroelectric crystal. It's used very widely in optical components, in acousto-optics, and in surface acoustic wave devices. It's a ferroelectric crystal that has sort of a built-in polarization.
And that enables a lot of effects which are very useful, including the piezoelectric effect and electro-optic effects. And it has a very large χ(2) optical non-linearity, so it allows for three-wave mixing. It also has some effects that are not so great, for example pyroelectricity, but because it's a very established material system there are a lot of tricks on how to deal with some of the less attractive parts of this material. Now most surface acoustic wave or optical devices that you would find are based on kind of bulk lithium niobate crystals, that either use surface acoustic waves that propagate on a surface, or bulk waves propagating through a whole crystal, or have a very weakly guided, low index contrast waveguide that's patterned in the lithium niobate. This was the case until just a little over a decade ago, when this work from ETH Zurich came out showing that thin-film lithium niobate can be bonded and patterned, and photonic circuits, very similar to circuits made from III-Vs or silicon, can be implemented in this material system. And this really led to a lot of different efforts from different labs. I would say the major breakthrough came just a few years ago from Marko Loncar's group, where they demonstrated that high quality factors are possible to realize in this platform. And so they showed resonators with quality factors in the tens of millions, corresponding to line widths of tens of megahertz, or losses of just a few dB per meter. And so that really changed the picture, and a little bit after that, in collaboration with Martin Fejer's group at Stanford, they were able to demonstrate poling, and so very large dispersion-engineered nonlinear effects in these types of waveguides. And so that showed that very new types of circuits can be possible on this platform. Now our approach is very similar. So we have a thin film of lithium niobate, and this time it's on sapphire instead of oxide or some polymer.
And sometimes we put oxide, some silicon oxide, on top. And we can also put electrodes; these electrodes can be made out of a superconductor like niobium or aluminum, or they can be gold, depending on what we're trying to do. The sort of important thing here is that the large index contrast means that light is guided in a very highly confined waveguide, and it supports bends with small bending radii. And that means we can have resonators that are very small. So the mode volume for the photonic resonators can be very small, and as is well known, the interaction rate scales as one over the square root of mode volume. And so we're talking about an enhancement of around six orders of magnitude in the interaction rate over systems using sort of bulk components. And this is in a circuit that's sub-millimeter in size, made on this platform. Now interaction rate is important, but quality factor is also very important. So when you make these things smaller, you don't want to make them much lossier. You can look at, for example, the second harmonic generation efficiency in these types of resonators, and that scales as Q to the power of three, essentially. So you win a lot by going to low loss circuits. Now loss and non-linearity are sort of material and waveguide properties that we can engineer, but careful design of these circuits is also very important. For example, because these are highly confined waves in dielectric waveguides, they can support several different orders of modes, especially if you're working with broadband light waves that span, you know, an octave.
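The scaling rules just quoted (interaction rate proportional to one over the square root of mode volume, and resonant SHG efficiency scaling as Q cubed) can be sketched numerically. This is an illustrative back-of-the-envelope only; the mode volumes and Q values are assumed placeholder numbers, not values from the talk:

```python
import math

# Illustrative sketch of the scaling rules quoted above:
# nonlinear interaction rate g ~ 1/sqrt(V_mode), and resonant SHG
# efficiency ~ g^2 * Q^3. All absolute numbers here are assumptions.
def shg_figure_of_merit(v_mode, q):
    g = 1.0 / math.sqrt(v_mode)   # interaction rate, arbitrary units
    return g**2 * q**3            # relative SHG efficiency

bulk = shg_figure_of_merit(v_mode=1e12, q=1e6)  # bulk-like cavity (arb. units)
chip = shg_figure_of_merit(v_mode=1.0, q=1e6)   # wavelength-scale mode volume

# Shrinking the mode volume by 12 orders of magnitude boosts g^2 by 10^12,
# i.e. g itself by the ~6 orders of magnitude mentioned in the talk.
enhancement = chip / bulk
```

Note that at fixed Q the entire gain comes from confinement; the Q-cubed factor is why the smaller resonators must not become lossier.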
And now when you try to couple light in and out of these structures, you have to be very careful that you're only picking up the polarizations that you care about, and that you're not inducing extra loss channels, effectively reducing the Q. Even though there's no material loss, these parasitic couplings can lead to lower Q. So the design is very important. This plot demonstrates the types of extrinsic to intrinsic coupling that are needed to achieve very high efficiency SHG, which is related to optical parametric oscillation. And so you sort of have to work in a regime where the extrinsic couplings are much larger than the intrinsic couplings. And this is generally true for any type of quantum operation that you want to do. So just low material loss itself isn't enough; the design is also very important. In terms of where we are on these three important aspects, getting large G, large Q, and large kappa: we've been able to achieve high Q in these structures. This is a Q of a couple million. From a broad transmission spectrum through a grating coupler, you can see very evenly spaced modes, showing that we're only coupling to one mode family. And we can see that the depth of the modes is also very large, you know, 90% or more, and that means that our ratio of extrinsic coupling to intrinsic coupling is also very large. So we've been able to engineer these devices to achieve this. In terms of the interaction, I won't go over it too much, but in collaboration with Martin Fejer's group we were able to pole both lithium niobate on insulator and lithium niobate on sapphire. We were able to see very efficient, sort of high slope efficiency second harmonic generation, approaching 5000% per watt per centimeter squared for 1560 to 780 conversion. So this is all work in progress.
And so for now, I'd like to talk a little bit about the integration of acoustic and mechanical components. So, first of all, why would we want to integrate mechanical components? Well, there are lots of cases where, for example, you want to have extremely high extinction switching functionality. That's very difficult to do with electro-optics, because you need to control the phase extremely efficiently, with extreme precision. You would need very long resonators and/or large voltages; it becomes very difficult to achieve, you know, 60 dB types of switching. Mechanical systems, on the other hand, can have very small mode volumes and can give you 60 dB switching without too many complications. Of course the drawback is that they're slower, but for a lot of applications that doesn't matter too much. So in terms of being able to integrate MEMS switching and tuning with this platform, here's a device that achieves that. Each of these beams is actuated through the piezoelectric effect in lithium niobate via this pair of electrodes that we put a voltage across. And when you put a voltage across, these have been designed to leverage one of the off-diagonal terms in the piezoelectric tensor, which causes bending. And so this bending generates a very large displacement in the center of this beam. And this beam, you might notice, is composed of a grating, and this grating effectively generates a photonic crystal cavity. So it generates a localized optical mode in the center, which is very sensitive to these displacements. And what we're able to see in this system is that just a few millivolts, so 50 millivolts here, shifts the resonance frequency by much more than a linewidth; just a few millivolts is enough to shift by a linewidth.
And so we can achieve switching, and we can also tune this resonance across the full telecom band. And these types of devices, whether in waveguide or resonator form, can be extremely useful for sort of phase control in a large scale system, where you might want to have many, many phase shifters on a chip to control phases with low loss, because these waveguides are shorter, so you have lower loss propagating across them. Now, these interactions are fairly low frequency. When we go to higher frequency, we can use the electro-optic effect. And the electro-optic effect, even though it's very widely used and well-known, has, on a photonic circuit like these lithium niobate photonic circuits, interesting consequences and device opportunities that don't exist in the bulk devices. So for example, let's look at single sideband modulation. This is what a standard electro-optic single sideband modulator looks like: you take your light, you split it into two parts, and then you modulate each of these arms. You modulate them out of phase, with an RF tone that's out of phase. And so now you generate sidebands on both, and because they're modulated out of phase, when they are recombined on the output splitter of this Mach-Zehnder interferometer, you end up dropping one of the sidebands and the pump, and you end up with a shifted sideband. So that's possible; you can do single sideband modulation with an electro-optic device. But the caveat is that this is now fundamentally lossy. So you have generated this other sideband via modulation, and that sideband is simply being lost due to interference. It's getting scattered away, because there's no mode that it can get connected to. So actually, this comes with at least 3 dB of loss, an efficiency of at most 50 percent, and usually much less.
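The interferometric sideband cancellation just described can be checked with a small numerical sketch: two phase-modulated arms driven 90 degrees out of phase, with a 90-degree optical bias between them, recombined at the output. The drive frequency and modulation depth are illustrative assumptions, not device values:

```python
import numpy as np

# Baseband model of the Mach-Zehnder single-sideband modulator:
# the carrier sits at 0 Hz, sidebands appear at +/- f_rf.
fs, dur = 256.0, 16.0
t = np.arange(0, dur, 1.0 / fs)
f_rf = 4.0                    # RF drive frequency (arbitrary units, assumed)
beta = 0.05                   # small modulation depth (assumed)

arm1 = np.exp(1j * beta * np.sin(2 * np.pi * f_rf * t))
arm2 = np.exp(1j * (beta * np.sin(2 * np.pi * f_rf * t + np.pi / 2) + np.pi / 2))
out = 0.5 * (arm1 + arm2)     # output combiner of the Mach-Zehnder

spec = np.abs(np.fft.fft(out)) / len(t)
freqs = np.fft.fftfreq(len(t), 1.0 / fs)
amp = lambda f: spec[np.argmin(np.abs(freqs - f))]
# one sideband survives; the other is suppressed by interference
kept, suppressed = amp(-f_rf), amp(+f_rf)
```

To first order in beta the kept sideband has amplitude beta/2 while the other cancels, which is exactly the interference Safavi-Naeini describes; the energy in the cancelled sideband is what makes the device fundamentally lossy.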
And that's fine if you just have one of these single sideband modulators, because you can always amplify, you can send more power. But if you're talking about a system where you have many of these and you can't put amplifiers everywhere, or you're working with quantum information, where loss is particularly bad, this is not an option. Now, when you use resonators, you have another option. So here's a device that tries to demonstrate this. This is two resonators that are brought into the near-field of each other. So they're coupled with each other over here, which causes a splitting. And now when we tune the DC voltage, we tune one of these resonators by sort of changing its effective path length, which tunes its frequency, and we should see an anti-crossing between the two modes. At the center of this splitting (this is versus voltage), at this voltage, let's say here it's around 15 volts, we can see two resonances, two dips, when we probe the light field going through. And now if we send in the pump resonant with one of these, and we modulate at this difference frequency, we generate this red sideband, but we actually don't generate the blue sideband, because there's no optical density of states. So because this other sideband is just not generated, this system is now much more efficient. In fact, Marko Loncar's group has demonstrated that you can get a hundred percent conversion. And we've also demonstrated this in a similar experiment, showing that you can get very large sideband suppression, you know, more than 30 dB suppression of the sidebands with respect to the sideband that you care about. It's also interesting that these interactions now preserve quantum coherence. And this is one path to creating links between superconducting microwave systems and optical components, because now the microwave signal that's scattered here preserves its coherence.
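The anti-crossing between the two coupled rings can be illustrated with a two-mode coupled-mode sketch; the coupling rate and detuning range below are arbitrary illustrative values, not measured ones:

```python
import numpy as np

# Coupled-mode sketch of the two near-field-coupled resonators:
# one resonance is tuned by the DC voltage (detuning delta), and the
# mode splitting at zero detuning equals twice the coupling rate g.
g = 0.5                                   # coupling rate (arbitrary units)

def splitting(delta, g):
    h = np.array([[0.0, g], [g, delta]])  # 2x2 coupled-mode matrix
    lo, hi = np.sort(np.linalg.eigvalsh(h))
    return hi - lo

detunings = np.linspace(-5, 5, 201)       # voltage-induced detuning sweep
splittings = [splitting(d, g) for d in detunings]
min_split = min(splittings)               # minimum splitting = 2g, at delta = 0
```

The splitting is sqrt(delta^2 + 4 g^2), so far from resonance the modes follow the bare frequencies, and at the center of the anti-crossing the two dips are separated by 2g, which sets the modulation frequency used for single-sideband conversion.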
So we've also been able to do acousto-optic interactions at these high frequencies. This is an acousto-optic modulator that operates at a few gigahertz. Basically you generate an electric field here, which generates a propagating wave inside this transducer made out of lithium niobate. These are aluminum electrodes on top. The phonons are focused down into a small phononic waveguide that guides mechanical waves. And then these are brought into this crystal area, where the sound and the light are both confined to a wavelength-scale mode volume, and they interact very strongly with each other. And the strong interaction leads to very efficient, effective electro-optic modulation. So here we've been able to see, with just a few microwatts of power, many, many sidebands being generated. So this acts like an electro-optic modulator where the Vπ is a few thousandths of a volt instead of, you know, several volts, which is sort of what the off-the-shelf electro-optic modulator that you would find has. And importantly, we've been able to combine these photonic and phononic circuits into the same platform. So this is the same lithium niobate on sapphire platform. This is an acoustic transducer that generates mechanical waves that propagate in this lithium niobate waveguide. You can see them here, and we can make phononic circuits now. So this is a ring resonator for phonons. We send sound waves through, and when its frequency hits the ring resonances, we see peaks; these are peaks in the drop port coming out. And what's really nice about this platform is that, unlike many MEMS platforms where you have to have release steps that are usually not compatible with, you know, other devices, here there are no release steps. So the phonons are guided in that thin lithium niobate layer.
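The many-sideband regime follows from the Jacobi-Anger expansion: a phase modulator driven with modulation index beta produces sidebands with amplitudes J_n(beta), so for beta much greater than one (a millivolt-scale Vπ driven by even a small tone) a whole comb of sidebands appears. A quick numerical sketch, where beta = 8 is an assumed illustrative value, not a measured one:

```python
import numpy as np

# One RF period of a phase-modulated field exp(i * beta * sin(theta)).
# Its Fourier harmonics are the Bessel amplitudes |J_k(beta)|, so counting
# the harmonics above a threshold counts the visible sidebands (~2*beta).
beta = 8.0                                     # modulation index (assumed)
n = 4096
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
field = np.exp(1j * beta * np.sin(theta))
sideband_amps = np.abs(np.fft.fft(field)) / n  # |J_k(beta)| at harmonic k
num_significant = int(np.sum(sideband_amps > 0.05))
```

For beta = 8 roughly twenty harmonics exceed the threshold, which is the "many, many sidebands at a few microwatts" behavior described above; a conventional several-volt-Vπ modulator driven the same way would sit deep in the small-beta, two-sideband regime.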
The high Q of these mechanical modes shows that these mechanical resonances can be very coherent oscillators. And so we've also worked towards integrating these with very non-linear microwave circuits to create strongly interacting phonons and phonon circuits. So this is an example of an experiment we did over a year ago, where we have sort of a superconducting qubit circuit with mechanical resonators made out of lithium niobate shunting the qubit capacitor to ground. So now vibrations of this mechanical oscillator generate a voltage across these electrodes that couples to the qubit's voltage. And so now you have an interaction between this qubit and the mechanical oscillator, and we can see that in the spectrum of the qubit as we tune it across the frequency band: we see splittings every time the qubit frequency approaches a mechanical resonance frequency. And in fact this coupling is so large that we were able to observe, for the first time, the phonon spectrum. So we can detune this qubit away from the mechanical resonance, and now you have a dispersive shift on the qubit which is proportional to the number of phonons. And because the number of phonons is quantized, we can actually see the different phonon levels in the qubit spectrum. Moving forward, we've been trying to also understand what the sources of loss are in the system. And we've been able to do this by fabricating very large arrays of these mechanical oscillators and looking at things like their quality factor versus frequency.
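The phonon-number splitting described here can be sketched as a comb of qubit lines, each pulled by twice the dispersive shift per phonon and weighted by the phonon number distribution (Poissonian for a coherent phonon state). The dispersive shift and mean phonon number below are illustrative assumptions, not values from the experiment:

```python
import math

# Dispersive phonon-number splitting: in the dispersive regime the qubit
# frequency is pulled by 2*chi per phonon, so the qubit spectrum becomes
# a comb of lines weighted by the phonon number distribution.
chi = 2.0e6       # dispersive shift per phonon, Hz (assumed)
nbar = 1.5        # mean phonon number of a coherent state (assumed)

levels = range(8)
weights = [math.exp(-nbar) * nbar**k / math.factorial(k) for k in levels]
lines = [-2.0 * chi * k for k in levels]   # qubit line for k phonons

spacing = lines[0] - lines[1]              # adjacent lines differ by 2*chi
total = sum(weights)                       # Poisson weights, ~1 when truncated
```

Resolving these individual lines requires the dispersive shift to exceed both the qubit and mechanical linewidths, which is why the "coupling so large" point in the talk is what made the phonon levels visible.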
This is an example of a measurement that shows a jump in the quality factor when we enter the frequency band where we expect the phononic band gap of this periodic material. In principle, if loss were only due to clamping, only due to acoustic waves leaking out of these ends, then the quality factor should go to essentially infinity, or losses should be exponentially suppressed with the length. But it's not. And that means we're actually limited by other loss channels. And we've been able to determine that these are two-level systems in the lithium niobate, by looking at the temperature dependence of these losses and seeing that they fit very well the sort of standard models that exist for the effects of two-level systems on microwave and mechanical resonators. We've also started experimenting with different materials. In fact, we've been able to see that, for example, going to lithium niobate that's doped with magnesium oxide significantly reduces the effect of the two-level systems. And this is a really exciting direction of research that we're pursuing. So we're understanding these materials. So with that, I'd like to thank the sponsors: NTT Research, of course; a lot of this work was funded by DARPA, ONR, ARO, DOE, and very generous funding from the David and Lucile Packard Foundation and others that are shown here. So thank you.

Published Date : Sep 24 2020



Mike Miller, AWS | AWS re:Invent 2019


 

>> Announcer: Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Hey welcome back, everyone, it's theCUBE's coverage here live in Las Vegas for re:Invent 2019, this is theCUBE's seventh year covering re:Invent, the event's only been going for eight years, it feels like a decade, so much growth, so much action, I'm John Furrier with my co-host Dave Vellante, here extracting the signal from the noise in the Intel AWS studio of theCUBE, thank you for that sponsorship. Mike Miller is our next guest, he's director of AI devices at AWS, super excited for this segment, because DeepRacer's here, and we got some music, AI is the front and center, great to see you again, thanks for coming on. >> Absolutely, thank you for having me on again, I appreciate it. >> All right, let's just jump right in, the toys. Developers are geeking out over DeepRacer and the toys you guys are putting out there as a fun way to play and learn. >> Absolutely, getting hands-on with these new broadly applicable machine learning technologies. >> Let's jump into DeepRacer, so first of all, give us a quick update on what's happened between last year and this year in the DeepRacer community, there's been a lot of froth, competitiveness, street battles, and then we'll get an update, give us a quick update on the community. >> So we launched DeepRacer last year as a 1/18 scale race car designed to teach reinforcement learning, so this thing drives by itself around the tracks. We've got an online experience where customers can train models, so we launched a DeepRacer league where we plan to visit 22 sites around the world at AWS summits, where developers can come visit us and race a car physically around a track, and we had online contests, so every month we had a new track for developers to be challenged by and race their cars around the track. 
We've seen tremendous engagement and excitement, a little bit of competition really gets developers' juices going. >> It's been a lot of fun, congratulations, by the way. >> Absolutely, thank you. >> All right, let's get into the new toy, so DeepRacer 2.0, whatever you're calling it, just DeepRacer-- >> DeepRacer Evo. >> Evo, okay. >> New generation, so we've basically provided more opportunities to race for developers, more challenges for them to learn, and more ways for them to win. So we integrated some new sensors on this car, so on top there's a LIDAR, which is a laser range finding device that can detect other cars or obstacles in the rear of the car and to the sides, and in the front of the car we have stereo cameras that we added so that the car can sense depth in front of it, so with those new sensors, developers can now be challenged by integrating depth sensing and object avoidance and head to head racing into their machine learning models. >> So currently it's not an obstacle course, correct, it's a race track, right? >> So we call it a time trial, so it's a single car on the track at a time, how fast can you make a lap, our world record actually is 7.44 seconds, set by a young lady from Tokyo this past year, really exciting. >> And she was holding up the trophy and said this is basically a dream come true. And so, what are they trying to optimize, is it just the speed at the turn, what are they sort of focused on? 
>> Yeah, it's a little bit of art and a little bit of science, so there's the reinforcement learning model that learns through what's called a reward function, so you give the car rewards for achieving specific objectives, or certain behaviors, and so it's really up to the developer to decide what kind of behaviors do they want to reward the car with, whether it's stay close to the center line, reduce the amount of turns, they can also determine its position on the track and so they can reward it for cutting corners close, speeding up or slowing down, so it's really a little bit of art and science through some experimentation and deciding. >> So we had Intel on yesterday, talking about some of their AI, Naveen Rao, great guy, but they were introducing this concept called GANs, Generative Adversarial Networks, which is kind of like neural network technology, lot of computer science in some of the tech here, this is not kiddie scripting kind of thing, this is like real deal. >> Yeah, so GANs actually formed the basis of the product that we just announced this year called DeepComposer, so DeepComposer is a keyboard and a cloud service designed to work together to teach developers about generative AI, and GANs are the technique that we teach developers. So what's interesting about generative AI is that machine learning moves from a predictions-based technology to something that can actually create new content, so create new music, new stories, new art, but also companies are using generative AI to do more practical things like take a sketch and turn it into a 3D model, or automatically colorize black-and-white photos, Autodesk even has a generative design product, where an industrial designer can give a product some constraints and it'll generate hundreds of ideas for the design.
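The reward-function mechanism Miller describes in this exchange can be sketched concretely. The parameter names below ('track_width', 'distance_from_center', 'speed') follow AWS's documented DeepRacer interface, but the banding and weights are an illustrative choice by this edit, not AWS's recommended model:

```python
def reward_function(params):
    """DeepRacer-style reward: stay near the center line and carry speed.
    Parameter keys follow AWS's documented interface; the specific
    weighting here is an illustrative sketch, not AWS's."""
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    speed = params['speed']  # meters per second

    # Reward bands: the tighter the car hugs the center line, the more it earns.
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    if distance_from_center <= marker_1:
        reward = 1.0
    elif distance_from_center <= marker_2:
        reward = 0.5
    elif distance_from_center <= marker_3:
        reward = 0.1
    else:
        reward = 1e-3  # likely off track

    # Small bonus for speed, capped so it never dominates the centering term.
    reward += 0.1 * min(speed, 4.0)
    return float(reward)
```

This is exactly the "art and science" Miller mentions: the bands, the speed cap, and the relative weights are all experimentation knobs for the developer.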
>> Now this is interesting to me, because I think this takes it to, I call basic machine learning, to really some more advanced practical examples, which is super exciting for people learning AI and machine learning. Can you talk about the composer and how it works, because pretend I'm just a musician, I'm 16 years old, I'm composing music, I got a keyboard, how can I get involved, what would be a path, do I buy a composer device, do I link it to Ableton Live, and these tools that are out there, there's a variety of different techniques, can you take us through the use case? >> Yeah, so really our target customer for this is an aspiring machine learning developer, maybe not necessarily a musician. So any developer, whether they have musical experience or machine learning background, can use the DeepComposer system to learn about the generative AI techniques. So GANs are comprised of these two networks that have to be trained in coordination, and what we do with DeepComposer is we walk users through or walk developers through exactly how to set up that structure, how these two things train, and how is it different from traditional machine learning where you've got a large data set, and you're training a single model to make a prediction. How do these multiple networks actually work against each other, and how do you make sure that they're generating new content that's actually of the right type of quality that you want, and so that's really the essence of the Generative Adversarial Networks and these two networks that work against each other. >> So a young musician who happens to like machine learning. >> So if I give this to my kid, he'll get hooked on machine learning? That's good for the college apps. >> Plug in his Looper and set two systems working together or against each other. >> When we start getting to visualization, that's going to be very interesting when you start getting the data at the fundamental level, now this is early days. 
Some would say day zero, because this is really early. How do you explain that to developers, and people you're trying to get attention to, because this is certainly exciting stuff, it's fun, playful, but it's got some nerd action in it, it's got some tech, what are some of the conversations you're having with folks when they say "Hey, how do I get involved, why should I get involved," and what's really going to be the impact, what's the result of all this? >> Yeah, well it's fascinating because through Amazon's 20 years of artificial intelligence investments, we've learned a lot, and we've got thousands of engineers working on artificial intelligence and machine learning, and what we want to do is try to take a lot of that knowledge and the experiences that those folks have learned through these years, and figure out how we can bring them to developers of all skill levels, so developers who don't know machine learning, through developers who might be data scientists and have some experience, we want to build tools that are engaging and tactile and actually tangible for them to learn and see the results of what machine learning can do, so in the DeepComposer case it's how do these generative networks actually create net new content, in this case music. For DeepRacer, how does reinforcement learning actually translate from a simulated environment to the real world, and how might that be applicable for, let's say, robotics applications? So it's really about reducing the learning curve and making it easy for developers to get started. >> But there is a bridge to real world applications in all this, it's a machine learning linchpin. >> Absolutely, and you can just look at all of the innovations that are being done from Amazon and from our customers, whether they're based on improving product recommendations, forecasting, streamlining supply chains, generating training data, all of these things are really practical applications. 
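The two-network adversarial training Miller describes can be sketched end to end with a toy scalar GAN: a one-parameter generator tries to match samples from N(4, 1) while a logistic discriminator tries to tell real from fake. This illustrates only the GAN training pattern; DeepComposer's actual networks are far larger and operate on music, not scalars.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data the generator must learn to imitate: samples from N(4, 1).
    return rng.normal(4.0, 1.0, n)

theta = 0.0      # generator parameter: g(z) = theta + z, z ~ N(0, 1)
w, b = 0.0, 0.0  # discriminator: D(x) = sigmoid(w * x + b)
lr_d, lr_g = 0.1, 0.02

for step in range(1500):
    for _ in range(3):  # a few discriminator steps per generator step
        xr = real_batch(32)
        xf = theta + rng.normal(0.0, 1.0, 32)
        dr, df = sigmoid(w * xr + b), sigmoid(w * xf + b)
        # Gradients of the discriminator loss -log D(real) - log(1 - D(fake)).
        w -= lr_d * (np.mean((dr - 1.0) * xr) + np.mean(df * xf))
        b -= lr_d * (np.mean(dr - 1.0) + np.mean(df))
    # Generator step on -log D(fake): move theta so fakes fool D.
    xf = theta + rng.normal(0.0, 1.0, 32)
    df = sigmoid(w * xf + b)
    theta -= lr_g * np.mean((df - 1.0) * w)
```

After training, theta sits near 4, the mean of the real data: the generator has learned to produce content statistically indistinguishable from the real distribution, which is the "two networks working against each other" dynamic in miniature.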
>> So what's happening at the device, and what's happening in the cloud, can you help us understand that? >> Sure, so in DeepComposer, the device is really just a way to input a signal, and in this case it's a MIDI signal, so MIDI is a digital audio format that allows machines to kind of understand music. So the keyboard allows you to input MIDI into the generative network, and then in the cloud, we've got the generative network takes that input, processes it, and then generates four-part accompaniments for the input that you provide, so say you play a little melody on the keyboard, we're going to generate a drum track, a guitar track, a keyboard track, maybe a synthesizer track, and let you play those back to hear how your input inspired the generation of this music. >> So GANs is a big deal with this. >> Absolutely, it forms the basis of the first technique that we're teaching using DeepComposer. >> All right, so I got to ask you the question that's on everyone's mind, including mine, what are some of the wackiest and/or coolest things you've seen this year with DeepComposer and DeepRacer because I can imagine developers' creativity straying off the reservation a little bit, any cool and wacky things you've seen? >> Well we've got some great stories of competitors in the DeepRacer league, so we've got father-son teams that come in and race at the New York summit, a 10 year old learning how to code with his dad. We had one competitor in the US was at our Santa Clara summit, tried again at our Atlanta summit, and then at the Chicago summit finally won a position to come back to re:Invent and race. Last year, we did the race here at re:Invent, and the winning time, the lap time, a single lap was 51 seconds, the current world record is 7.44 seconds and it's been just insane how these developers have been able to really optimize and generate models that drive this thing at incredible speeds around the track. >> I'm sure you've seen the movie Ford v Ferrari yet. 
You got to see that movie, because this DeepRacer, you're going to have to need a stadium soon, with eSports booming, this has got its own legs for its own business. >> Well we've got six tracks set up down at the MGM Grand Arena, so we've already got the arena set up, and that's where we're doing all the knock-out rounds and competitors. >> And you mentioned father-son, you remember when we were kids, Cub Scouts, I think it was, or Boy Scouts, whatever it was, you had the pinewood derby, right, you'd make a car and file down the nails that you use for the axles and, taking it to a whole new level here. >> It's a modern-day version. >> All right, Mike, thanks for coming on, appreciate it, let's keep in touch. If you can get us some of that B-roll for any video, I'd love to get some B-roll of some DeepRacer photos, send 'em our way, super excited, love what you're doing, I think this is a great way to make it fun, instructive, and certainly very relevant. >> Absolutely, that's what we're after. Thank you for having me. >> All right, theCUBE's coverage here, here in Las Vegas for our seventh, Amazon's eighth re:Invent, we're documenting history as the ecosystem evolves, as the industry wave is coming, IoT edge, lot of cool things happening, we're bringing it to you, we're back with more coverage after this short break. (techno music)

Published Date : Dec 4 2019



Arijit Mukherji, SignalFx & Karthik Rau, SignalFx | PagerDuty Summit 2018


 

>> From Union Square in downtown San Francisco, it's theCUBE covering PagerDuty Summit '18. Now here's Jeff Frick. >> Hey welcome back everybody. Jeff Frick here with theCUBE. We're at PagerDuty Summit at the Westin St. Francis in Union Square, historic venue. Our second time to this show, there's about 900 people here talking about kind of the future of dev ops, but going a lot further than dev ops. And we're excited to have a couple of CUBE alumni here at the conference from SignalFx. We've got Arijit Mukherji. >> Mukherji, yeah. >> Thank you. And Karthik Rau, co-founder and CEO of SignalFx. Gentlemen, welcome. >> Thank you very much. >> So what do you do at PagerDuty Summit? >> Well we've been partners with PagerDuty for a long time now, we've known them since the very early days, we share a common investor. But we both operate very squarely in the same space, which is companies moving towards dev ops development and deployment methodologies, leveraging cloud and native architectures. We solve a different part of the problem around monitoring and observation, and we partner with them very closely around incident management. Once a problem is detected, we typically integrate in with PagerDuty and trigger whatever incident management paths our customers are orchestrating via PagerDuty. So, it's been really an integral part of our entire workflow since we started the company. So we're very close partners with them. >> Yeah, it's interesting 'cause Jen announced they have 300 integrations or 300+ integrations, whatever the number is, and to the outside looking in, it might look like a lot of those are competitive, like there's a lot of workflow and notification types of partners in that ecosystem, but in fact, lots of different people with lots of different slices of the pie. >> That is good. >> Yeah, absolutely. It's a really big problem space that everyone is trying to solve in this day and age.
Some of our competitors are in that list, but you know we partner very closely with PagerDuty. As I mentioned earlier, our focus really is around problem detection and leveraging the most intelligent algorithms, statistical models in real time to detect patterns that are occurring in a production environment and triggering an alert, and typically we're integrating in with PagerDuty, and PagerDuty deals with the human elements of once something has been detected, how do you manage that incident? How do you route to the appropriate people? One of the things that's really interesting as this world is changing towards these dev ops models is the number of people that have to get involved is substantially greater than it was before. In the old days, you would have an alert go into a NOC and you'd have a specialist group of people with very specific runbooks because your software wasn't changing very often. In today's world, your software is changing sometimes on a daily basis, and it could be changing across dozens of teams, hundreds of teams in larger organizations. And so, there's a problem on the detection side, because companies like SignalFx have to do a really great job of detecting problems as they emerge across these disparate teams, across a much, much, much larger environment with much larger volumes of data, and then companies like PagerDuty really have to deal with a far more complex set of requirements around making sure the right people get notified at the right time. And so they're two very different problems, and we're very happy to- and have been partnering with them for a number of years now. >> And again, the complexity around the APIs where the app is running, there's so many levels now of new complexity compared to when it was just one app, running on one system, probably in your own data center, probably that you wrote, compared to this kind of API centric multi-cloud world that we live in today.
>> That is exactly right because what's happening is our application architectures are changing 'cause we used to have these monoliths, we used to have three tiers and whatnot, and we're replacing that with micro-services, loosely coupled systems, and whatnot. At the same time, the substrate on which we are running those services, those are also changing. Right, so instead of servers, now we have virtual machines, we have cloud instances and containers and pods and what-have-you. So in a way, we are sort of changing below too in some sense, and so that's why monitoring this kind of complex, more numerous environment is becoming a harder challenge. We're doing this for a good cause, because we want to move faster, we want to innovate faster, but at the same time, it's also making the established problems harder, which is sort of what requires newer tools, which sort of brings companies like us into the picture. >> Right, yep. And then just the sheer scale, the volume of data that's flowing through the pipes now on all these different applications, is growing exponentially, right? We see it time and time again, so it really begs for a smarter approach. >> Absolutely, I mean on two levels right? The number of minutes of software consumption is up exponentially, right? Since the smartphone came out in 2007, you've got billions of people connected to software now, connected all the time, so the load is up orders of magnitude, which is driving, even if you didn't change the architectures, you would have to build out substantially more back-end systems, but now the architectures are changing as well, where every physical server is now parceled up into VMs which are parceled up into containers. And so the number of systems are also up by orders of magnitude. And so there's no possible way for a human to respond to individual alerts happening on individual systems, you're just going to drown in noise.
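The alternative to reacting to individual per-system alerts is statistical detection over the metric stream itself. A toy rolling z-score detector illustrates the idea; it is a stand-in sketch, not SignalFx's actual algorithm:

```python
from collections import deque
import math

class RollingZScoreDetector:
    """Flags a metric sample as anomalous when it sits more than
    `threshold` standard deviations from the recent rolling mean.
    A minimal stand-in for the statistical detection discussed in
    the interview, not SignalFx's real implementation."""

    def __init__(self, window=60, threshold=3.0):
        self.window = deque(maxlen=window)  # recent samples only
        self.threshold = threshold

    def observe(self, value):
        anomalous = False
        if len(self.window) >= 10:  # wait for some history first
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var) if var > 0 else 1e-9
            anomalous = abs(value - mean) / std > self.threshold
        self.window.append(value)
        return anomalous
```

One detector per metric stream turns a flood of raw samples into a handful of "this pattern matters" events, which is what then feeds an incident pipeline like PagerDuty's.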
So the requirements of this new world really are, you have to have an analytics-based approach to monitoring and more automation, more intelligence around detecting the patterns that really matter. >> Right. Which is such a great opportunity for artificial intelligence, right, machine learning. And we talk about it all the time, everyone wants to talk about those, kind of as a vendor-led something that you buy. Yeah, that's kind of okay, but really where the huge benefit is, companies like you guys and PagerDuty using that technology, integrated in with what you deliver on your core, to do a much better job in this crazy increasing scale of volume that's run with these machines. >> Yes, because the systems are becoming so complex that even if you asked a human to go and set up the perfect monitoring or perfect alerting, et cetera, it might be quite a hard challenge, right? So, as a result, sort of automation, computer intelligence, et cetera needs to be brought in to bear, because again, it's a more complex system, we need higher-order systems to deal with them. >> Right. >> You are very, very right, yes. And that's a trend we are starting to see within the product, we are actually focusing a lot on sort of data science capabilities, which are bringing in more and more machine learning and automation. In the future, we have capabilities in the product that can look at populations and identify outliers, look at cyclical problems and identify outliers again. So the idea is to make it easy for users to monitor a complex system without having to get into the guts, so to speak. >> Right. >> And to do it on various sorts of data, right? I think you have an interesting use case that we've been experimenting with recently. >> That's right. >> If you want to talk about that. >> Yeah, so I actually have a talk tomorrow, it's called "Interesting One." It's about monitoring social signals, monitoring humans.
So we have these systems, we have these metrics platforms and they are quite generic, the tools that we have nowadays and are sort of available to us are quite powerful, and the set of inputs need not be isolated to what the computers are telling me. Why not look at other things, why not look at business signals? In my case, I'm going to talk about monitoring what the humans are doing on Slack as a way for me to know whether there's something of interest that's going on in my infrastructure, in my service that I need to be aware of, right? And you'll be shocked how surprisingly accurate it tends to be. It's just an interesting thing, and it makes one wonder what else is out there for us to sort of look at? Why confine ourselves, right? >> Right. It's funny because we hear about sentiment analysis in social media all the time, but more in the context of Pepsi or a big consumer brand that's trying to figure out how people feel. But to do it inside your own company on your own internal tool, like a Slack, that's a whole different level of insight. >> You'd be surprised at the number of companies that monitor Twitter to understand whether they have an outage. >> That's right. >> Yeah, because in this day and age, users are on Twitter within seconds if something is perceived to be slow, or something is perceived to be down, they're on Twitter. So there are all sorts of other interesting signals to potentially pull from. >> Right, right. Well and guess what, we were just at AT&T Spark yesterday and the 5G's coming and it's 100x more data'll be flowing through the mobiles, so the problem's not going to get any smaller any time soon. >> No. >> Absolutely. >> So what else have you guys been up to since we last spoke? Continuing to grow, making some interesting moves. >> Absolutely- >> Crossing oceans. >> We've been very, very busy, one of the big areas of investment for us has been international growth, so we've been investing quite a bit in Europe.
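Mukherji's idea from a moment ago, watching human chatter as an infrastructure signal, can be sketched in a few lines: bucket incident-channel message timestamps by minute and flag buckets whose volume spikes well above the median rate. The function and thresholds are hypothetical; the talk mentioned the idea, not this code:

```python
from collections import Counter

def chatter_spikes(timestamps, bucket=60, factor=3.0):
    """Given message timestamps in seconds (say, from an incident
    channel), return the start times of buckets whose message count
    exceeds `factor` times the median bucket count.  A crude proxy for
    'humans are suddenly talking a lot, something may be wrong'.
    Illustrative sketch only."""
    counts = Counter(int(t // bucket) for t in timestamps)
    if not counts:
        return []
    ordered = sorted(counts.values())
    median = ordered[len(ordered) // 2]
    return sorted(b * bucket for b, c in counts.items()
                  if c > factor * max(median, 1))
```

A spike flagged here could then be correlated against machine metrics, exactly the "why confine ourselves to what the computers are telling me" point above.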
We have just introduced an instance of our service that's based in a European data center. For a lot of our European-based clients, they prefer to have data locality, data residency within the European Union, so that's something new that we just introduced last month, continue to have a ton of momentum out in EMEA, they're very much on the cloud journey, and embracing cloud and embracing dev ops, so it's really great to see that momentum out there. >> Right, and clearly with GDPR and those types of things, you have to have a presence for certain types of customers, certain types of data. Anything surprising in that move that you didn't expect or? >> No, I don't know, I'll let you.
>> Well I'm curious too as instances, right so there's the core instances that are running core businesses that don't change that much, but it's a promotion, it's a this or that, right? It's a spin up app and a spin down app. Are those even going up on the same infrastructure from the first time they do it to the second time they do it. I mean, how much are you learning that you can leverage as people are doing things differently over and over again as their objectives change, their applications change, they're going to go to market around that specific application. That's changing all the time as well. >> Yeah, so I think the challenge there is to sort of build, at least from a technical point of view, from SignalFX point of view, is build something that is versatile enough to handle these different use cases. We've got new use cases, new ways of doing things are going to continue to happen, probably going to keep on accelerating. So the challenge for us is good and bad, is how do we make a platform that is generic, that can be used for anything that may come down the pike, not only just now. At the second time, how do we innovate to continue to be up to speed with the latest of that's what's going on in terms of infrastructure trends, software delivery trends, and whatnot. Because if we're not able to do that, then that puts us sort of behind. >> Right, right. >> So it's a sort of lot of phonetic innovation, but it's also exciting at the same time. >> Right, right, right. And just the whole concept too, where I think what's best practice quickly becomes expected baseline really, really fast. I mean, what's cutting edge, innovative now unfortunately or fortunately, that become the benchmark by which everything else is measured overnight. That's the thing that just amazes me, what was magical yesterday is just expected, boring behavior today. 
Alright good, so as we get to the end of the year a lot of exciting stuff, you guys said you're going to be at Reinvent, we will see you there. Anything else that you're looking forward to over the next couple months? >> Just, we're really excited about Reinvent's big show for us, and we'll have some good announcements around the show. And yeah, looking forward to just continuing to do what we've been doing and deliver more rally to our customers. >> Love it, just keep working hard. >> Yep. >> Alright. Arjit, hope your throat gets better before your big talk tomorrow. >> Yeah, that's right. >> Alright, thanks for stopping by Karthik, it was great to see you. >> Great to see you. >> I'm Jeff, you're watching theCUBE, we're at PagerDuty Summit at the Westin St. Francis in San Francisco. Thanks for watching, see you next time.

Published Date : Sep 11 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Arjit Mukarji | PERSON | 0.99+
Jeff | PERSON | 0.99+
Arijit Mukherji | PERSON | 0.99+
Karthik Rao | PERSON | 0.99+
2007 | DATE | 0.99+
Jeff Frick | PERSON | 0.99+
Europe | LOCATION | 0.99+
Arjit | PERSON | 0.99+
Signal FX | ORGANIZATION | 0.99+
Karthik | PERSON | 0.99+
three years | QUANTITY | 0.99+
Union Square | LOCATION | 0.99+
SignalFX | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
300 integrations | QUANTITY | 0.99+
second time | QUANTITY | 0.99+
yesterday | DATE | 0.99+
Pepsi | ORGANIZATION | 0.99+
Karthik Rau | PERSON | 0.99+
300+ integrations | QUANTITY | 0.99+
Jen | PERSON | 0.99+
tomorrow | DATE | 0.99+
SignalFx | ORGANIZATION | 0.99+
GDPR | TITLE | 0.99+
AMIA | ORGANIZATION | 0.99+
PagerDuty | ORGANIZATION | 0.99+
one system | QUANTITY | 0.99+
today | DATE | 0.98+
last month | DATE | 0.98+
dozens of teams | QUANTITY | 0.98+
AT&T Spark | ORGANIZATION | 0.98+
Slack | TITLE | 0.98+
one app | QUANTITY | 0.98+
Mukarji | PERSON | 0.98+
PagerDuty Summit '18 | EVENT | 0.98+
both | QUANTITY | 0.97+
Twitter | ORGANIZATION | 0.97+
Reinvent | ORGANIZATION | 0.97+
San Francisco | LOCATION | 0.97+
about 900 people | QUANTITY | 0.97+
PagerDuty Summit | EVENT | 0.97+
first time | QUANTITY | 0.96+
CUBE | ORGANIZATION | 0.96+
two levels | QUANTITY | 0.95+
two very different problems | QUANTITY | 0.95+
Westin St. Francis | LOCATION | 0.94+
PagerDuty Summit 2018 | EVENT | 0.94+
hundreds of teams | QUANTITY | 0.93+
three tiers | QUANTITY | 0.9+
days | QUANTITY | 0.9+
one | QUANTITY | 0.89+
5G | ORGANIZATION | 0.87+
One | QUANTITY | 0.86+
theCUBe | ORGANIZATION | 0.85+
next couple months | DATE | 0.84+
100x more | QUANTITY | 0.76+
billions of people | QUANTITY | 0.76+
end of | DATE | 0.71+
European | LOCATION | 0.67+
few months | QUANTITY | 0.63+
seconds | QUANTITY | 0.63+
few hours | QUANTITY | 0.62+
year | DATE | 0.61+
Union | ORGANIZATION | 0.59+
European | OTHER | 0.53+
theCUBE | ORGANIZATION | 0.52+

Abby Fuller, AWS | DockerCon 2018


 

>> Live from San Francisco, it's theCUBE covering DockerCon 18, brought to you by Docker and its ecosystem partners. >> Welcome back to theCUBE's coverage of DockerCon 2018. We are in San Francisco at Moscone, US. It's a spectacular day in San Francisco. It's a day to play hooky frankly, or play hooky and watch theCUBE. I'm Lisa Martin with John Troyer, and we're excited to welcome to theCUBE Abby Fuller, Developer Relations from AWS. Abby, great to have you here. >> Happy to be here. >> So you were a speaker at DockerCon 2018. Tell us a little bit about that and your role in Developer Relations. >> So I work in Developer Relations for AWS. So I used to be a devops engineer, and now I go around talking to customers and developers and other software engineers, and teaching them how to use things with AWS, or this morning it was teaching everyone how to build effective Docker images. >> So I read in your bio on the DockerCon website of the speakers that you're a container fan. We know you're a music fan, but you're also a container fan. What is it about that technology that you just go, "Oh, this is awesome, "and I can't wait to teach people "about the benefits of this"? >> So I switched over to container as a customer before I started working at AWS, and the biggest reasons for me, the first one was portability, so that I could do everything that I needed to run my application all in one place. So I think a big problem for a lot of developers is the whole what works on my machine? So being able to package everything together so that it worked on my machine, but also on a staging environment, a QA environment, and on your machine, that was the biggest thing for me. And that it removed some of the spaghetti code that came before, and it just made everything, it was all packaged nicely, I could deploy it a little bit more easily, a little bit faster, and I eliminated a lot of the why doesn't it work now when it worked before? 
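Abby's portability point, packaging everything so the image behaves the same on every machine, comes down to a few image-building habits: pin the base image tag, and clean up package caches in the same layer that created them. As a purely illustrative sketch (not anything from the talk), a couple of those checks can even be automated:

```python
# Illustrative only: a toy checker for two common Dockerfile pitfalls
# (unpinned base images, apt caches left behind in a layer).

def lint_dockerfile(text):
    """Return a list of warnings for common portability/bloat issues."""
    warnings = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("FROM"):
            image = line.split()[1]
            # ':latest' (or no tag at all) makes builds non-reproducible.
            if ":" not in image or image.endswith(":latest"):
                warnings.append(f"unpinned base image: {image}")
        if line.startswith("RUN") and "apt-get install" in line:
            # Cleaning the apt cache in a *later* layer doesn't shrink this one.
            if "rm -rf /var/lib/apt/lists" not in line:
                warnings.append("apt-get install without cache cleanup in the same RUN")
    return warnings

good = "FROM python:3.11-slim\nRUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*\n"
bad = "FROM ubuntu:latest\nRUN apt-get update && apt-get install -y curl\n"
print(lint_dockerfile(good))  # []
print(lint_dockerfile(bad))   # two warnings
```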
>> Abby, one of the paradoxes of where we are in 2018 is AWS has been around for a decade, but yet here at the show, about half the folks raised their hand to the question, this is your first DockerCon? Are you just getting started with Docker and containers? So as an evangelist, Evangelist Developer Relations, you're the front line of talking with people at the grassroots. So can you talk a little bit about some of the different personas you encounter? Are you meeting people who are just getting started with their container journey? Or are you spending a lot of time kind of finessing the details about that API, APIs and changes and things like that at AWS? >> I think my favorite part about talking to AWS customers is that you get the whole range, right? So you get people that are just starting and they wanna know how do I build a container? How do I run it? How do I start from zero? And then you get the people that have been doing it for maybe a year or maybe two years, and they're looking for like advanced black belt tips, and then you get the other group which is not everyone is building a greenfield application, so then you get a really interesting subset where they're trying to move over from the whole monolith to micro services story. So they're trying to containerize and kind of adopt agile containerize approaches as they're moving over, and I think the best part is being able to talk to the whole range 'cause then it's never boring. >> What are some of the big barriers that you see for organizations that are maybe on the very very beginning of the journey or maybe before it, when you're talking with customers or developers, what are some of the things that you're hearing them say, "Ah, but what about these? "How can you help me eliminate these challenges?" >> Two big ones for me. The first one is the organizational changes that go around the infrastructure change. So it doesn't always work to just containerize what you already had, and then call it a day. 
So a lot of people are decomposing, they're going with micro services at the same time as they're going with containers. And I think wrapping your head around that kind of decomposition is the first kind of big challenge. And I think that we really just had to educate better. So show people, so here are some ways that you can break your service up, here are some things to think about when you're figuring out service boundaries. And I think the other one is that they often want a little bit of help when they're getting started. So either educational resources or how can AWS manage part of their infrastructure? Will they focus on the container part? So it's really interesting and it runs a whole gamut. >> Abby, you in Developer Relations, I love the trend, the community orient and trend, they're great, of peers helping peers, you're out there, you're wearing a Bruce Springsteen shirt right now, you made a Wu Tang joke in your talk today which is something that one did not do a few years back, right? You had to kinda dress up, and you were usually a man, and you wore a tie. >> Got my blazer on today. >> You look very sharp. Don't get me wrong. But as you talk to people, one, what's your day like or week like? How many miles do you have this year? That's private. But also as people come up to you, what do they ask you? Are you a role model for folks? Do people come up and say, "How can I do this too?" >> Yeah, so miles for this year. I think like 175,000. >> Already just in June? >> Already this year. So, this is a lot of what I do. I talk to all kinds of customers. I do bigger events like this, I do meet-ups, I do user groups, I go to AWS summits, and dev days and builders days, and things like that. I meet with customers. So day-to-day changes everyday. I'm obviously big on Twitter, spend a lot of time tweeting on planes. It really depends. This is a lot of what I do and I think people, I don't think you can ever really call yourself a role model, right? 
I love showing people that there's pass into tech that didn't start off with a computer science degree, that there's tons of ways to participate and be part of the tech community, 'cause it's a great community. >> You're not just a talker, you're a coder too. >> Yeah, yeah, so every job before this one with the exception of my very first job which was in sales. I was a dev ops engineer right up until I took the job at AWS, and I like to think that I never left, I'm just no longer on call. But I build my own demos, I write my own blog posts, I do all my own slides and workshops, so still super active, just not on call, so it's the best of all the worlds. >> So you went to Tufts, you didn't major in computer science. >> No. >> You are, I would say, a role model. You might not consider yourself one-- >> Well you can say it, yeah. >> I can say it exactly. It's PC if I say it. But, one of the things that's exciting to have females on the show, and I geek out on this is, we don't have a lot of females in tech. I mean, I think the last stat that I saw recently was less than 25% of technical roles are held by women. What was your career path if we can kinda pivot on that for a second, 'cause I think that's quite interesting. And what are some of the things that you've said, "You know what, I don't care. "I enjoy this, I wanna do this,"? 'Cause in all circumstances you are a role model, but I'd love to understand some of the things you encountered, and maybe some of your advice to those that'll be following in your footsteps. >> Yeah, so I went to school for politics. Programming was a little bit of a side hobby before that, mostly of the how can I do this thing, do this thing that it's not supposed to be doing? So I did that, I went to school. I took a computer science class my very last semester in school. I did not know that it was a thing before then, so I'm I guess a little slow in the comp sci uptake. 
And I was like, oh wow cool, this is an awesome, this could be an awesome career, but I don't know how to get into it. So I was like okay, I'm gonna go to a startup, and I'm gonna do whatever. So I take a sales job. I did that for maybe nine or 10 months. And I started taking on side projects. So how to write email templates in HTML that I could use that directly showed an impact to my sales job. Then the startup, as startups do, got acquired. And as part of the acquisition I moved my little CRM engineering job to the product team. And then, I'm gonna be honest, I bothered the CTO a lot. And I learned side projects. I was like I've learned Python now, what can you have for me? So I basically bothered him a lot until he helped me do some projects, and totally old enough now to admit that he was very kind to take a chance on me. And then I worked hard. I did a lot of online classes. I read a lot of books. I read a lot of blogs. I'm a big proponent in learning by doing. So I still learn things the same way. I read about it, I decide that I wanna use it, I try it out, and then at the point where I get where I don't quite know what's happening, I go back to documentation. And that got me through a couple of devops jobs until I got to evangelism. And I think the biggest advice I have for people is it's okay to not know what you want right away which is how I have a politics degree. But you can work at it. And don't be afraid to have mentors and communities and peers that can help you 'cause it's the best way to participate, and it's actually whether you have a comp sci job or not, it's still the best way to participate, and that you can have, there are so many nontraditional paths to tech, and I think everyone is equally valuable, because I think I write better coming from a liberal arts degree than I would have otherwise. So I think every skill that you bring in is valuable. So once you figure out what you want, don't be afraid to ask for it. 
>> The thing I'm hearing here is persistence. And it just reminded me, a quick pivot, I hosted theCUBE at Women Transforming Technology just a couple weeks ago at VMWare, and they just made a massive investment, 15 million into a lab, a research lab at Stanford, to look at the barriers that women in tech are facing. And one of our guests, Pratima Rao Gluckman, just wrote a book called Nevertheless, She Persisted. It reminded me of you because that's one of the things that I'm hearing from you is that persistence that I think is a really unique thing there. Sorry, I just had to take a little side. >> I saw you looked that up. And actually I saw the title and I have not read it yet, but I have a flight back to New York after this so I'll have to find that. >> You've got time. >> Yeah. >> Over and over again as I talk with folks about IT and tech careers, right? It's that thinking expansively about your job, trying things, being a continuous learner, that is the thing that actually works. Maybe pivoting back to the tech for a sec then, obviously here container central, DockerCon 2018, Kubernetes actually was big news this morning at the keynote, a big announcement, how Docker EE is gonna connect to Amazon EKS among others, kind of being able to manage the Kubernetes clusters up there in the cloud. And EKS actually just had, it just had its general availability I believe, right? In the last week or so? >> Yeah, so, excited to see EKS in the keynote this morning. We're always happy to deepen our partnerships. Yeah, and we've been in preview since re:Invent, and then we announced the general availability of EKS, so Amazon Elastic Container Service for Kubernetes, long acronym. So EKS, we announced the GA last Tuesday. >> The interesting thing about AWS is somebody just compared it, I saw a tweet today, to an industrial supply store and it's a huge warehouse full of tools that you can use, and that includes containers. 
But for containers, the three pieces that are the largest are EKS, ECS, and Fargate. Can you kinda tease those out for us really briefly? >> Yeah so envision if you would a flow chart. So if you wanna run a managed container on AWS, first you pick your orchestration tool, so EKS or ECS. ECS is the one that we've been working on for quite a few years now, so Elastic Container Service. Once you've chosen your orchestration tool, for ECS you have another set of choices which is either to run your containers in the EC2 mode which is manager, cluster, infrastructure as well, so the underlying EC2 hosts. And Fargate mode, where you only manage everything at the container level and task definition level, so no cluster management. >> And that's all taken care of for you. >> That's all taken care of for you. So Fargate I think is not actually a service in the traditional way that we would say that ECS is a service, and more of like an underlying technology, so that's what enables you to manage everything at just the container level and not at the cluster level. But I think the best way of describing it is actually is, there's a really nice quote floating around that said, "When I ask someone for a sandwich, "they don't wanna know the whole sandwich logistics chain, "so how do I get turkey, how do I get cheese, "how do I get mayo on the bread, "they just want the sandwich." So Fargate for, I think, a lot of people, is the sandwich. So I just want the sandwich, just give me your container, don't worry about the rest. >> So we've already established Abby has a lot of miles already in half a year, so I'm thinking two things. One, we should travel with her 'cause we're probably gonna get free upgrades. And two, you speak with a lot of customers. So tell us about that customer feedback loop. >> Something that I really love about working at Amazon is that so much of our roadmap is driven by customer feedback. 
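The flow chart Abby walks through above, first pick an orchestration tool, then decide who manages the underlying hosts, can be written out in a few lines. This is only an illustrative sketch of the decision, not an AWS API; note that at the time of this interview Fargate was an ECS-only launch option:

```python
# Illustrative sketch of the decision flow Abby describes. The option
# names are real AWS services, but this function is not an AWS API.

def choose_container_setup(orchestrator, manage_hosts=True):
    """Pick an AWS managed-container setup from two questions."""
    if orchestrator == "kubernetes":
        return "EKS"  # managed Kubernetes control plane
    if orchestrator == "ecs":
        # Second choice: do you want to manage the EC2 cluster yourself,
        # or only work at the container/task-definition level?
        return "ECS on EC2" if manage_hosts else "ECS on Fargate"
    raise ValueError(f"unknown orchestrator: {orchestrator}")

print(choose_container_setup("kubernetes"))                # EKS
print(choose_container_setup("ecs", manage_hosts=False))   # ECS on Fargate
```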
So actually something that was really cool is that this morning, so ECS announced a daemon-scheduler, so run tasks one per host on every host in the cluster, so for things like metrics, containers, and log containers. And something that is so cool for me is that I asked for that as a customer, and I just watched us announced it this morning. It's incredible to see every single time that the feedback loop is closed, that people ask for it and then we build it. The same thing with EKS, right? We want you to have a great experience running your infrastructure on AWS, full stop. >> Can you give us an example of a customer that's really been impactful in terms of that feedback loop? One that really sticks out to you as a great hallmark of what you guys are enabling. >> I think that all of our customers are impactful in the feedback loop, right? Anyone from a really small startup to a really large enterprise. I think one that was really exciting to me was a very small Israeli startup. They went all in on managing no EC2 instances very quickly. They're called The Tree. So they were my customer speaker at the Tel Aviv summit, and they managed zero EC2 instances. So they have Fargate, they have Lambda, they managed no infrastructure themselves. And I just think it's so cool to watch people want things, and then adopt them so quickly. And the response on Twitter after the daemon-scheduler this morning is like, my favorite tweet was, "This is customer feedback done right." And I love seeing how happy people are when they ask for something or are saying, "Now that you've added that, "I can delete three Lambda functions "because you made it easy." And I love seeing feedback like that. So I think everyone is impactful, but that one stuck out to me as someone that adopted something incredibly quickly and have been so, they're just so happy to have a need solved for them. >> Well that's the best validation that you can get is through the voice of the customer. 
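The daemon-scheduler Abby mentions places exactly one copy of a task, typically a metrics or log collector, on every instance in the cluster, where the default replica strategy spreads a desired count across the instances. A toy sketch of the difference; this is illustrative placement logic only, not the real ECS scheduler:

```python
# Toy model of the two ECS scheduling strategies Abby contrasts:
# DAEMON  -> exactly one task per container instance (metrics/log sidecars)
# REPLICA -> a desired count spread across the cluster

def place_tasks(strategy, hosts, desired_count=None):
    if strategy == "DAEMON":
        return {host: 1 for host in hosts}      # one task per host, always
    if strategy == "REPLICA":
        placements = {host: 0 for host in hosts}
        for i in range(desired_count):          # simple round-robin spread
            placements[hosts[i % len(hosts)]] += 1
        return placements
    raise ValueError(f"unknown strategy: {strategy}")

hosts = ["i-aaa", "i-bbb", "i-ccc"]
print(place_tasks("DAEMON", hosts))        # {'i-aaa': 1, 'i-bbb': 1, 'i-ccc': 1}
print(place_tasks("REPLICA", hosts, 5))    # {'i-aaa': 2, 'i-bbb': 2, 'i-ccc': 1}
```

Adding a fourth instance to a daemon service automatically means a fourth copy of the task, which is exactly why it suits cluster-wide collectors.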
So to hear that must feel good that not only are we listening, but we're doing things right in a way that our customers are feeling how valuable they are to us. >> Happy customers are the best customers. >> They definitely are. >> Yeah. >> We learn a lot from the ones that aren't happy, and there's a lot of learnings there, but hearing that validation is icing on the cake. >> Always. >> Last question for you. With some of the announcements that came out today, and as this conference and its figure has grown tremendously, when I was walking out of the general session this morning, I took a photo because I don't think I've seen a general session room that big in a long time, and that was just at the Sapphire last week which has 20,000 attendees. I was impressed with how captivated the audience was. So last question, what excites you about some of the things that Docker announced today? >> So I think that's interesting. Something that's excited me in general is watching the community itself flourished. So there's many, there's Kubernetes CGroups, and there's user groups, the discussion online is always incredibly rich and vibrant, and there are so many people that are just so excited for anything. It's all companies building what they're looking for. And I love seeing things like the Docker Enterprise Edition announcement this morning where the demo is EKS, but I just love seeing customers get the choice to do whatever they want. They have all the options out there, and that you can see how much more rich and vibrant everything is. From even a couple years ago, there's more people every year, there's more sessions every year, the sessions are bigger every year. And I just love that. And I love seeing when people get so excited, and then seeing people that came to your talk two years ago, come back and give their own talk I think is amazing. >> Oh, talk about feedback. That must have felt really good. 
>> I think it's not a reflection on me, it's a reflection on the community. And it's a very supportive community, and it's a very excited and curious audience. So if you see their reception to other people that talk a lot being like, oh we're really happy to have you, then the next year you're like, well I have a story and I wanna tell it, so I'm gonna sit in my own session, and I think that's the best. >> Well Abby, it's been such a pleasure to have you on theCUBE, thank you. >> Thank you for having me. >> Thank you for stopping by. And your energy is infectious so you'll have to come back. >> Anytime. >> We wanna thank you for watching theCUBE. I'm Lisa Martin with John Troyer, live from San Francisco at DockerCon 2018. Stick around, we'll be right back after a short break. (upbeat music)

Published Date : Jun 13 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
AWS | ORGANIZATION | 0.99+
New York | LOCATION | 0.99+
Lisa Martin | PERSON | 0.99+
2018 | DATE | 0.99+
San Francisco | LOCATION | 0.99+
Amazon | ORGANIZATION | 0.99+
John Troyer | PERSON | 0.99+
Abby Fuller | PERSON | 0.99+
15 million | QUANTITY | 0.99+
Pratima Rao Gluckman | PERSON | 0.99+
Abby | PERSON | 0.99+
nine | QUANTITY | 0.99+
June | DATE | 0.99+
two years | QUANTITY | 0.99+
last week | DATE | 0.99+
10 months | QUANTITY | 0.99+
Python | TITLE | 0.99+
Docker | ORGANIZATION | 0.99+
20,000 attendees | QUANTITY | 0.99+
first | QUANTITY | 0.99+
two | QUANTITY | 0.99+
two things | QUANTITY | 0.99+
One | QUANTITY | 0.99+
a year | QUANTITY | 0.99+
EKS | ORGANIZATION | 0.99+
175,000 | QUANTITY | 0.99+
three pieces | QUANTITY | 0.99+
first job | QUANTITY | 0.99+
DockerCon 2018 | EVENT | 0.99+
two years ago | DATE | 0.99+
ECS | ORGANIZATION | 0.98+
less than 25% | QUANTITY | 0.98+
EC2 | TITLE | 0.98+
today | DATE | 0.98+
this year | DATE | 0.98+
half a year | QUANTITY | 0.98+
Fargate | ORGANIZATION | 0.98+
last Tuesday | DATE | 0.98+
first one | QUANTITY | 0.97+
ECS | TITLE | 0.97+
Moscone, US | LOCATION | 0.97+
Bruce Springsteen | PERSON | 0.97+
a day | QUANTITY | 0.97+
Twitter | ORGANIZATION | 0.97+
one | QUANTITY | 0.97+
one place | QUANTITY | 0.96+
Lambda | TITLE | 0.96+
Nevertheless, She Persisted | TITLE | 0.96+
couple weeks ago | DATE | 0.96+
this morning | DATE | 0.95+
next year | DATE | 0.94+
DockerCon 18 | EVENT | 0.94+

AI for Good Panel - Precision Medicine - SXSW 2017 - #IntelAI - #theCUBE


 

>> Welcome to the Intel AI Lounge. Today, we're very excited to share with you the Precision Medicine panel discussion. I'll be moderating the session. My name is Kay Erin. I'm the general manager of Health and Life Sciences at Intel. And I'm excited to share with you these three panelists that we have here. First is John Madison. He is a chief information medical officer and he is part of Kaiser Permanente. We're very excited to have you here. Thank you, John. >> Thank you. >> We also have Naveen Rao. He is the VP and general manager for the Artificial Intelligence Solutions at Intel. He's also the former CEO of Nervana, which was acquired by Intel. And we also have Bob Rogers, who's the chief data scientist at our AI solutions group. So, why don't we get started with our questions. I'm going to ask each of the panelists to talk, introduce themselves, as well as talk about how they got started with AI. So why don't we start with John? >> Sure, so can you hear me okay in the back? Can you hear? Okay, cool. So, I am a recovering evolutionary biologist and a recovering physician and a recovering geek. And I implemented the health record system for the first and largest region of Kaiser Permanente. And it's pretty obvious that most of the useful data in a health record lies in free text. So I started up a natural language processing team to be able to mine free text about a dozen years ago. So we can do things with that that you can't otherwise get out of health information. I'll give you an example. I read an article online from the New England Journal of Medicine about four years ago that said over half of all people who have had their spleen taken out were not properly vaccinated for a common form of pneumonia, and when your spleen's missing, you must have that vaccine or you die a very sudden death with sepsis. In fact, our medical director in Northern California's father died of that exact same scenario. 
So, when I read the article, I went to my structured data analytics team and to my natural language processing team and said please show me everybody who has had their spleen taken out and hasn't been appropriately vaccinated and we ran through about 20 million records in about three hours with the NLP team, and it took about three weeks with a structured data analytics team. That sounds counterintuitive but it actually happened that way. And it's not a competition for time only. It's a competition for quality and sensitivity and specificity. So we were able to identify all of our members who had their spleen taken out, who should've had a pneumococcal vaccine. We vaccinated them and there are a number of people alive today who otherwise would've died absent that capability. So people don't really commonly associate natural language processing with machine learning, but in fact, natural language processing relies heavily and is the first really, highly successful example of machine learning. So we've done dozens of similar projects, mining free text data in millions of records very efficiently, very effectively. But it really helped advance the quality of care and reduce the cost of care. It's a natural step forward to go into the world of personalized medicine with the arrival of a 100-dollar genome, which is actually what it costs today to do a full genome sequence. Microbiomics, that is the ecosystem of bacteria that are in every organ of the body actually. And we know now that there is a profound influence of what's in our gut and how we metabolize drugs, what diseases we get. You can tell in a five year old, whether or not they were born by a vaginal delivery or a C-section delivery by virtue of the bacteria in the gut five years later. 
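The free-text screen John describes above, every member with a splenectomy and no pneumococcal vaccination, can be sketched in miniature. Real clinical NLP has to handle negation, abbreviations, and synonyms across millions of notes; the toy matcher and sample notes below are purely illustrative:

```python
# Purely illustrative sketch of the kind of free-text screen John describes:
# flag members whose notes mention a splenectomy but no pneumococcal vaccine.
# Real clinical NLP handles negation, context, and synonymy; this is a toy.

SPLEEN_TERMS = ("splenectomy", "spleen removed", "spleen taken out")
VACCINE_TERMS = ("pneumococcal vaccine", "pneumovax", "ppsv23")

def needs_vaccine(note):
    text = note.lower()
    had_splenectomy = any(t in text for t in SPLEEN_TERMS)
    vaccinated = any(t in text for t in VACCINE_TERMS)
    return had_splenectomy and not vaccinated

# Hypothetical sample notes, not real patient data.
notes = {
    "member-1": "s/p splenectomy 2014; Pneumovax given 2015.",
    "member-2": "Spleen taken out after trauma. No vaccinations on record.",
    "member-3": "Routine visit, no surgical history.",
}
flagged = [m for m, note in notes.items() if needs_vaccine(note)]
print(flagged)  # ['member-2']
```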
So if you look at the complexity of the data that exists in the genome, in the microbiome, in the health record with free text and you look at all the other sources of data like this streaming data from my wearable monitor that I'm part of a research study on Precision Medicine out of Stanford, there is a vast amount of disparate data, not to mention all the imaging, that really can collectively produce much more useful information to advance our understanding of science, and to advance our understanding of every individual. And then we can do the mash up of a much broader range of science in health care with a much deeper sense of data from an individual and to do that with structured questions and structured data is very yesterday. The only way we're going to be able to disambiguate those data and be able to operate on those data in concert and generate real useful answers from the broad array of data types and the massive quantity of data, is to let loose machine learning on all of those data substrates. So my team is moving down that pathway and we're very excited about the future prospects for doing that. >> Yeah, great. I think that's actually some of the things I'm very excited about in the future with some of the technologies we're developing. My background, I started actually being fascinated with computation in biological forms when I was nine. Reading and watching sci-fi, I was kind of a big dork which I pretty much still am. I haven't really changed a whole lot. Just basically seeing that machines really aren't all that different from biological entities, right? We are biological machines and kind of understanding how a computer works and how we engineer those things and trying to pull together concepts that learn from biology into that has always been a fascination of mine. As an undergrad, I was in the EE, CS world. Even then, I did some research projects around that. 
I worked in the industry for about 10 years designing chips, microprocessors, various kinds of ASICs, and then actually went back to school, quit my job, got a Ph.D. in neuroscience, computational neuroscience, to specifically understand what's the state of the art. What do we really understand about the brain? And are there concepts that we can take and bring back? Inspiration's always been we want to... We watch birds fly around. We want to figure out how to make something that flies. We extract those principles, and then build a plane. Don't necessarily want to build a bird. And so Nervana really was the combination of all those experiences, bringing it together. Trying to push computation in a new direction. Now, as part of Intel, we can really add a lot of fuel to that fire. I'm super excited to be part of Intel in that the technologies that we were developing can really proliferate and be applied to health care, can be applied to Internet, can be applied to every facet of our lives. And some of the examples that John mentioned are extremely exciting right now and these are things we can do today. And the generality of these solutions are just really going to hit every part of health care. I mean from a personal viewpoint, my whole family are MDs. I'm sort of the black sheep of the family. I don't have an MD. And it's always been kind of funny to me that knowledge is concentrated in a few individuals. Like you have a rare tumor or something like that, you need the guy who knows how to read this MRI. Why? Why is it like that? Can't we encapsulate that knowledge into a computer or into an algorithm, and democratize it. And the reason we couldn't do it is we just didn't know how. And now we're really getting to a point where we know how to do that. And so I want that capability to go to everybody. It'll bring the cost of healthcare down. It'll make all of us healthier. That affects everything about our society. So that's really what's exciting about it to me. 
>> That's great. So, as you heard, I'm Bob Rogers. I'm chief data scientist for analytics and artificial intelligence solutions at Intel. My mission is to put powerful analytics in the hands of every decision maker and when I think about Precision Medicine, decision makers are not just doctors and surgeons and nurses, but they're also case managers and care coordinators and probably most of all, patients. So the mission is really to put powerful analytics and AI capabilities in the hands of everyone in health care. It's a very complex world and we need tools to help us navigate it. So my background, I started with a Ph.D. in physics and I was computer modeling stuff, falling into supermassive black holes. And there's a lot of applications for that in the real world. No, I'm kidding. (laughter) >> John: There will be, I'm sure. Yeah, one of these days. Soon as we have time travel. Okay so, I actually, about 1991, I was working on my postdoctoral research, and I heard about neural networks, these things that could compute the way the brain computes. And so, I started doing some research on that. I wrote some papers and actually, it was an interesting story. The problem that we solved that got me really excited about neural networks, which have become deep learning, my office mate would come in. He was this young guy who was about to go off to grad school. He'd come in every morning. "I hate my project." Finally, after two weeks, what's your project? What's the problem? It turns out he had to circle these little fuzzy spots on these images from a telescope. So they were looking for the interesting things in a sky survey, and he had to circle them and write down their coordinates all summer. Anyone want to volunteer to do that? No? Yeah, he was very unhappy. So we took the first two weeks of data that he created doing his work by hand, and we trained an artificial neural network to do his summer project and finished it in about eight hours of computing. 
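The summer-project anecdote is, in modern terms, a supervised learning loop: hand-label a small batch of examples, train a model on them, and let the model finish the job. Here is a minimal sketch of that idea, assuming nothing from the original work: synthetic 8x8 "telescope patches" and a single-neuron logistic classifier stand in for the real sky-survey images and the original neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patch(has_spot):
    """Synthetic 8x8 telescope patch: background noise, plus a Gaussian
    blob somewhere in the frame if a 'fuzzy spot' is present."""
    patch = rng.normal(0.0, 0.2, (8, 8))
    if has_spot:
        y, x = np.mgrid[0:8, 0:8]
        cy, cx = rng.uniform(2, 6, 2)
        patch += np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 4.0)
    return patch.ravel()

# "Two weeks of hand labels": a few hundred labeled examples.
X = np.array([make_patch(i % 2 == 0) for i in range(400)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(400)])
X_train, y_train = X[:300], y[:300]
X_test, y_test = X[300:], y[300:]

# A single-neuron logistic classifier trained by gradient descent.
w = np.zeros(64)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))        # sigmoid output
    grad_w = X_train.T @ (p - y_train) / len(y_train)   # gradient of log loss
    grad_b = np.mean(p - y_train)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# The model now labels the remaining "summer's worth" of patches itself.
pred = (1.0 / (1.0 + np.exp(-(X_test @ w + b)))) > 0.5
accuracy = np.mean(pred == y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

On synthetic data like this a single neuron separates the classes easily; the original problem needed a real multi-layer network, but the workflow, label a little, train, let the model do the rest, is the same.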
(crowd laughs) And so he was like yeah, this is amazing. I'm so happy. And we wrote a paper. I was the first author of course, because I was the senior guy at age 24. And he was second author. His first paper ever. He was very, very excited. So we have to fast forward about 20 years. His name popped up on the Internet. And so it caught my attention. He had just won the Nobel Prize in physics. (laughter) So that's where artificial intelligence will get you. (laughter) So thanks Naveen. Fast forwarding, I also developed some time series forecasting capabilities that allowed me to create a hedge fund that I ran for 12 years. After that, I got into health care, which really is the center of my passion. Applying health care to figuring out how to get all the data from all those siloed sources, put it into the cloud in a secure way, and analyze it so you can actually understand those cases that John was just talking about. How do you know that that person had had a splenectomy and that they needed to get that pneumovax? You need to be able to search all the data, so we used AI, natural language processing, machine learning, to do that and then two years ago, I was lucky enough to join Intel and, in the intervening time, people like Naveen actually thawed the AI winter and we're really in a spring of amazing opportunities with AI, not just in health care but everywhere, but of course, the health care applications are incredibly life saving and empowering so, excited to be here on this stage with you guys. >> I just want to cue off of your comment about the role of physics in AI and health care. So the field of microbiomics that I referred to earlier, bacteria in our gut. There's more bacteria in our gut than there are cells in our body. There's 100 times more DNA in that bacteria than there is in the human genome. And we're now discovering a couple hundred species of bacteria a year that have never been identified under a microscope just by their DNA. 
So it turns out the person who really catapulted the study and the science of microbiomics forward was an astrophysicist who did his Ph.D. in Stephen Hawking's lab on the collision of black holes and then subsequently, put the other team in a virtual reality, and he developed the first supercomputing center and so how did he get an interest in microbiomics? He has the capacity to do high performance computing and the kind of advanced analytics that are required to look at a 100 times the volume of 3.2 billion base pairs of the human genome that are represented in the bacteria in our gut, and that has unleashed the whole science of microbiomics, which is going to really turn a lot of our assumptions of health and health care upside down. >> That's great, I mean, that's really transformational. So a lot of data. So I just wanted to let the audience know that we want to make this an interactive session, so I'll be asking for questions in a little bit, but I will start off with one question so that you can think about it. So I wanted to ask you, it looks like you've been thinking a lot about AI over the years. And I wanted to understand, even though AI's just really starting in health care, what are some of the new trends or the changes that you've seen in the last few years that'll impact how AI's being used going forward? >> So I'll start off. There was a paper published by a guy by the name of Tegmark at Harvard last summer that, for the first time, explained why neural networks are efficient beyond any mathematical model we predict. And the title of the paper's fun. It's called Deep Learning Versus Cheap Learning. So there were two sort of punchlines of the paper. One is that the reason that mathematics doesn't explain the efficiency of neural networks is because there's a higher order of mathematics called physics. And the physics of the underlying data structures determined how efficient you could mine those data using machine learning tools. 
Much more so than any mathematical modeling. And so the second thing that was revealed in that paper is that the substrate of the data that you're operating on and the natural physics of those data have inherent levels of complexity that determine whether or not a 12-layer neural net will get you where you want to go really fast, because when you do the modeling, for those math geeks in the audience, it's a factorial. So if there's 12 layers, there's 12 factorial permutations of different ways you could sequence the learning through those data. When you have 140 layers of a neural net, it's a much, much, much bigger number of permutations and so you end up being hardware-bound. And so, what Max Tegmark basically said is you can determine whether to do deep learning or cheap learning based upon the underlying physics of the data substrates you're operating on and have a good insight into how to optimize your hardware and software approach to that problem. >> So another way to put that is that neural networks represent the world in the way the world is sort of built. >> Exactly. >> It's kind of hierarchical. It's funny because, sort of in retrospect, like oh yeah, that kind of makes sense. But when you're thinking about it mathematically, we're like well, anything... A neural net can represent any mathematical function, therefore, it's fully general. And that's the way we used to look at it, right? So now we're saying, well actually decomposing the world into different types of features that are layered upon each other is actually a much more efficient, compact representation of the world, right? I think this is actually, precisely the point of kind of what you're getting at. What's really exciting now is that what we were doing before was sort of building these bespoke solutions for different kinds of data. NLP, natural language processing. 
There's a whole field, 25 plus years of people devoted to figuring out features, figuring out what structures make sense in this particular context. Those didn't carry over at all to computer vision. Didn't carry over at all to time series analysis. Now, with neural networks, we've seen it at Nervana, and now part of Intel, solving customers' problems. We apply a very similar set of techniques across all these different types of data domains and solve them. All data in the real world seems to be hierarchical. You can decompose it into this hierarchy. And it works really well. Our brains are actually general structures. As a neuroscientist, you can look at different parts of your brain and there are differences. Something that takes in visual information, versus auditory information is slightly different but they're much more similar than they are different. So there is something invariant, something very common between all of these different modalities and we're starting to learn that. And this is extremely exciting to me trying to understand the biological machine that is a computer, right? We're figuring it out, right? >> One of the really fun things that Ray Kurzweil likes to talk about is, and it falls in the genre of biomimicry, and how we actually replicate biologic evolution in our technical solutions so if you look at, and we're beginning to understand more and more how real neural nets work in our cerebral cortex. And it's sort of a pyramid structure so that the first pass of a broad base of analytics, it gets constrained to the next pass, gets constrained to the next pass, which is how information is processed in the brain. 
So we're discovering increasingly that what we've been evolving towards, in terms of architectures of neural nets, is approximating the architecture of the human cortex and the more we understand the human cortex, the more insight we get to how to optimize neural nets, so when you think about it, with millions of years of evolution of how the cortex is structured, it shouldn't be a surprise that the optimization protocols, if you will, in our genetic code are profoundly efficient in how they operate. So there's a real role for looking at biologic evolutionary solutions, vis a vis technical solutions, and there's a friend of mine who worked with George Church at Harvard and actually published a book on biomimicry and they wrote the book completely in DNA so if all of you have your home DNA decoder, you can actually read the book on your DNA reader, just kidding. >> There's actually a start up I just saw in the-- >> Read-Write DNA, yeah. >> Actually it's a... He writes something. What was it? (response from crowd member) Yeah, they're basically encoding information in DNA as a storage medium. (laughter) The company, right? >> Yeah, that same friend of mine who coauthored that biomimicry book in DNA also did the estimate of the density of information storage. So a cubic centimeter of DNA can store an exabyte of data. I mean that's mind blowing. >> Naveen: Highly done soon. >> Yeah that's amazing. Also you hit upon a really important point there, that one of the things that's changed is... Well, there are two major things that have changed in my perception from let's say five to 10 years ago, when we were using machine learning. You could use data to train models and make predictions to understand complex phenomena. But they had limited utility and the challenge was that if I'm trying to build on these things, I had to do a lot of work up front. It was called feature engineering. 
I had to do a lot of work to figure out what are the key attributes of that data? What are the 10 or 20 or 100 pieces of information that I should pull out of the data to feed to the model, and then the model can turn it into a predictive machine. And so, what's really exciting about the new generation of machine learning technology, and particularly deep learning, is that it can actually learn from example data those features without you having to do any preprogramming. That's why Naveen is saying you can take the same sort of overall approach and apply it to a bunch of different problems. Because you're not having to fine tune those features. So at the end of the day, the two things that have changed to really enable this evolution is access to more data, and I'd be curious to hear from you where you're seeing data come from, what are the strategies around that. So access to data, and I'm talking millions of examples. So 10,000 examples most times isn't going to cut it. But millions of examples will do it. And then, the other piece is the computing capability to actually take millions of examples and optimize this algorithm in a single lifetime. I mean, back in '91, when I started, we literally would have thousands of examples and it would take overnight to run the thing. So now in the world of millions, and you're putting together all of these combinations, the computing has changed a lot. I know you've made some revolutionary advances in that. But I'm curious about the data. Where are you seeing interesting sources of data for analytics? >> So I do some work in the genomics space and there are more viable permutations of the human genome than there are people who have ever walked the face of the earth. And the polygenic determination of phenotypic expression, the translation of what our genome does to us in our physical experience in health and disease, is determined by many, many genes and the interaction of many, many genes and how they are up and down regulated. 
And the complexity of disambiguating which 27 genes are affecting your diabetes and how are they up and down regulated by different interventions is going to be different than his. It's going to be different than his. And we already know that there's four or five distinct genetic subtypes of type II diabetes. So physicians still think there's one disease called type II diabetes. There's actually at least four or five genetic variants that have been identified. And so, when you start thinking about disambiguating, particularly when we don't know what 95 percent of DNA does still, what actually is the underlying cause, it will require this massive capability of developing these feature vectors, sometimes intuiting it, if you will, from the data itself. And other times, taking what's known knowledge to develop some of those feature vectors, and be able to really understand the interaction of the genome and the microbiome and the phenotypic data. So the complexity is high and because the variation complexity is high, you do need these massive numbers. Now I'm going to make a very personal pitch here. So forgive me, but if any of you have any role in policy at all, let me tell you what's happening right now. The Genetic Information Nondiscrimination Act, so called GINA, written by a friend of mine, passed a number of years ago, says that no one can be discriminated against for health insurance based upon their genomic information. That's cool. That should allow all of you to feel comfortable donating your DNA to science right? Wrong. You are 100% unprotected from discrimination for life insurance, long term care and disability. And it's being practiced legally today and there's legislation in the House, in mark up right now to completely undermine the existing GINA legislation and say that whenever there's another applicable statute like HIPAA, that GINA is irrelevant, that none of the fines and penalties are applicable at all. 
So we need a ton of data to be able to operate on. We will not be getting a ton of data to operate on until we have the kind of protection we need to tell people, you can trust us. You can give us your data, you will not be subject to discrimination. And that is not the case today. And it's being further undermined. So I want to make a plea to any of you that have any policy influence to go after that because we need this data to help the understanding of human health and disease and we're not going to get it when people look behind the curtain and see that discrimination is occurring today based upon genetic information. >> Well, I don't like the idea of being discriminated against based on my DNA. Especially given how little we actually know. There's so much complexity in how these things unfold in our own bodies, that I think anything that's being done is probably childishly immature and oversimplifying. So it's pretty rough. >> I guess the translation here is that we're all unique. It's not just a Disney movie. (laughter) We really are. And I think one of the strengths that I'm seeing, kind of going back to the original point, of these new techniques is it's going across different data types. It will actually allow us to learn more about the uniqueness of the individual. It's not going to be just from one data source. We're collecting data from many different modalities. We're collecting behavioral data from wearables. We're collecting things from scans, from blood tests, from genome, from many different sources. The ability to integrate those into a unified picture, that's the important thing that we're getting toward now. That's what I think is going to be super exciting here. Think about it, right. I can tell you to visualize a coin, right? You can visualize a coin. Not only do you visualize it. You also know what it feels like. You know how heavy it is. You have a mental model of that from many different perspectives. 
And if I take away one of those senses, you can still identify the coin, right? If I tell you to put your hand in your pocket, and pick out a coin, you probably can do that with 100% reliability. And that's because we have this generalized capability to build a model of something in the world. And that's what we need to do for individuals is actually take all these different data sources and come up with a model for an individual and you can actually then say what drug works best on this. What treatment works best on this? It's going to get better with time. It's not going to be perfect, because this is what a doctor does, right? A doctor who's very experienced, you're a practicing physician right? Back me up here. That's what you're doing. You basically have some categories. You're taking information from the patient when you talk with them, and you're building a mental model. And you apply what you know can work on that patient, right? >> I don't have clinic hours anymore, but I do take care of many friends and family. (laughter) >> You used to, you used to. >> I practiced for many years before I became a full-time geek. >> I thought you were a recovering geek. >> I am. (laughter) I do more policy now. >> He's off the wagon. >> I just want to take a moment and see if there's anyone from the audience who would like to ask, oh. Go ahead. >> We've got a mic here, hang on one second. >> I have tons and tons of questions. (crosstalk) Yes, so first of all, the microbiome and the genome are really complex. You already hit on that. Yet most of the studies we do are small scale and we have difficulty repeating them from study to study. How are we going to reconcile all that and what are some of the technical hurdles to get to the vision that you want? >> So primarily, it's been the cost of sequencing. Up until a year ago, it's $1000, true cost. Now it's $100, true cost. And so that barrier is going to enable fairly pervasive testing. 
It's not a real competitive market because there's one sequencer that is way ahead of everybody else. So the price is not $100 yet. The cost is below $100. So as soon as there's competition to drive the cost down, and hopefully, as soon as we all have the protection we need against discrimination, as I mentioned earlier, then we will have large enough sample sizes. And so, it is our expectation that we will be able to pool data from local sources. I chair the e-health work group at the Global Alliance for Genomics and Health which is working on this very issue. And rather than pooling all the data into a single, common repository, the strategy, and we're developing our five-year plan in a month in London, but the goal is to have a federation of essentially credentialed data enclaves. That's a formal method. HHS already does that so you can get credentialed to search all the data that Medicare has on people that's been deidentified according to HIPAA. So we want to provide the same kind of service with appropriate consent, at an international scale. And there's a lot of nations that are talking very much about data nationality so that you can't export data. So this approach of a federated model to get at data from all the countries is important. The other thing is blockchain technology is going to be very profoundly useful in this context. So David Haussler of UC Santa Cruz is right now working on a protocol using an open blockchain, a public ledger, where you can put out... So for any typical cancer, you may have a half dozen, what are called somatic variants. Cancer is a genetic disease so what has mutated to cause it to behave like a cancer? And if we look at those biologically active somatic variants, publish them on a blockchain that's public, so there's not enough data there to reidentify the patient. 
But if I'm a physician treating a woman with breast cancer, rather than say what's the protocol for treating a 50-year-old woman with this cell type of cancer, I can say show me all the people in the world who have had this cancer at the age of 50, with these exact six somatic variants. Find the 200 people worldwide with that. Ask them for consent through a secondary mechanism to donate everything about their medical record, pool that information for the cohort of 200 that exactly resembles the one sitting in front of me, and find out, of the 200 ways they were treated, what got the best results. And so, that's the kind of future where a distributed, federated architecture will allow us to query and obtain a very, very relevant cohort, so we can basically be treating patients like mine, sitting right in front of me. Same thing applies for establishing research cohorts. There's some very exciting stuff at the convergence of big data analytics, machine learning, and blockchain. >> And this is an area that I'm really excited about and I think we're excited about generally at Intel. They actually have something called the Collaborative Cancer Cloud, which is this kind of federated model. We have three different academic research centers. Each of them has a very sizable and valuable collection of genomic data with phenotypic annotations. So you know, pancreatic cancer, colon cancer, et cetera, and we've actually built a secure computing architecture that can allow a person who's given the right permissions by those organizations to ask a specific question of specific data without ever sharing the data. So the idea is my data's really important to me. It's valuable. I want us to be able to do a study that gets the number from the 20 pancreatic cancer patients in my cohort, up to the 80 that we have in the whole group. But I can't do that if I'm going to just spill my data all over the world. And there are HIPAA and compliance reasons for that. 
There are business reasons for that. So what we've built at Intel is this platform that allows you to do different kinds of queries on this genetic data. And reach out to these different sources without sharing it. And then, the work that I'm really involved in right now and that I'm extremely excited about... This also touches on something that both of you said is it's not sufficient to just get the genome sequences. You also have to have the phenotypic data. You have to know what cancer they've had. You have to know that they've been treated with this drug and they've survived for three months or that they had this side effect. That clinical data also needs to be put together. It's owned by other organizations, right? Other hospitals. So the broader generalization of the Collaborative Cancer Cloud is something we call the data exchange. And it's a misnomer in a sense that we're not actually exchanging data. We're doing analytics on aggregated data sets without sharing it. But it really opens up a world where we can have huge populations and big enough amounts of data to actually train these models and draw the thread in. Of course, that really then hits home for the techniques that Nervana is bringing to the table, and of course-- >> Stanford's one of your academic medical centers? >> Not for that Collaborative Cancer Cloud. >> The reason I mentioned Stanford is because the reason I'm wearing this FitBit is because I'm a research subject at Mike Snyder's, the chair of genetics at Stanford, iPOP, integrative personal omics profile. So I was fully sequenced five years ago and I get four full microbiomes. My gut, my mouth, my nose, my ears. Every three months and I've done that for four years now. And about a pint of blood. And so, to your question of the density of data, so a lot of the problem with applying these techniques to health care data is that it's basically a sparse matrix and there's a lot of discontinuities in what you can find and operate on. 
So what Mike is doing with the iPOP study is much the same as you described. Creating a highly dense longitudinal set of data that will help us mitigate the sparse matrix problem. (low volume response from audience member) Pardon me. >> What's that? (low volume response) (laughter) >> Right, okay. >> John: Lost the stool sample. That's got to be a new one I've heard now. >> Okay, well, thank you so much. That was a great question. So I'm going to repeat this and ask if there's another question. You want to go ahead? >> Hi, thanks. So I'm a journalist and I report a lot on these neural networks, a system that's better at reading mammograms than your human radiologists. Or a system that's better at predicting which patients in the ICU will get sepsis. These sort of fascinating academic studies that I don't really see being translated very quickly into actual hospitals or clinical practice. Seems like a lot of the problems are regulatory, or liability, or human factors, but how do you get past that and really make this stuff practical? >> I think there's a few things that we can do there and I think the proof points of the technology are really important to start with in this specific space. In other places, sometimes, you can start with other things. But here, there's a real confidence problem when it comes to health care, and for good reason. We have doctors trained for many, many years. School and then residencies and other kinds of training. Because we are really, really conservative with health care. So we need to make sure that technology's well beyond just the paper, right? These papers are proof points. They get people interested. They even fuel entire grant cycles sometimes. And that's what we need to happen. It's just an inherent problem, it's going to take a while. To get those things to a point where it's like well, I really do trust what this is saying. And I really think it's okay to now start integrating that into our standard of care. 
I think that's where you're seeing it. It's frustrating for all of us, believe me. I mean, like I said, I think personally one of the biggest things, I want to have an impact. Like when I go to my grave, is that we used machine learning to improve health care. We really do feel that way. But it's just not something we can do very quickly and as a business person, I don't actually look at those use cases right away because I know the cycle is just going to be longer. >> So to your point, the FDA, for about four years now, has understood that the process that has been given to them by their board of directors, otherwise known as Congress, is broken. And so they've been very actively seeking new models of regulation and what's really forcing their hand is regulation of devices and software because, in many cases, there are black box aspects of that and there's a black box aspect to machine learning. Historically, Intel and others are making inroads into providing some sort of traceability and transparency into what happens in that black box rather than say, overall we get better results but once in a while we kill somebody. Right? So there is progress being made on that front. And there's a concept that I like to use. Everyone knows Ray Kurzweil's book The Singularity Is Near? Well, I like to think that diadarity is near. And the diadarity is where you have human transparency into what goes on in the black box and so maybe Bob, you want to speak a little bit about... You mentioned that, in a prior discussion, that there's some work going on at Intel there. >> Yeah, absolutely. So we're working with a number of groups to really build tools that allow us... In fact Naveen probably can talk in even more detail than I can, but there are tools that allow us to actually interrogate machine learning and deep learning systems to understand, not only how they respond to a wide variety of situations but also where are there biases? 
I mean, one of the things that's shocking is that if you look at the clinical studies that our drug safety rules are based on, 50 year old white guys are the peak of that distribution, which I don't see any problem with that, but some of you out there might not like that if you're taking a drug. So yeah, we want to understand what are the biases in the data, right? And so, there's some new technologies. There's actually some very interesting data-generative technologies. And this is something I'm also curious what Naveen has to say about, that you can generate from small sets of observed data, much broader sets of varied data that help probe and fill in your training for some of these systems that are very data dependent. So that takes us to a place where we're going to start to see deep learning systems generating data to train other deep learning systems. And they start to sort of go back and forth and you start to have some very nice ways to, at least, expose the weakness of these underlying technologies. >> And that feeds back to your question about regulatory oversight of this. And there's the fascinating, but little known origin of why very few women are in clinical studies. Thalidomide causes birth defects. So rather than say pregnant women can't be enrolled in drug trials, they said any woman who is at risk of getting pregnant cannot be enrolled. So there was actually a scientific meritorious argument back in the day when they really didn't know what was going to happen post-thalidomide. So it turns out that the adverse, unintended consequence of that decision was we don't have data on women and we know in certain drugs, like Xanax, that the metabolism is so much slower, that the typical dosing of Xanax is women should be less than half of that for men. And a lot of women have had very serious adverse effects by virtue of the fact that they weren't studied. So the point I want to illustrate with that is that regulatory cycles... 
So people have known for a long time that was like a bad way of doing regulations. It should be changed. It's only recently getting changed in any meaningful way. So regulatory cycles and legislative cycles are incredibly slow. The rate of growth in technology is exponential. And so there's an impedance mismatch between the cycle time for regulation and the cycle time for innovation. And what we need to do... I'm working with the FDA. I've done four workshops with them on this very issue. Is that they recognize that they need to completely revitalize their process. They're very interested in doing it. They're not resisting it. People think, oh, they're bad, the FDA, they're resisting. Trust me, there's nobody on the planet who wants to revise these review processes more than the FDA itself. And so they're looking at models and what I recommended is global cloud sourcing and the FDA could shift from a regulatory role to one of doing two things, assuring the people who do their reviews are competent, and assuring that their conflicts of interest are managed, because if you don't have a conflict of interest in this very interconnected space, you probably don't know enough to be a reviewer. So there has to be a way to manage the conflict of interest and I think those are some of the key points that the FDA is wrestling with because there's type one and type two errors. If you underregulate, you end up with another thalidomide and people born without fingers. If you overregulate, you prevent life saving drugs from coming to market. So striking that balance across all these different technologies is extraordinarily difficult. If it were easy, the FDA would've done it four years ago. It's very complicated. >> Jumping on that question, so all three of you are in some ways entrepreneurs, right? Within your organization or started companies. 
And I think it would be good to talk a little bit about the business opportunity here, where there's a huge ecosystem in health care, different segments, biotech, pharma, insurance payers, etc. Where do you see the ripe opportunity, or which industry is ready to really take this on and make AI the competitive advantage? >> Well, the last question also included why aren't you using the result of the sepsis detection. We do. There were six or seven published ways of doing it. We did our own data, looked at it, and we found a way that was superior to all the published methods, and we apply that today, so we are actually using that technology to change clinical outcomes. As far as where the opportunities are... So it's interesting. Because if you look at what's going to be here in three years, we're not going to be using those big data analytics models for sepsis that we are deploying today, because we're just going to be getting a tiny aliquot of blood, looking for the DNA or RNA of any potential infection, and we won't have to infer that there's a bacterial infection from all these other ancillary, secondary phenomena. We'll see if the DNA's in the blood. So things are changing so fast that the opportunities people need to look for are the generalizable and sustainable kinds of wins that are going to lead to a revenue cycle that justifies venture capital investing. So there are a lot of interesting opportunities in the space. But I think some of the biggest opportunities relate to what Bob has talked about in bringing many different disparate data sources together and really looking for things that are not comprehensible in the human brain or in traditional analytic models. >> I think we also have to look a little bit beyond direct care. We're talking about policy and how we set up standards, these kinds of things. That's one area that's going to drive innovation forward. I completely agree with that. Direct care is one piece.
How do we scale out many of the knowledge kinds of things that are embedded in one person's head and get them out to the world, democratize that? Then there's also development. The underlying technologies of medicine, right? Pharmaceuticals. The traditional way that pharmaceuticals are developed is actually kind of funny, right? A lot of it was started just by chance. Penicillin, a very famous story, right? It's not that different today, unfortunately. It's conceptually very similar. Now we've got more science behind it. We talk about domains and interactions, these kinds of things, but fundamentally, the problem is what we in computer science call NP-hard: it's too difficult to model. You can't solve it analytically. And this is true for all these kinds of natural sorts of problems, by the way. And so there's a whole field around this, molecular dynamics and modeling these sorts of things, that is actually being driven forward by these AI techniques. Because it turns out, our brain doesn't do magic. It actually doesn't solve these problems. It approximates them very well. And experience allows you to approximate them better and better. Actually, it goes a little bit to what you were saying before, about simulations and forming your own networks and training off each other. There are these emerging dynamics. You can simulate steps of physics. And you come up with a system that's much too complicated to ever solve. Three pool balls on a table is one such system. It seems pretty simple. You know how to model that, but it actually turns out you can't predict where a ball is going to be once you inject some energy into that table. So something that simple is already too complex. So neural network techniques actually allow us to start making those NP-hard problems tractable. And things like molecular dynamics, and actually understanding how different medications and genetics will interact with each other, is something we're seeing today.
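The pool-ball point, that even a trivially simple dynamical system defeats long-range prediction, can be illustrated with an even smaller toy. This is a minimal sketch of our own (the one-dimensional logistic map, not the panel's billiards example): two runs whose starting points differ by one part in a billion soon disagree completely.

```python
# Chaotic sensitivity in one dimension: the logistic map x -> 4x(1 - x).
# Two trajectories that start almost identically end up at completely
# different values, so long-range prediction is hopeless in practice.
def logistic_trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

baseline = logistic_trajectory(0.3, 60)
perturbed = logistic_trajectory(0.3 + 1e-9, 60)  # one part in a billion

largest_gap = max(abs(a - b) for a, b in zip(baseline, perturbed))
print(f"largest divergence over 60 steps: {largest_gap:.3f}")
```

The perturbation roughly doubles with each step, so within a few dozen iterations it is as large as the attractor itself; that is the same mechanism that makes the pool table, and molecular dynamics, analytically intractable and pushes us toward learned approximations.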
And so I think there's a huge opportunity there. We've actually worked with customers in this space. And I'm seeing it. Roche, for example, is acquiring a few different companies in this space. They really want to drive it forward, using big data to drive drug development. It's kind of counterintuitive. I never would've thought it had I not seen it myself. >> And there's a big related challenge. Because in personalized medicine, there are smaller and smaller cohorts of people who will benefit from a drug that still takes two billion dollars on average to develop. That is unsustainable. So there's an economic imperative of overcoming the cost and the cycle time for drug development. >> I want to take a go at this question a little bit differently, thinking about not so much where are the industry segments that can benefit from AI, but what are the kinds of applications that I think are most impactful. So if this is what a skilled surgeon needs to know at a particular time to care properly for a patient, this area here is where most surgeons are. They are as close to the maximum knowledge and ability to assimilate as they can be. So it's possible to build complex AI that can pick up on that one little thing and move them up to here. But it's not a gigantic accelerator, amplifier of their capability. But think about other actors in health care. I mentioned a couple of them earlier. Who do you think is the least trained actor in health care? >> John: Patients. >> Yes, the patients. The patients are really very poorly trained, including me. I'm abysmal at figuring out who to call and where to go. >> Naveen: You know as much as the doctor, right? (laughing) >> Yeah, that's right. >> My doctor friends always hate that. Know your diagnosis, right? >> Yeah, Dr. Google knows.
So the opportunities that I see that are really, really exciting are when you take an AI agent, like sometimes I like to call it a contextually intelligent agent, or a CIA, and apply it to a problem where a patient has a complex future ahead of them that they need help navigating. And you use the AI to help them work through it. Post-operative, you've got PT, you've got drugs, you've got to be looking for side effects. An agent can actually help you navigate. It's like your own personal GPS for health care. So it's giving you the information that you need about you for your care. That's my definition of Precision Medicine. And it can include genomics, of course. But it's much bigger. It's that broader picture, and I think that sort of agent way of thinking about things, and filling in the gaps where there's less training and more opportunity, is very exciting. >> Great start up idea right there, by the way. >> Oh yes, right. We'll meet you all out back for the next start up. >> I had a conversation with the head of the American Association of Medical Specialties just a couple of days ago. And what she was saying, and I'm aware of this phenomenon, is that all of the medical specialists are saying, you're killing us with these stupid board recertification trivia tests that you're giving us. So if you're a cardiologist, you have to remember something that happens in one in 10 million people, right? And they're saying that's irrelevant now, because we've got advanced decision support coming. We have these kinds of analytics coming. Precisely what you're saying. So it's human augmentation of decision support that is coming at blazing speed toward health care. So in that context, it's much more important that you have a basic foundation, you know how to think, you know how to learn, and you know where to look. So we're going to be human-augmented learning systems much more so than in the past. And so the whole recertification process is being revised right now.
(inaudible audience member speaking) Speak up, yeah. (person speaking) >> What makes it fathomable is that you can-- (audience member interjects inaudibly) >> Sure. She was saying that our brain is really complex and large and even our brains don't know how our brains work, so... are there ways to-- >> What hope do we have, kind of thing? (laughter) >> It's a metaphysical question. >> It circles all the way down, exactly. It's a great quote. I mean basically, you can decompose every system. Every complicated system can be decomposed into simpler, emergent properties. You lose something perhaps with each of those, but you get enough to actually understand most of the behavior. And that's really how we understand the world. And that's what we've learned in the last few years that neural network techniques can allow us to do. And that's why our brain can understand our brain. (laughing) >> Yeah, I'd recommend reading Chris Farley's last book, because he addresses that issue in there very elegantly. >> Yeah, we're seeing some really interesting technologies emerging right now where neural network systems are actually connecting other neural network systems in networks. You can see some very compelling behavior, because one of the ways I like to distinguish AI from traditional analytics is this: we used to have question-answering systems. I used to query a database and create a report to find out how many widgets I sold. Then I started using regression or machine learning to classify complex situations, from this is one of these and that's one of those. And then as we've moved more recently, we've got these AI-like capabilities, like being able to recognize that there's a kitty in the photograph. But if you think about it, if I were to show you a photograph that happened to have a cat in it, and I said, what's the answer, you'd look at me like, what are you talking about? I have to know the question.
So where we're cresting with these connected sets of neural systems, and with AI in general, is that the systems are starting to be able to, from the context, understand what the question is. Why would I be asking about this picture? I'm a marketing guy, and I'm curious about what Legos are in the thing, or what kind of cat it is. So it's being able to ask a question, and then take these question-answering systems and actually apply them. So it's this ability to understand context and ask questions that we're starting to see emerge from these more complex hierarchical neural systems. >> There's a person dying to ask a question. >> Sorry. You have hit on several different topics that all coalesce together. You mentioned personalized models. You mentioned AI agents that could help you as you're going through a transitionary period. You mentioned data sources, especially across long time periods. Who today has access to enough data to make meaningful progress on that, not just when you're dealing with an issue, but for day-to-day improvement of your life and your health? >> Go ahead, great question. >> That was a great question. And I don't think we have a good answer to it. (laughter) I'm sure John does. Well, I think every large healthcare organization and various healthcare consortiums are working very hard to achieve that goal. The problem remains in creating semantic interoperability. So I spent a lot of my career working on semantic interoperability. And the problem is that if you don't have well-defined, or self-defined, data, and if you don't have well-defined and documented metadata, and you start operating on it, it's real easy to reach false conclusions, and I can give you a classic example. It's well known, with hundreds of studies looking at it, when you give an antibiotic before surgery and how effective it is in preventing a post-op infection. Simple question, right?
So most of the literature, done prospectively, was done in institutions where they had small sample sizes. So if you pool that, you get a little bit more noise, but you get a more confirming answer. What was done at a very large, not my own, but a very large institution... I won't name them for obvious reasons, but they pooled lots of data from lots of different hospitals, where the data definitions and the metadata were different. Two examples. When did they indicate the antibiotic was given? Was it when it was ordered, dispensed from the pharmacy, delivered to the floor, brought to the bedside, put in the IV, or when the IV starts flowing? Different hospitals used a different metric of when it started. When did surgery occur? When they were wheeled into the OR, when they were prepped and draped, when the first incision occurred? All different. And they concluded quite dramatically that it didn't matter when you gave the pre-op antibiotic and whether or not you get a post-op infection. And everybody who was intimate with the prior studies just completely ignored and discounted that study. It was wrong. And it was wrong because of the lack of commonality and normalization of data definitions and metadata definitions. So because of that, this problem is much more challenging than you would think. If it were as easy as putting all these data together and operating on them, normalizing and operating on them, we would've done that a long time ago. Semantic interoperability remains a big problem and we have a lot of heavy lifting ahead of us. I'm working with the Global Alliance for Genomics and Health, for example. There are like 30 different major ontologies for how you represent genetic information. And different institutions are using different ones, in different ways, in different versions, over different periods of time. That's a mess. >> Are all those issues applicable when you're talking about a personalized data set versus a population?
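The antibiotic-timing failure just described is, at bottom, a normalization problem: different sites recorded different workflow events under the same column name. A minimal sketch of the defensive fix, with hypothetical site and field names (not any real hospital schema), maps each site's recorded event onto one canonical definition before pooling, and excludes incomparable records rather than guessing:

```python
# Hypothetical sketch: different hospitals record "antibiotic given" at
# different workflow steps. Before pooling, map each site's records onto
# one canonical event (here: the moment the IV starts flowing), and
# refuse to pool records where that event was never captured.

CANONICAL_EVENT = "iv_start"

# Per-site mapping from the site's recorded field to what it actually means.
SITE_EVENT_MAP = {
    "hospital_a": "iv_start",           # records the moment the IV flows
    "hospital_b": "pharmacy_dispense",  # only records dispensing time
}

def canonical_antibiotic_time(site, record):
    """Return the canonical event time, or None if this site never
    records the canonical event (so the record must not be pooled)."""
    recorded_field = SITE_EVENT_MAP[site]
    if recorded_field != CANONICAL_EVENT:
        return None  # incomparable definition: exclude, don't guess
    return record[recorded_field]

print(canonical_antibiotic_time("hospital_a", {"iv_start": "07:42"}))          # pooled
print(canonical_antibiotic_time("hospital_b", {"pharmacy_dispense": "07:10"}))  # excluded
```

Real semantic interoperability work (SNOMED CT, GA4GH ontologies) is vastly harder than this toy, but the failure mode it guards against is exactly the one in the pooled antibiotic study: operating on differently defined fields as if they were the same.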
>> Well, so N of 1 studies and single-subject research is an emerging field of statistics. So there are some really interesting new models, like step wedge analytics, for doing that on small sample sizes, recruiting people asynchronously. There are single-subject research statistics: you compare yourself with yourself at a different point in time, in a different context. So there are emerging statistics to do that, and as long as you use the same sensor, you won't have a problem. But people are changing their remote sensors and you're getting different data. It's measured in different ways, with different sensors, at different normalization and different calibration. So yes, it even persists in the N of 1 environment. >> Yeah, you have to get started with a large N that you can apply to the N of 1. I'm actually going to attack your question from a different perspective. So who has the data? The millions of examples to train a deep learning system from scratch? It's a very limited set right now. Technologies such as the Collaborative Cancer Cloud and The Data Exchange are definitely impacting that and creating larger and larger sets of critical mass. And again, notwithstanding the very challenging semantic interoperability questions. But there's another opportunity. Kay asked about what's changed recently. One of the things that's changed in deep learning is that we now have modules that have been trained on massive data sets that are actually very smart at certain kinds of problems. So, for instance, you can go online and find deep learning systems that actually can recognize, better than humans, whether there's a cat, dog, motorcycle, or house in a photograph. >> From Intel, open source. >> Yes, from Intel, open source. So here's what happens next. Most of that deep learning system is very expressive: that combinatorial mixture of features that Naveen was talking about. When you have all these layers, there's a lot of features there.
They're actually very general to images, not just finding cats, dogs, trees. So what happens is you can do something called transfer learning, where you take a small or modest data set and actually reoptimize the system for your specific problem very, very quickly. And so we're starting to see a place where, on one end of the spectrum, we're getting access to the computing capabilities and the data to build these incredibly expressive deep learning systems. And over here on the right, we're able to start using those deep learning systems to solve custom versions of problems. Just last weekend, or two weekends ago, in 20 minutes, I was able to take one of those general systems and create one that could recognize all different kinds of flowers. Very subtle distinctions, that I would never be able to know on my own. But I happened to be able to get the data set, and literally, it took 20 minutes, and I have this vision system that I could now use for a specific problem. I think that's incredibly profound, and I think we're going to see this spectrum of wherever you are in your ability to get data, to define problems, and to put hardware in place, to see really neat customizations and a proliferation of applications of this kind of technology. >> So one other trend I think I'm very hopeful about... So this is a hard problem, clearly, right? I mean, getting data together, formatting it from many different sources, it's one of these things that's probably never going to happen perfectly. But one trend that is extremely hopeful to me is the fact that the cost of gathering data has precipitously dropped. Building that thing is almost free these days. I can write software and put it on 100 million cell phones in an instant. You couldn't do that even five years ago, right? And so, the amount of information we can gain from a cell phone today has gone up. We have more sensors. We're bringing online more sensors.
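The transfer-learning workflow described above, keep the general pretrained layers frozen and retrain only a small task-specific head on a modest data set, can be sketched without any framework. This is a toy illustration of the idea, not the flower system from the panel: a fixed feature expansion stands in for the frozen pretrained layers, and the tiny data set and labels are made up.

```python
# Toy transfer-learning sketch: the "pretrained" extractor is frozen;
# only the small linear head is (re)trained on a modest data set.

def pretrained_features(x, y):
    """Stand-in for frozen pretrained layers: a fixed, general-purpose
    feature expansion that we never retrain."""
    return [x, y, x * y, x + y, 1.0]  # last entry acts as a bias term

def train_head(samples, labels, epochs=100):
    """Perceptron training of the task-specific head only."""
    w = [0.0] * 5
    for _ in range(epochs):
        for feats, label in zip(samples, labels):
            pred = 1 if sum(wi * fi for wi, fi in zip(w, feats)) > 0 else -1
            if pred != label:  # mistake-driven update, extractor untouched
                w = [wi + label * fi for wi, fi in zip(w, feats)]
    return w

def predict(w, x, y):
    feats = pretrained_features(x, y)
    return 1 if sum(wi * fi for wi, fi in zip(w, feats)) > 0 else -1

# Small task-specific data set: the label is the sign of x + y.
points = [(1, 2), (2, 1), (3, -1), (-1, -2), (-2, -1), (-3, 1), (0.5, 1), (-0.5, -1)]
labels = [1, 1, 1, -1, -1, -1, 1, -1]
w = train_head([pretrained_features(x, y) for x, y in points], labels)
accuracy = sum(predict(w, x, y) == l for (x, y), l in zip(points, labels)) / len(points)
print(accuracy)
```

The point of the sketch is the division of labor: the expressive representation is reused as-is, and only a small, cheap-to-train component is fit to the new problem, which is why a custom recognizer can come together in minutes rather than weeks.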
People have Apple Watches and they're sending blood data back to the phone, so once we can actually start gathering more data, and doing it cheaper and cheaper, it actually doesn't matter where the data is. I can write my own app. I can gather that data, and I can start driving the correct inferences, or useful inferences, back to you. So that is a positive trend, I think, and personally, I think that's how we're going to solve it: by gathering from that many different sources cheaply. >> Hi, my name is Pete. I've very much enjoyed the conversation so far, but I was hoping perhaps to bring a little bit more focus into Precision Medicine and ask two questions. Number one, how have you applied the AI technologies, as they're emerging so rapidly, to natural language processing? I'm particularly interested in, if you look at things like Amazon Echo or Siri, or the other voice recognition systems that are based on AI, they've just become incredibly accurate, and I'm interested in specifics about how I might use technology like that in medicine. So where would I find a medical nomenclature, and perhaps some reference to a back end that works that way? And the second thing is, what specifically is Intel doing, or making available? You mentioned some open source stuff on cats and dogs and stuff, but I'm the doc, so I'm looking at the medical side of that. What are you guys providing that would allow us who are kind of geeks on the software side, as well as being docs, to experiment a little bit more thoroughly with AI technology? Google has a free AI toolkit. Several other people have come out with free AI toolkits in order to accelerate that. There's special hardware now, with graphics and different processors hitting amazing speeds. And so I was wondering, where do I go in Intel to find some of those tools, and perhaps learn a bit about the fantastic work that you guys are already doing at Kaiser?
>> Let me take that first part, and then we'll be able to talk about the MD part. So in terms of technology, this is what's extremely exciting now about what Intel is focusing on. We're providing those pieces, so you can actually assemble and build the application. How you build that application specifically for MDs and the use cases is up to you, or to whoever is building out the application. But we're going to power that technology from multiple perspectives. So Intel is already the main force behind the data center, right? Cloud computing, all of this is already Intel. We're making that extremely amenable to AI and setting the standard for AI in the future, and we can do that through a number of different mechanisms. For somebody who wants to develop an application quickly, we have hosted solutions; Intel Nervana is kind of the brand for these kinds of things. Hosted solutions will get you going very quickly. Once you get to a certain level of scale, where costs start making more sense, things can be bought on premise. We're supplying that. We're also supplying software that makes that transition essentially free. Then there's taking those solutions that you develop in the cloud, or develop in the data center, and actually deploying them on device. You want to write something on your smartphone or PC or whatever; we're providing those hooks as well. So we want to make it very easy for developers to take these pieces and actually build solutions out of them quickly, so you probably don't even care what hardware it's running on. You're like: here's my data set, this is what I want to do. Train it, make it work. Go fast. Make my developers efficient. That's all you care about, right? And that's what we're doing. We're starting from that point: how do we best do that? We're going to provide those technologies. In the next couple of years, there's going to be a lot of new stuff coming from Intel. >> Do you want to talk about AI Academy as well?
>> Yeah, that's a great segue. In addition to this, we have an entire set of tutorials and other online resources and things we're going to be bringing into the academic world for people to get going quickly. So that's not just enabling them on our tools, but also on general concepts. What is a neural network? How does it work? How does it train? All of these things are available now, and we've made a nice, digestible class format that you can actually go and play with. >> Let me give a couple of quick answers in addition to the great answers already. So you're asking why can't we use medical terminology and do what Alexa does? Well, you may not be aware of this, but Andrew Ng, who was the AI guy at Google, who was then recruited by Baidu, they have a medical chat bot in China today. I don't speak Chinese. I haven't been able to use it yet. There are two similar initiatives in this country that I know of. There are probably a dozen more in stealth mode. But Lumiata and Health Cap are doing chat bots for health care today, using medical terminology. You have the compound problem of semantic normalization within a language, compounded across languages. I've done a lot of work with an international organization called SNOMED, which translates medical terminology. So yes, I'm aware of that. We can talk offline if you want, because I'm pretty deep into the semantic space. >> Google "Intel Nervana" and you'll see all the websites there. It's intel.com/ai or nervanasys.com. >> Okay, great. Well, this has been fantastic. I want to, first of all, thank all the people here for coming and asking great questions. I also want to thank our fantastic panelists today. (applause) >> Thanks, everyone. >> Thank you. >> And lastly, I just want to share one bit of information. We will have more discussions on AI next Tuesday at 9:30 AM. Diane Bryant, who is our general manager of the Data Centers Group, will be here to do a keynote. So I hope you all get to join that.
Thanks for coming. (applause) (light electronic music)

Published Date : Mar 12 2017



Suresh Acharya, JDA Labs - #IntelAI - #theCUBE


 

>> Narrator: Live from Austin, Texas, it's theCUBE, covering South by Southwest 2017, brought to you by Intel. Now here's John Furrier. Welcome back everyone, we are here live inside theCUBE, SiliconANGLE Media's flagship program. We go out to events and extract the signal from the noise. I'm John Furrier, we're here in the Intel AI Lounge for a South by Southwest special, three days of coverage, interviews all day, some interviews tomorrow, and some super demos and panels with Intel's top AI staff and thought leaders and experts and management. My next guest is Suresh Acharya with JDA Software, I've got it right? Welcome to theCUBE. >> Thank you. >> We were chatting before we came on about the IOT in your world, but you had made a comment about walking around the convention center-- >> Suresh: Yeah. >> What's it like outside? What's the scene look like out there? >> Well, I mean first of all, it's really fun to be here for South by Southwest, of course, and just walking from the convention center here, there are a lot of places, but you guys have something going on here, long lines, it's just a very, you know, a ... There's a huge buzz, if you will. Very exciting. >> People are partying here, they got free beer, free booze-- >> Suresh: It's great! >> If you're watching and you're here at South by, you definitely want to be at the Intel AI Lounge; one, it's cooler, all the cool kids are here-- >> Suresh: That's right. >> Talking AI, which is onstage; it's an AI VR show. You've seen a lot of virtual reality, you've seen a lot of AI. >> Suresh: Uh, huh. >> This speaks to a new interface, a new interface from a virtual augmented reality, but also AI from a data centric world-- >> Suresh: Of course. Yes. >> Your thoughts, because this is what you're involved in. >> Sure, let me tell you a little bit more about what I do, just to set the context. JDA, we work in the supply chain, and that goes from manufacturing plants to transportation to warehouses to stores.
Things that are-- >> Known businesses, known processes. >> Known, exactly. But what is now changing dramatically is the fact that a lot of this is being digitized. And not only is data being generated; the smarts (that's where the AI comes in) have really helped, and will continue to help, improve efficiencies. So to your question around what the role of HoloLens or whatever VR capabilities could be, and where the smarts come in, if you will: what we're trying to do is figure out how you use these technologies in the store, how you use them in the warehouse, so that dynamically you can use the smarts for better efficiency. So that's where the machine learning as well as the VR technology comes together. >> So Suresh, talk about the dynamics between data science and math and software, because what's happening is a real intersection now, a confluence of math and science, data that's really available, and software. >> Suresh: Yeah. >> This is the power trend. This is the big tailwind to the marketplace. >> Sure, so I'm a data scientist by training; you know, I've always done algorithmic work and I've always worked in an industry where my mathematical models make it into the software. It's just music to my ears that a lot of this is now really, really becoming very, very important. Data science is just a phrase; there are two pieces. There's a data piece. There's a science piece. We all get trained in school on the science, and what we were finding early on was that the data sometimes simply wasn't there. >> John: Yeah. >> But now, there's a lot more data, there's a lot more clean data, and you can do a lot more with it. So it's a great time to be in AI, machine learning, and just the broader space of the data side. >> Well, databases are changing, you're making more unstructured data available-- >> Suresh: Yes.
>> Addressable, okay let's get back to your example of manufacturing in supply chain because I was going to say, boring, but it's never boring, it's business. >> Suresh: Yeah. >> We have a world we live in, an analog world, but you mentioned digitizing. This is not trivial. So I want you to take me through in your opinion and working in the labs of JDA Software, what are the key things for digitizing businesses, because you've got to bolt on sensors, you got to have actuators, you got to have all kinds of new potentially hardware-- >> Suresh: Yeah. >> You need more processors. But now you got to connect it to the network, that's the Internet of Things. How hard is it to digitize a business? >> Sure, so it is hard and so this is more of a journey than something that's going to happen overnight. Let me walk you through a couple of use cases, both upstream to the end, and then the other way around, just so that you see the value and how complex, but yet how much value one can add. As you know, there are production plants all over the world, so it's quite possible then that there's a vessel that's carrying your product from China to Long Beach, California. A lot of times currently there's no visibility around when that ship will ever make it to Long Beach. But with sensors, with real-time tracking of all these vessels, we're now able to say that rather than it arriving in Long Beach on the 22nd, because of weather reasons it's now going to arrive on the 25th instead. And how that then drives the downstream supply chain around when should the product make it to the distribution center, when will it make it to the store, and oh, by the way, I might need to make alternate plans now because I don't have the luxury to wait for the three day delay that I am incurring, what are my alternate sources. So that's upstream down to the store. 
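The rescheduling Suresh describes — a tracked vessel's revised ETA rippling into an alternate-sourcing decision — can be sketched in a few lines. This is a toy illustration only: the `Shipment` fields, the one-day buffer, and the source names are invented for the example and are not JDA's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Shipment:
    origin: str
    promised_day: int   # originally planned arrival day
    revised_day: int    # updated ETA from real-time vessel tracking
    buffer_days: int    # slack the downstream plan can absorb

def replan(shipment, alternates):
    """If the tracking-driven delay exceeds the plan's buffer,
    fall back to the alternate source with the shortest lead time;
    otherwise keep the original shipment."""
    delay = shipment.revised_day - shipment.promised_day
    if delay <= shipment.buffer_days:
        return ("keep", shipment.origin)
    best = min(alternates, key=lambda s: s["lead_time_days"])
    return ("reroute", best["name"])

# The 22nd slips to the 25th; a 1-day buffer cannot absorb 3 days of delay.
ship = Shipment("Shanghai", promised_day=22, revised_day=25, buffer_days=1)
print(replan(ship, [{"name": "Monterrey", "lead_time_days": 2},
                    {"name": "Rotterdam", "lead_time_days": 6}]))
# -> ('reroute', 'Monterrey')
```

The point of the sketch is the trigger condition: the downstream plan only changes when the detected delay exceeds what the plan can already absorb.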
We don't really see it when we go buy something at the store, the fact that this has had such a long journey upstream, is typically shielded from us. >> So it's a ripple effect. >> Ripple effect. >> So the old days was, hey where's my product? Oh, it's on a boat from China, so you didn't know where it's coming from, and the old expression-- >> Suresh: Exactly. >> Maybe it was China or not. >> Suresh: Right. >> But the point was that you had a delay in impact, a disruption-- >> That's right. >> Here you can say, okay contingency policy, software, trigger, hey it's here, get some supply from somewhere else, it could be produce or other goods. >> Suresh: Exactly. >> Am I getting it right? >> You're absolutely right. So that's the kind of upstream down to the consumer, but how about the consumer or the store upstream, right, so sometimes what happens is folks go to the store and then they start to get on social media to say these are awesome products, everyone's got to buy 'em, these things start to sell off the shelf, if you will, very, very rapidly. And now can you start to detect that social sentiment trend to start to realign your supply chain so that you avoid out of stock. Alternatively, you could have the rewards-- >> Or you could game it like they're doing now. Create scarcity, then make the retail market move. >> There's that as well. >> Supreme is doing it. My kids are buying these things, Supreme, these jackets and backpacks. >> Correct. You can gamify as well. On the other hand, what you can also do is what if you introduce a new product, which you're now finding out is not selling as well as you thought it would. You're not going to continue to push inventory there, you're going to be smart about where you now send those and potentially also manage the manufacturing upstream. >> So it's the classic effect, efficiency opportunities are everywhere. >> Suresh: Exactly. >> Talk about Intel, what do you think Intel's doing right? 
Because if you think about what's powering all this, it's the chips. >> Suresh: Yep. >> It's not just the processor and the PC, it's software end-to-end solutions. >> Suresh: Yeah. >> I was just covering Mobile World Congress two weeks ago, and 5G is bringing potentially a gigabit, I mean not that you need a sensor on a boat or a machine to use a gigabit-- >> Suresh: Sure. >> But still it does create more bandwidth-- >> Suresh: Yeah. >> Cuz you got to connect to the network. (laughs) >> Suresh: Sure. Exactly. (laughs) >> Your data's got to go somewhere. >> So one of the pieces of work that we're doing with Intel is really at the store level, to have sensors detect where an object is. You'd be surprised. People sometimes, not sometimes, a lot of times what happens is retailers will say that they're out of stock, when it's still in the store, it's just that they don't know where it is. >> John: Yeah. >> To now have sensors to precisely detect whether it's in the back office, whether it's in a fitting room, whether it's somewhere else, and really track that inventory real-time to then provide the visibility around inventory is huge. This is the holy grail. You and I may not realize it, but this is the holy grail for a lot of retailers. Because they simply do not know where their inventory is, and the work that we're doing around sensors, you know connecting the devices and of course adding the smarts with AI, that's the value. >> I love to hear the word holy grail, great stuff. I want to ask you a question on a personal note. >> Suresh: Yeah. >> Someone who's in labs and you've been in the industry of data science with a math background in retail, in supply chain, you kind of see the big picture. What are the coolest things out there right now, for the folks watching, whether it's a young kid or someone in college or an executive or a developer. 
Can you highlight some of the coolest things that people should pay attention to, and what is cool that people aren't paying attention to. >> Yeah, well I think I'm going to be biased when I say just the space of machine learning is actually exploding, but it is. So that's my own heritage as well. To me it's just fascinating to see how things that were very rudimentary have now really caught on. So the area of AI and machine learning has endless potential in my mind. Around a lot of the devices then that actually generate the data that then feeds into it, that space is exploding as well. One of the pieces of work-- >> John: You mean IoT data? >> IoT data. I'd like to give you a specific example of things that are now possible. We are doing research in the space of cognitive robotics. These are not robots that will help automate things or make things faster, these are robots in the stores that will actually interact with you, so they will actually talk to you. You can go up to it and say, "Hey, I'm trying to find "these shoes and I can't find them." What it's going to tell you is it's going to bring that immense power of AI to tell you where the products are, it could be in that store and it's going to have someone go fetch it for you, or it's going to tell you, oh it's in another store five miles down the road, would you rather go there to pick it up, or it can say I can have it be mailed to your house. So that's in terms of the cognitive robot understanding your emotions, that you're angry trying to find something or you're a happy customer, and being able to respond that way, but it's also continuously collecting data about you. That it's a male of a certain age group coming into the store at this time, coming out of aisle number 19 looking for this kind of product. This is all pieces of info ... So our goal is even when you're 10 feet away from the robot, it's going to know what questions you're going to ask. 
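The store-robot lookup Suresh describes boils down to two queries over sensor data: where an item was last seen in this store, and which nearby store has stock otherwise. Here is a minimal sketch; the SKUs, zone names, and store distances are invented for illustration and do not reflect JDA's or Intel's actual systems.

```python
def last_known_locations(sensor_reads):
    """Fold a stream of (sku, zone, timestamp) sensor reads into each
    item's most recently observed zone."""
    latest = {}
    for sku, zone, ts in sensor_reads:
        if sku not in latest or ts > latest[sku][1]:
            latest[sku] = (zone, ts)
    return {sku: zone for sku, (zone, _) in latest.items()}

def find_product(sku, local_reads, nearby_stock):
    """Answer the shopper three ways: it's here (and in which zone),
    the nearest nearby store has it, or it has to be shipped."""
    zone = last_known_locations(local_reads).get(sku)
    if zone is not None:
        return ("here", zone)
    # nearby_stock maps store name -> (distance_miles, quantity_on_hand)
    candidates = [(dist, store) for store, (dist, qty) in nearby_stock.items() if qty > 0]
    if candidates:
        return ("nearby", min(candidates)[1])
    return ("ship", None)

reads = [("shoe-42", "sales-floor", 100), ("shoe-42", "fitting-room", 140)]
stores = {"downtown": (5.0, 3), "airport": (12.0, 0)}
print(find_product("shoe-42", reads, stores))  # -> ('here', 'fitting-room')
print(find_product("bag-7", reads, stores))    # -> ('nearby', 'downtown')
```

The "last seen" fold is exactly the visibility Suresh calls the holy grail: the item marked out of stock is often just sitting in the fitting room.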
>> So robotics is really hot right now, >> Suresh: Right. >> Because this is the interactivity potential, not just a static machine. >> Suresh: Correct. This is more ... >> It's the whole experience. >> We had Dr. Naveen Rao on earlier; he said it's like the Jetsons, go clean my room, I mean we're getting there. >> Suresh: We are getting there. >> Almost there. >> We're almost getting there and so ... So the notion that users will use software in a two-dimensional screen manner that we're doing now, that's already changing. So to your point earlier on VR, immersing yourself into your supply chain, which we never have done-- >> John: Yeah. >> Is really where this is going. >> John: Got it. >> So-- >> Suresh, so final question, shoot the arrow forward five years, what does our future look like, what's going to change, what's it going to look like? >> Well, there's a lot of buzz around the autonomous self driving car. In my world it's really the autonomous self-learning supply chain. Think about it, it's going to detect things, it's going to know things, it's going to predict things so much better and also be able to prescribe things dynamically. There's a lot of inefficiencies built into the supply chain that will gradually over time get better and better. So to a lot of folks that could be scary, just like the driverless car to a lot of folks is scary, but if you really grasp the value of it, where we're going is tremendous in terms of operational efficiencies, in terms of smarts, just making our everyday lives so much better. >> Alright Suresh Acharya inside theCUBE, we're here in the Intel AI Lounge, I'm John Furrier with SiliconANGLE Media. We're breaking it down here at South by Southwest where all the buzz is happening: virtual reality, artificial intelligence, machine learning is the hottest reality trend right now. Software developers are booming, it's Suresh great, it's the holy grail! This is theCUBE here at the Intel AI Lounge. 
Back with more coverage after this short break. (upbeat music)

Published Date : Mar 10 2017



Alison Yu, Cloudera - SXSW 2017 - #IntelAI - #theCUBE


 

(electronic music) >> Announcer: Live from Austin, Texas, it's The Cube. Covering South By Southwest 2017. Brought to you by Intel. Now, here's John Furrier. >> Hey, welcome back, everyone, we're here live in Austin, Texas, for South By Southwest Cube coverage at the Intel AI Lounge, #IntelAI if you're watching, put it out on Twitter. I'm John Furrier of Silicon Angle for the Cube. Our next guest is Alison Yu who's with Cloudera. And in the news today, although they won't comment on it. It's great to see you, social media manager at Cloudera. >> Yes, it's nice to see you as well. >> Great to see you. So, Cloudera has a strategic relationship with Intel. You guys have a strategic investment, Intel, and you guys partner up, so it's well-known in the industry. But what's going on here is interesting, AI for social good is our theme. >> Alison: Yes. >> Cloudera has always been a pay-it-forward company. And I've known the founders, Mike Olson and Amr Awadallah. >> Really all about the community and paying it forward. So Alison, talk about what you guys are working on. Because you're involved in a panel, but also Cloudera Cares. And you guys have teamed up with Thorn, doing some interesting things. >> Alison: Yeah (laughing). >> Take it away! >> Sure, thanks. Thanks for the great intro. So I'll give you a little bit of a brief introduction to Cloudera Cares. Cloudera Cares was founded roughly about three years ago. It was really an employee-driven and -led effort. I kind of stepped into the role and ended up being a little bit more of the leader just by the way it worked out. So we've really gone from, going from, you know, we're just doing soup kitchens and everything else, to strategic partnerships, donating software, professional service hours, things along those lines. >> Which has been very exciting to see our nonprofit partnerships grow in that way. So it really went from almost grass-root efforts to an organized organization now. 
And we start stepping up our strategic partnerships about a year and a half ago. We started with DataKind, is our initial one. About two years ago, we initiated that. Then we a year ago, about in September, we finalized our donation of an enterprise data hub to Thorn, which if you're not aware of they're all about using technology and innovation to stop child-trafficking. So last year, around September or so, we announced the partnership and we donated professional service hours. And then in October, we went with them to Grace Hopper, which is obviously the largest Women in Tech Conference in North America. And we hosted a hackathon and we helped mentor women entering into the tech workforce, and trying to come up with some really cool innovative solutions for them to track and see what's going on with the dark web, so we had quite a few interesting ideas coming out of that. >> Okay, awesome. We had Frederico Gomez Suarez on, who was the technical advisor. >> Alison: Yeah. >> A Microsoft employee, but he's volunteering at Thorn, and this is interesting because this is not just donating to the soup kitchens and what not. >> Alison: Yeah. >> You're starting to see a community approach to philanthropy that's coding RENN. >> Yeah. >> Hackathons turning into community galvanizing communities, and actually taking it to the next level. >> Yeah. So, I think one of the things we realize is tech, while it's so great, we have actually introduced a lot of new problems. So, I don't know if everyone's aware, but in the '80s and '90s, child exploitation had almost completely died. They had almost resolved the issue. With the introduction of technology and the Internet, it opened up a lot more ways for people to go ahead and exploit children, arrange things, in the dark web. So we're trying to figure out a way to use technology to combat a problem that technology kind of created as well, but not only solving it, but rescuing people. 
>> It's a classic security problem, the surface area has increased for this kind of thing. But big data, which is where you guys were founded on, in the cloud era that we live in. >> Alison: Yeah. >> Pun intended. (laughing) Using machine learning now, you start to have some scale involved. >> Yes, exactly, and that's what we're really hoping, so we're partnering with Intel and the National Center for Missing & Exploited Children. We're actually kicking off a virtual hackathon tomorrow, and our hope is we can figure out some different innovative ways that AI can be applied to scraping data and finding children. A lot of times we'll see there's not a lot of clues, but for example, if we can upload, if there can be a tool that can upload three or four different angles of a child's face when they go missing, maybe what happens is someone posts a picture on Instagram or Twitter that has a geo tag and this kid is in the background. That would be an amazing way of using AI and machine learning-- >> Yeah. >> Alison: To find a child, right. >> Well, I'll give you guys a plug for Cloudera. And I'll reference Dr. Naveen Rao, who's the GM of Intel's AI group, was on earlier. And he was talking about how there's a lot of storage available, not a lot of compute. Now, Cloudera, you guys have really pioneered the data lake, data hub concept where storage is critical. >> Yeah. >> Now, you got this compute power and machine learning, that's kind of where it comes together. Did I get that right? >> Yeah, and I think it's great that with the partnership with Intel we're able to integrate our technology directly into the hardware, which makes it so much more efficient. You're able to compute massive amounts of data in a very short amount of time, and really come up with real results. And with this partnership, specifically with Thorn and NCMEC, we're seeing that it's real impact for thousands of people last year, I think. 
In the 2016 impact report, Thorn said they identified over 6,000 trafficking victims, of which over 2,000 were children. Right, so that tool that they use is actually built on Cloudera. So, it's great seeing our technology put into place. >> Yeah, that's awesome. I was talking to an Intel person the other day, they have 72 cores now on a processor, on the high-end Xeons. Let's get down to some other things that you're working on. What are you doing here at the show? Do you have things that you're doing? You have a panel? >> Yeah, so at the show, at South by Southwest, we're kicking off a virtual hackathon tomorrow at our Austin offices for South by Southwest. Everyone's welcome to come. I just did the liquor order, so yes, everyone please come. (laughing) >> You just came from Austin's office, you're just coming there. >> Yeah, exactly. So we've-- >> Unlimited Red Bull, pizza, food. (laughing) >> Well, we'll be doing lots and lots tomorrow, but we're kicking that off, we have representatives from Thorn, NCMEC, Google, Intel, all on site to answer questions. That's kind of our kickoff of this month-long virtual hackathon. You don't need to be in Austin to participate, but that is one of the things that we are kicking off. >> And then on Sunday, actually here at the Intel AI Lounge we're doing a panel on AI for Good, and using artificial intelligence to solve problems. >> And we'll be broadcasting that live here on The Cube. So, folks, SiliconAngle.tv will carry that. Alison, talk about the trend that, you weren't here when we were talking about how there's now a new counterculture developing in a good way around community and social change. How real is the trend that you're starting to see these hackathons evolve from what used to be recruiting sessions to people just jamming together to meet each other. Now, you're starting to see the next level of formation where people are organizing collectively-- >> Yeah. >> To impact real issues. >> Yeah. 
>> Is this a real trend or where is that trend, can you speak to that? >> Sure, so from what I've seen from the hackathons what we've been seeing before was it's very company-specific. Only one company wanted to do it, and they would kind of silo themselves, right? Now, we're kind of seeing this coming together of companies that are generally competitors, but they see a great social cause and they decide that they want to band together, regardless of their differences in technology, product, et cetera, for a common good. And, so. >> Like a Thorn. >> For Thorn, you'll see a lot of competitors, so you'll see Facebook and Twitter or Google and Amazon, right? >> John: Yeah. >> And we'll see all these different competitors come together, lend their workforce to us, and have them code for one great project. >> So, you see it as a real trend. >> I do see it as a trend. I saw Thorn last year did a great one with Facebook and on-site with Facebook. This year as we started to introduce this hackathon, we decided that we wanted to do a hackathon series versus just a one-off hackathon. So we're seeing people being able to share code, contribute, work on top of other code, right, and it's very much a sharing community, so we're very excited for that. >> All right, so I got to ask you what's they culture like at Cloudera these days, as you guys prepare to go public? What's the vibe internally of the company, obviously Mike Olson, the founder, is still around, Amr's around. You guys have been growing really fast. Got your new space. What's the vibe like in Cloudera now? >> Honestly, the culture at Cloudera hasn't really changed. So, when I joined three years ago we were much smaller than we are now. But I think one thing that we're really excited about is everyone's still so collaborative, and everyone makes sure to help one another out. So, I think our common goal is really more along the lines of we're one team, and let's put out the best product we can. >> Awesome. 
So, what's South by Southwest mean to you this year? If you had to kind of zoom out and say, okay. What's the theme? We heard Robert Scoble earlier say it's a VR theme. We hear at Intel it's AI. So, there's a plethora of different touchpoints here. What do you see? >> Yeah, so I actually went to the opening keynote this morning, which was great. There was an introduction, and then I don't know if you realized, but Cory Booker was on as well, which is great. >> John: Yep. >> But I think a lot of what we had seen was they called out on stage that artificial intelligence is something that will be a trend for the next year. And I think that's very exciting that Intel really hit the nail on the head with the AI Lounge, right? >> Cory Booker, I'm a big fan. He's from my neighborhood, went to the same school I went to, that my family. So in Northern Valley, Old Tappan. Cory, if you're watching, retweet us, hashtag #IntelAI. So AI's there. >> AI is definitely there. >> No doubt, it's on stage. >> Yes, but I think we're also seeing a very large, just community around how can we make our community better versus let's try to go in these different silos, and just be hyper-aware of what's only in front of us, right? So, we're seeing a lot more from the community as well, just being interested in things that are not immediately in front of us, the wider, either nation, global, et cetera. So, I think that's very exciting people are stepping out of just their own little bubbles, right? And looking and having more compassion for other people, and figuring out how they can give back. >> And, of course, open source at the center of all the innovation as always. (laughing) >> I would like to think so, right? >> It is! I would testify. Machine learning is just a great example, how that's now going up into the cloud. We started to see that really being part of all the apps coming out, which is great because you guys are in the big data business. >> Alison: Yeah. 
>> Okay, Alison, thanks so much for taking the time. Real quick plug for your panel on Sunday here. >> Yeah. >> What are you going to talk about? >> So we're going to be talking a lot about AI for good. We're really going to be talking about the NCMEC, Thorn, Google, Intel, Cloudera partnership. How we've been able to do that, and a lot of what we're going to also concentrate on is how the everyday tech worker can really get involved and give back and contribute. I think there is generally a misconception of if there's not a program at my company, how do I give back? >> John: Yeah. >> And I think Cloudera's a shining example of how a few employees can really enact a lot of change. We went from grassroots, just a few employees, to a global program pretty quickly, so. >> And it's organically grown, which is the formula for success versus some sort of structured company program (laughing). >> Exactly, so we definitely gone from soup kitchen to strategic partnerships, and being able to donate our own time, our engineers' times, and obviously our software, so. >> Thanks for taking the time to come on our Cube. It's getting crowded in here. It's rocking the house, the house is rocking here at the Intel AI Lounge. If you're watching, check out the hashtag #IntelAI or South by Southwest. I'm John Furrie. I'll be back with more after this short break. (electronic music)

Published Date : Mar 10 2017



Karthik Rau, SignalFX | BigDataSV 2015


 

>> Hi, Jeff Frick here with theCUBE. Welcome. We're excited to get out and talk to startups, people that are founding companies, when they come out of stealth mode. We're in a great position that we get a chance to talk to them early, and we're really excited to have a CUBE conversation with Karthik Rau, the founder and CEO of SignalFx, just coming out of stealth. Congratulations. >> Thank you, Jeff. >> So how long have you been working behind the scenes trying to get this thing going? >> Yeah, we've been at it for two years now. So two years. My co-founder and I started the company in February of 2013, so we're excited to finally launch and make our product available to the world. >> All right, excellent, congratulations, that's always a great thing. We've launched a few companies on theCUBE, so hopefully this will be another great success. So talk a little bit about, first off, you and your journey. We have a lot of entrepreneurs that watch the show, and I think it's an interesting topic as to how you get to the place where you basically found and launch a company. >> Yeah, absolutely. I started my career at a cloud company before cloud really existed as a market, a company called Loudcloud. >> Oh yeah, Marc Andreessen. >> Right, Marc Andreessen and Ben Horowitz ran the company, and we were trying to do what the public cloud vendors are doing today, before the market was really all that big and before the technologies really existed to do it well. But that was my first introduction to cloud. I came out of college, and that's where I met my co-founder Phillip Liu as well. Phil and I were both working on the monitoring products at Loudcloud. From there I ended up at VMware for a good run of about seven years, where I ran product. I had always wanted to start a company, and then a couple of years ago Phil and I thought the timing was right, we had a great idea, and we decided to go build SignalFx together. >> Okay, so what was kind of the genesis of the idea? You know, a lot of times it's a cool technology looking for a problem to solve, and a lot of times it's a problem that you know, where if I only had one of these it would solve my problems. So how did that whole process work? >> Yeah, it was rooted in personal experience. My co-founder Phil was at Facebook for several years and was responsible for building the monitoring systems at Facebook, and through our personal experience and what we'd seen in the marketplace, we had a fundamental belief and a vision that monitoring for modern applications is now an analytics problem. Modern applications are distributed. They're not, you know, a single database running on one system. Even small companies now have hundreds of VMs running on public cloud infrastructure, and so the only way to really understand what's happening across all of these distributed applications is to collect the data centrally and use analytics. And so that was our fundamental insight when we started SignalFx. What we saw in the marketplace was that most of the monitoring technologies haven't really evolved in the past 15 or 20 years, and they're still largely designed for traditional, static enterprise applications, where if you get an alert when an individual node is down or a static threshold's been passed, that's enough. But that doesn't really work for modern apps, because they're so distributed. If one node out of your twenty nodes is having a problem, it doesn't necessarily mean that your application is having a problem, and so the only way to really draw that insight is to collect the data and do analytics on it, and that's what SignalFx does. >> Okay, really, because of that distributed nature of modern apps and modern architecture? >> Yes. There are three things that are fundamentally different. Number one, modern applications are distributed in nature, and so you really have to look at patterns across many systems. Number two, they're changing far more frequently than traditional enterprise apps, because they're hosted applications for the most part, so you can push changes out every day if you want to. And then third, they're typically operated by product organizations and not IT organizations, so you have developers or DevOps organizations that are actually operating the software. Those three changes are quite substantial and require a new set of products. >> Right, and so the other guys are still kind of in the, you know, fire-off-the-pager-alert mode when something is going down. It's very noisy. >> Yes. When you're firing off alerts every time an individual alert goes off, when you've got thousands of VMs, and we all know that the trend these days is towards microservices architectures, you know, small, componentized containers or VMs, you don't have to have a very sophisticated, large application to have a lot of systems. >> So do you fit into other existing kind of infrastructure monitoring systems, or kind of infrastructure management systems? I'm sure, you know, it's another tool, right? Guys got to manage a lot of stuff. How does that work? >> Yeah, we are focused on the analytics part of the problem. We collect data from any sources, so our customers are typically sending us data, you know, infrastructure data that they're collecting using their own agents. We have agents that we can provide to collect it. A lot of the developers are instrumenting their own metrics that they care about, so for example, they might care about latency metrics and knowing latencies by customer, by region. So they'll send us all that data, and then we provide a very rich analytics solution and platform for them to monitor all of this and, in real time, detect patterns and anomalies. >> So you just said you have customers, but you're coming out of stealth, so you have some beta customers already? >> Yes, we have great customers already. >> Not just beta customers, right? Great, real customers. >> Yes. >> Awesome, congratulations. >> Thank you very much. They're very excited about our product, and they range from small startups to fairly large web companies that are sending in tens of billions of data points every day into SignalFx. >> Right, right. And again, in the interest of sharing the knowledge with all of our entrepreneurs out there, you know, when did they get involved in the process? How much of the kind of product development and definition did they participate in? You said you've been at it for a couple of years. >> Yeah, we've had a lot of conviction about this space from the very beginning, because our team had solved this problem for themselves in previous experiences. But we have been in beta for about six months before the launch, and so over the course of those six months we recalibrated based on feedback we got from customers. But on the whole, you know, our philosophy and the approach that we took was pretty much validated by the early customers that we engaged with. >> Okay, excellent. And so I assume you're venture funded? >> We are. >> Can you talk about who your backers are? >> Yes, we raised twenty-eight and a half million dollars-- >> Twenty-eight and a half million dollars. >> Yeah, twenty-eight point five million dollars from Andreessen Horowitz, with Ben Horowitz on our board, and Charles River Ventures, with Devdutt Yellurkar on our board. >> And how big are you now, in terms of the company? >> Well, we're just getting started now, right? We've got a great group of engineers. Our company is, you know, still in the few-dozen-people stage at this point. We're planning to invest aggressively in building out our team, both on R&D and on the go-to-market side. >> Excellent. Once you detect patterns and anomalies, what are kind of the action steps? Do you work with other systems to swap stuff out? Because now I hear, like, in these huge data centers they don't swap out machines, they swap out racks, and soon they'll be swapping out data centers. So what are some of the prescriptive things that people are using they couldn't
do before by using your yeah I'll give you a great example of that one of our early beta customers they do code pushes very aggressively you know once a week they'll push out changes into their environment and they had a signal effects console open which and we're a real-time solution so every second they're seeing updates of what was happening in their infrastructure they pushed out their code and they immediately detected a memory leak and they saw their memory usage just growing immediately after they did their code Bush and they were able to roll it back before any of their users noticed any issues and so that's an example of these days a lot of problems introduced into environments are human driven problems it's a code push it's a new user gets onboard it or a new customer gets onboard and all of a sudden there's 10x the load onto your systems and so when you have a product like signal effects where you can in real time understand everything that's happening in your environment you can quickly detect these changes and determine what the appropriate next step is and that appropriate next step will depend on your application and who you are and what you're building right so our key philosophies we get out of your way but we give you all of the insights and the tools to figure out what's happening in your arm right it's interesting that really kind of two comes from from your partners you know kind of Facebook experience right because they're pushing out new code all the time when there's no fast and break things right right exactly and then you're at VMware so you know kind of the enterprise site so what if you could speak a little bit about kind of this consumerization of IT on the enterprise side and not so much the way that the look and feel of the thing works but really taking best practices from a consumer IT companies like Facebook like Amazon that really changed the game because it used to be the big enterprise software guys had the best apps now it's 
it's really flipped for people like Google and Netflix and those guys have the best apps and even more importantly they drive the expectation of the behavior of an application every Enterprise is finally getting it and then are they really embracing it we're definitely seeing a growth in new application development I think you know when I spend a lot of time talking to CIOs at enterprises as well and they all understand that in order to be competitive you have to invest in applications it's not enough to just view IT as a cost center and they're all beginning to invest in application development and in some cases these are digital media teams that are separate from traditional IT and other places it's you know they're they're more closely tied together but we absolutely see a kind of growth in application development in many of these end up looking a lot like the development teams that we see here in the Bay Area you know and companies that are building staffs and consumer cloud apps yeah exciting time so you should coming out of stealth what's kind of your your next kind of milestone that you're looking forward to you have a big some announcements you got show you're gonna kind of watch out we're we're we're gonna see you make a big splash well for us it's it's steadily building our business and so we hope to you know we're launching now and we've got a lot of great customers already and hope to sign on several more and help our customers build great applications about that's our focus again congratulations two years that's a big development project Karthik thank growl the founder and CEO of signal effects just launching their company coming out of stealth we'd love to get them on the cube share the knowledge with you guys both the people that are trying to start your own company take a little inspiration as well as as the people that need the service tomorrow with the cloud with a modern application thanks a lot thank you Jeff thank you you're watching Jeff Rick 
cube conversation see you next time
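The memory-leak story above — a code push followed by steadily climbing memory, caught on a real-time console — is the kind of pattern a streaming analytics pipeline can flag with a simple rolling trend test. Here is a minimal illustrative sketch of that idea; the function names and the slope threshold are hypothetical choices for this example, not SignalFx's actual API:

```python
import random
from statistics import fmean

def memory_slope(samples):
    """Least-squares slope of a series of memory samples,
    taken one per second, in MB per second."""
    n = len(samples)
    xs = range(n)
    x_mean = fmean(xs)
    y_mean = fmean(samples)
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

def looks_like_leak(samples, threshold_mb_per_s=0.5):
    """Alert when memory is trending up faster than the threshold.

    A trend test over a window is less noisy than a static
    'memory > X' threshold: steady noise around a flat baseline
    has near-zero slope, while a leak has a persistent positive one.
    """
    return memory_slope(samples) > threshold_mb_per_s

random.seed(0)  # deterministic demo data

# Healthy service: noise around a flat 512 MB baseline -> no alert.
steady = [512 + random.uniform(-1.0, 1.0) for _ in range(60)]

# Post-deploy leak: ~2 MB/s sustained growth -> alert, roll back.
leaking = [512 + 2.0 * t for t in range(60)]
```

A real pipeline would evaluate this over a sliding window per host and aggregate across the fleet, so that one noisy node out of twenty does not page anyone — which is the "patterns across many systems" point made in the interview.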

Published Date : Mar 12 2015
