
Search Results for Janet George:

Breaking Down Your Data | Grant Gibson and Janet George


 

>> Announcer: From theCUBE Studios in Palo Alto and Boston, it's theCUBE, covering Empowering the Autonomous Enterprise. Brought to you by Oracle Consulting.

>> Welcome back, everybody, to this special digital event coverage, where theCUBE is looking into the rebirth of Oracle Consulting. Janet George is here. She's group VP, autonomous for advanced analytics with machine learning and artificial intelligence at Oracle, and she's joined by Grant Gibson, who's a group VP of growth and strategy at Oracle. Folks, welcome to theCUBE. Thanks so much for coming on.

>> Thank you.

>> Thank you.

>> Grant, I want to start with you, because you've got strategy in your title. Let's just start big picture. What is the strategy with Oracle, specifically as it relates to autonomous and also consulting?

>> Sure. So I think, you know, Oracle has a deep legacy of strength in data, and over the company's successful history it's evolved what that is from steps along the way. If you look at the modern enterprise, an Oracle client, I think there's no denying that we've entered the age of AI, that everyone knows that artificial intelligence and machine learning are a key to their success in the business marketplace going forward. And while generally it's acknowledged that it's a transformative technology and people know that they need to take advantage of it, it's the how that's really tricky, in that most enterprises, in order to really get an enterprise-level ROI on an AI investment, need to engage in projects of significant scope. And going from realizing there's an opportunity, or realizing there's a threat, to mobilizing yourself to capitalize on it is a daunting task for an enterprise, certainly one that's, you know, anybody that's got any sort of legacy of success has built-in processes, has built-in systems, has built-in skill sets, and making that leap to be an autonomous enterprise is challenging for companies to wrap their heads around. So as part of the rebirth of Oracle Consulting, we've developed a practice around how to both manage the technology needs for that transformation, as well as the human needs, as well as the data science needs to it.

>> So there's about five or six things that I want to follow up with you there, so this is going to be a good conversation. Janet, ever since I've been in the industry, we've been talking about AI in sort of start-stop, start-stop. We had the AI winter, and now it seems to be here. It almost feels like the technology never lived up to its promise. You didn't have the horsepower, the compute power, you know, enough data maybe. So we're here today, and it feels like we are entering a new era. Why is that, and how will the technology perform this time?

>> So for AI to perform, it's very reliant on the data. We entered the age of AI without having the right data for AI. So you can imagine that we just launched into AI without our data being ready to be training sets for AI. So we started with BI data, or we started with data that was already historically transformed, formatted, had logical structures, physical structures. This data was sort of trapped in many different tools, and then suddenly AI comes along and we say, take this data, our historical data. We haven't tested it to see if this has labels in it, if this has learning capability in it. We just thrust the data to AI, and that's why we saw the initial wave of AI sort of failing, because it was not ready for AI, ready for the generation of AI.

>> And part of, I think, the leap that clients are finding success with now is getting novel data types. You're moving from the zeros and ones of structured data to image, language, written language, spoken language. You're capturing different data sets in ways that prior tools never could, and so the classifications that come out of it, the insights that come out of it, the business process transformation that comes out of it, is different than what we would have understood under the structured data format. So I think it's that combination of really being able to push massive amounts of data through a cloud product, to be able to process it at scale, that is the combination that takes it to the next plateau, for sure.

>> The language that we use today, I feel like, is going to change, and you just started to touch on some of it. You know, sensing, our senses, and the visualization and the auditory. So it's sort of this new experience that customers are seeing, and a lot of this machine intelligence behind it.

>> I call it the autonomous enterprise, right, the journey to be the autonomous enterprise. And when you're on this journey to be the autonomous enterprise, you need, really, the platform that can help you. Cloud is that platform which can help you get to the autonomous journey, but the autonomous journey does not end with the cloud, or doesn't end with the data lake. These are just infrastructures that are basic necessities for being on that autonomous journey. But at the end, it's about how do you train and scale, the very large-scale training that needs to happen on this platform, for AI to be successful. And if you are an autonomous enterprise, then you have really figured out how to tap into AI and machine learning in a way that nobody else has, to derive business value, if you will. So you've got the platform, you've got the data, and now you're actually tapping into the autonomous components, AI and machine learning, to derive business intelligence and business value.

>> So I want to get into a little bit of Oracle's role, but to do that I want to talk a little bit more about the industry. So if you think about the way the industry seems to be restructuring around data, historically industries had their own stack or value chain, and if you were in the finance industry, you were there for life, you know?

>> So when you think about banking, for example, a highly regulated industry, or think about agriculture, these are highly regulated industries. It was very difficult to disrupt these industries. But now you look at an Amazon, right, and what does an Amazon, or any other tech giant like Apple, have? They have incredible amounts of data. They understand how people use, or how they want to do, banking. And so they've come up with Apple Cash or Amazon Pay, and these things are starting to eat into the market, right? So you would have never thought an Amazon could be a competition to your banking industry, just because of regulations, but they are not hindered by the regulations, because they're starting at a different level, and so they become an instant threat and an instant disruptor to these highly regulated industries. That's what data does, right? When you use data as the DNA for your business, and you are sort of born in data, or you figured out how to be autonomous, if you will, and capture value from that data in a very significant manner, then you can get into industries that are not traditionally your own industry. It can be like the food industry, it can be the cloud industry, the book industry, you know, different industries. So, you know, that's what I see happening with the tech giants.

>> So, Grant, this is a really interesting point that Janet is making. You mentioned, you started off with a couple of industries that are highly regulated, harder to disrupt. You know, music got disrupted, publishing got disrupted, but you've got these regulated businesses, you know, defense; automotive actually hasn't been truly disrupted yet, so Tesla maybe is a harbinger. And so you've got this spectrum of disruption. But is anybody safe from disruption?

>> I don't think anyone's ever safe from it. It's change and evolution, right? Whether it's, you know, swapping horseshoes for cars, or TV for movies, or Netflix, or any sort of evolution of a business, I wouldn't coast on any of them. And I think, to your earlier question around the value that we can help bring to Oracle customers, you know, we have a rich stack of applications, and I find that the space between the applications, the data that spans more than one of them, is a ripe playground for innovations, where the data already exists inside a company but it's trapped from both a technology and a business perspective, and that's where I think really any company can take advantage of knowing its data better and changing itself to take advantage of what's already there.

>> Yeah, powerful. People always throw the bromide out that data is the new oil, and we've said, no, data is far more valuable, because you can use it in a lot of different places. Oil you can use once, and it has to follow the laws of scarcity; data doesn't, if you can unlock it. And so a lot of the incumbents have built a business around whatever, a factory, or, you know, process and people. A lot of the trillion-dollar startups that have become trillionaires, you know who I'm talking about, data is at the core; they're data companies. So it seems like a big challenge for your incumbent customers, clients, to put data at the core and be able to break down those silos. How do they do that?

>> Breaking down silos is really super critical for any business. It used to be okay to operate in a silo. For example, you would think that, oh, you know, I could just be payroll and expense reports, and it wouldn't matter if I get into vendor performance management or purchasing, that can operate as a silo. But anymore we are finding that there are tremendous insights between vendor performance management and expense reports; all these things are all connected, so you can't afford to have your data sit in silos. So breaking down that silo actually gives the business very good performance, right, insights that they didn't have before. So that's one way to go. But another phenomenon happens when you start to break down the silos: you start to recognize what data you don't have to take your business to the next level, right? That awareness will not happen when you're working with existing data. So that awareness comes into form when you break the silos and you start to figure out you need to go after a different set of data to get you to new product creation, and what would that look like, new test insights or new capex avoidance. That data is just, you have to go through the iteration to be able to figure that out.

>> Which takes iteration, is what you're saying.

>> Yep.

>> So this notion of the autonomous enterprise, help me here, because I get kind of autonomous and automation coming into IT, IT ops. I'm interested in how you see customers taking that beyond the technology organization into the enterprise.

>> I think when AI is a technology problem, the company is at a loss. AI has to be a business problem. AI has to inform the business strategy. When companies, the successful companies that have done so, 90 percent of their investments are going towards data, we know that, and most of it is going towards AI; there's data out there about this, right. And so we looked at, what are these 90 percent of the companies' investments, where are these going, and who is doing this right and who's not doing this right? One of the things we are seeing as results is that the companies that are doing it right have brought data into their business strategy. They've changed their business model, right? So it's not like making a better taxi, but coming up with Uber, right? So it's not like saying, okay, I'm going to be the drug manufacturing company, I'm going to put drugs out there in the market, versus, I'm going to do connected health, right? And so how does data serve the business model of being connected health, rather than being a drug company selling drugs to my customers, right? It's a completely different way of looking at it. And so now AI is informing drug discovery. AI is not helping you just put more drugs to the market; rather, it's helping you come up with new drugs that will help the process of connected care.

>> There's a lot of discussion in the press about, you know, the ethics of AI, and how far should we take AI, and how far can we take it from a technology standpoint, long roadmap there, but how far should we take it? Do you feel as though public policy will take care of that? A lot of that narrative is just kind of journalists looking for, you know, the negative story; will that sort itself out? How much time do you spend with your customers talking about that?

>> We, in Oracle, are building our data science platform with an explicit feature called explainability of the model, on how the model came up with the features, what features it picked. We can rearrange the features that the model picked. So I think explainability is very important for ordinary people to trust AI, because we can't trust AI, even data scientists can't trust AI, right, to a large extent. So for us to get to that level where we can really trust what AI is picking in terms of a model, we need to have explainability, and I think a lot of the companies right now are starting to make that as part of their platform.

>> Well, we're definitely entering a new era, the age of AI, of the autonomous enterprise. Folks, thanks very much for a great segment, really appreciate it.

>> Our pleasure. Thank you for having us.

>> Thank you.

>> All right, and thank you, and keep it right there. We're right back with our next guest after this short break. You're watching theCUBE's coverage of the rebirth of Oracle Consulting. Right back.

[Music]
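Janet's point about historical BI data not being ready to serve as training sets can be made concrete with a quick readiness audit before any modeling. The sketch below is a generic Python illustration, not an Oracle tool; the toy DataFrame, the column names, and the readiness_report helper are hypothetical, but the checks (does a label column exist, how complete and balanced is it, how much of the table is missing) are the kind she alludes to.

```python
# Hypothetical sketch: quick "is this table ready to be a training set?" audit.
# The DataFrame and column names are invented for illustration.
import pandas as pd

def readiness_report(df: pd.DataFrame, label_col: str) -> dict:
    """Summarize whether a historical/BI table can serve as supervised training data."""
    report = {"rows": len(df), "has_label_column": label_col in df.columns}
    if report["has_label_column"]:
        labels = df[label_col]
        report["label_missing_pct"] = round(labels.isna().mean() * 100, 1)
        report["label_cardinality"] = int(labels.nunique(dropna=True))
        # Class balance matters: a 99/1 split calls for resampling or different metrics.
        report["class_balance"] = labels.value_counts(normalize=True).round(3).to_dict()
    # Worst columns by missingness, since silent gaps undermine any learned model.
    report["top_missing_columns"] = (
        df.isna().mean().sort_values(ascending=False).head(5).round(3).to_dict()
    )
    return report

if __name__ == "__main__":
    # Toy stand-in for an exported BI table.
    df = pd.DataFrame({
        "invoice_amount": [120.0, 85.5, None, 310.0, 99.9],
        "vendor_id": ["V1", "V2", "V1", "V3", "V2"],
        "paid_late": [0, 1, 0, None, 1],  # candidate label
    })
    print(readiness_report(df, label_col="paid_late"))
```

Run against a real export, the same checks would tell you whether a table is a usable training set or just history that was formatted for reporting.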

Published Date: May 8, 2020

**Summary and sentiment analysis are not shown because of an improper transcript.**

ENTITIES

| Entity | Category | Confidence |
| --- | --- | --- |
| Janet George | PERSON | 0.99+ |
| Amazon | ORGANIZATION | 0.99+ |
| Apple | ORGANIZATION | 0.99+ |
| Palo Alto | LOCATION | 0.99+ |
| 90 percent | QUANTITY | 0.99+ |
| oracle | ORGANIZATION | 0.99+ |
| Oracle | ORGANIZATION | 0.99+ |
| Tesla | ORGANIZATION | 0.99+ |
| grant gibson | PERSON | 0.99+ |
| one | QUANTITY | 0.98+ |
| Janet | PERSON | 0.98+ |
| ninety percent | QUANTITY | 0.98+ |
| Boston | LOCATION | 0.98+ |
| uber | ORGANIZATION | 0.98+ |
| Gina | PERSON | 0.97+ |
| today | DATE | 0.97+ |
| Breaking Down Your Data | TITLE | 0.96+ |
| two main companies | QUANTITY | 0.95+ |
| six things | QUANTITY | 0.95+ |
| Netflix | ORGANIZATION | 0.95+ |
| both | QUANTITY | 0.94+ |
| ninety cup | QUANTITY | 0.93+ |
| more than one | QUANTITY | 0.92+ |
| one way | QUANTITY | 0.9+ |
| zeros | QUANTITY | 0.85+ |
| about five | QUANTITY | 0.83+ |
| lot of the companies | QUANTITY | 0.73+ |
| wave | EVENT | 0.7+ |
| Grant Gibson | PERSON | 0.68+ |
| trillion | QUANTITY | 0.65+ |
| lot | QUANTITY | 0.58+ |
| once | QUANTITY | 0.57+ |

Janet George & Grant Gibson, Oracle Consulting | Empowering the Autonomous Enterprise of the Future


 

>> Yeah, yeah, yeah!

>> Welcome back, everybody, to this special digital event coverage. The Cube is looking into the rebirth of Oracle Consulting. Janet George is here. She's group VP, Autonomous for Advanced Analytics with machine learning and artificial intelligence at Oracle. And she's joined by Grant Gibson, group VP of growth and strategy at Oracle. Folks, welcome to the Cube. Thanks so much for coming on. Grant, I want to start with you, because you've got strategy in your title. Let's start big picture. What is the strategy with Oracle, specifically as it relates to autonomous and also consulting?

>> Sure. So I think, you know, Oracle has a deep legacy of strength in data, and over the company's successful history it's evolved what that is from steps along the way. And if you look at the modern enterprise, an Oracle client, I think there's no denying that we've entered the age of AI, that everyone knows that artificial intelligence and machine learning are a key to their success in the business marketplace going forward. And while generally it's acknowledged that it's a transformative technology and people know that they need to take advantage of it, it's the how that's really tricky, in that most enterprises, in order to really get an enterprise-level ROI on an AI investment, need to engage in projects of significant scope. And going from realizing there's an opportunity or realizing there's a threat to mobilizing yourself to capitalize on it is a daunting task, certainly one that's, you know, anybody that's got any sort of legacy of success has built-in processes, has built-in systems, has built-in skill sets, and making that leap to be an autonomous enterprise is challenging for companies to wrap their heads around. So as part of the rebirth of Oracle Consulting, we've developed a practice around how to both manage the technology needs for that transformation as well as the human needs, as well as the data science needs.

>> So there's about five or six things that I want to follow up with you there, so this is a good conversation. Ever since I've been in the industry, we were talking about AI in sort of start-stop, start-stop. We had the AI winter, and now it seems to be here. It almost feels like the technology never lived up to its promise. We didn't have the horsepower, the compute power, enough data maybe. So we're here today, and it feels like we are entering a new era. Why is that, and how will the technology perform this time?

>> So for AI to perform, it's very reliant on the data. We entered the age of AI without having the right data for AI. So you can imagine that we just launched into AI without our data being ready to be training sets for AI. So we started with BI data, or we started with data that was already historically transformed, formatted, had logical structures, physical structures. This data was sort of trapped in many different tools. And then suddenly AI comes along and we say, take this data, our historical data. We haven't tested it to see if this has labels in it, if this has learning capability in it. We just thrust the data to AI. And that's why we saw the initial wave of AI sort of failing, because it was not ready for AI, ready for the generation of AI, if you will.

>> So, to me, I always say this was the contribution that Hadoop left us, right? I mean, Hadoop, everybody was crazy. It turned into big data.
Oracle was never that nuts about it; they kind of watched, sat back and watched, obviously participated. But it gathered all this data, created cheap data lakes, which people always joke turn into data swamps. But the data is oftentimes now within organizations, at least present. Now it's a matter of, what's the next step?

>> Basically, what Hadoop did to the world of data was Hadoop freed data from being stuck in tools. It basically brought forth this concept of a platform, and platform is very essential, because as we enter the age of AI and we enter the petabyte range of data, we can't have tools handling all of this. The data needs to scale, the data needs to move, the data needs to grow. And so we need the concept of platforms, so we can be elastic for the growth of the data, right? It can be distributed, it can grow based on the growth of the data, and it can learn from that data. So that's the reason why Hadoop sort of brought us into the platform world.

>> Right. A lot of that data ended up in the cloud. I always say, you know, for years we marched to the cadence of Moore's law. That was the innovation engine in this industry. As fast as you could get a chip in, you know, you'd get a little advantage, and then somebody would leapfrog. Today, you've got all this data, you apply machine intelligence, and cloud gives you scale, it gives you agility. Your customers, are they taking advantage of that new innovation cocktail? First of all, do you buy that, and how do you see them taking advantage of it?

>> Yeah, I think part of what Janet mentioned makes a lot of sense, is that at the beginning, when you're taking the existing data in an enterprise and trying to do AI to it, you often get things that look a lot like what you already knew, because you're dealing with your existing data set and your existing expertise. And part of, I think, the leap that clients are finding success with now is getting novel data types, and you're moving from the zeros and ones of structured data to image, language, written language, spoken language. You're capturing different data sets in ways that prior tools never could. And so the classifications that come out of it, the insights that come out of it, the business process transformation that comes out of it, is different than what we would have understood under the structured data format. So I think it's that combination of really being able to push massive amounts of data through a cloud product to be able to process it at scale. That is what I think is the combination that takes it to the next plateau, for sure.

>> So you talked about sort of, we're entering a new era, the age of AI. You know, a lot of people kind of focus on the cloud as the current era, but it really does feel like we're moving beyond that. The language that we use today, I feel like, is going to change, and you just started to touch on some of it. Sensing, you know, our senses, and, you know, the visualization and the auditory. So it's sort of this new experience that customers are seeing, and a lot of this machine intelligence behind it.

>> I call it the autonomous enterprise, right, the journey to be the autonomous enterprise. And when you're on this journey to be the autonomous enterprise, you need, really, the platform that can help you be that. Cloud is that platform which can help you get to the autonomous journey. But the autonomous journey does not end with the cloud, or doesn't end with the data lake.
These are just infrastructures that are basic necessities for being on that autonomous journey. But at the end, it's about how do you train and scale, the very large-scale training that needs to happen on this platform, for AI to be successful. And if you are an autonomous enterprise, then you have really figured out how to tap into AI and machine learning in a way that nobody else has, to derive business value, if you will. So you've got the platform, you've got the data, and now you're actually tapping into the autonomous components, AI and machine learning, to derive business intelligence and business value.

>> So I want to get into a little bit of Oracle's role. But to do that, I want to talk a little bit more about the industry. So if you think about the way that the industry seems to be restructuring around data, historically industries had their own stack or value chain, and if you were in the finance industry, you were there for life. You had your own sales channel, distribution, etcetera. But today you see companies traversing industries, which has never happened before. You know, you see Apple getting into content and music, and there's so many examples, Amazon buying Whole Foods. Data is sort of the enabler there. You have a lot of organizations, your customers, that are incumbents that don't want to get disrupted. A big part of your role is to help them become that autonomous enterprise so they don't get disrupted. I wonder if you could maybe comment on how you're doing that.

>> Yeah, I'll comment, and then, Grant, you can chime in. So when you think about banking, for example, a highly regulated industry, think about agriculture, these are highly regulated industries. It was very difficult to disrupt these industries. But now you look at an Amazon, right? And what does an Amazon or any other tech giant like Apple have? They have incredible amounts of data. They understand how people use, or how they want to do, banking. And so they've come up with Apple Cash or Amazon Pay, and these things are starting to eat into the market, right? So you would have never thought an Amazon could be a competition to a banking industry just because of regulations. But they're not hindered by the regulations, because they're starting at a different level. And so they become an instant threat and an instant disruptor to these highly regulated industries. That's what data does, right? When you use data as your DNA for your business, and you are sort of born in data, or you figured out how to be autonomous, if you will, and capture value from that data in a very significant manner, then you can get into industries that are not traditionally your own industry. It can be like the food industry, it can be the cloud industry, the book industry, you know, different industries. So, you know, that's what I see happening with the tech giants.

>> So, Grant, there's a really interesting point that Janet is making, that you mentioned. You started off with a couple of industries that are highly regulated, harder to disrupt. Music got disrupted, publishing got disrupted. But you've got these regulated businesses, defense, or automotive, which actually hasn't been truly disrupted yet. So Tesla, maybe, is a harbinger. And so you've got this spectrum of disruption. But is anybody safe from disruption?

>> Kind of. I don't think anyone's ever safe from it. It's change and evolution, right?
Whether it's, you know, swapping horseshoes for cars, or TV for movies, or Netflix, or any sort of evolution of a business, I wouldn't coast on any of them. And I think, to the earlier question around the value that we can help bring to Oracle customers, you know, we have a rich stack of applications, and I find that the space between the applications, the data that spans more than one of them, is a ripe playground for innovations, where the data already exists inside a company but it's trapped from both a technology and a business perspective. And that's where I think really any company can take advantage of knowing its data better and changing itself to take advantage of what's already there.

>> Yeah, powerful. People always throw the bromide out that data is the new oil, and we've said, no, data is far more valuable, because you can use it in a lot of different places. Oil you can use once, and it has to follow the laws of scarcity; data doesn't, if you can unlock it. And so a lot of the incumbents have built a business around whatever, a factory, or a process and people. A lot of the trillion-dollar startups that have become trillionaires, you know who I'm talking about, data's at the core. They're data companies. So it seems like a big challenge for your incumbent customers, clients, is to put data at the core, be able to break down those silos. How do they do that?
You have to have the right equal system for you to be able to be technologically advanced on a leader in that >>table. Stakes is what you're saying. And so this notion of the autonomous enterprise I would help me here cause I get kind of autonomous and automation coming into I t I t ops. I'm interested in how you see customers taking that beyond the technology organization into the enterprise. >>Yeah, this is this is such a great question, right? This is what I've been talking about all morning. Um, I think when AI is a technology problem, the company is that at a loss AI has to be a business problem. AI has to inform the business strategy. AI has to been companies. The successful companies that have done so. 90% of my investments are going towards state. We know that and most of it going towards AI. There's data out there about this, right? And so we look at what are these? 90 90% of the company's investments. Where are these going and whose doing this right? Who's not doing this right? One of the things we're seeing as results is that the companies that are doing it right have brought data into their business strategy. They've changed their business model, right? So it's not like making a better taxi, but coming up with a bow, right? So it's not like saying Okay, I'm going to have all these. I'm going to be the drug manufacturing company. I'm gonna put drugs out there in the market forces. I'm going to do connected help, right? And so how does data serve the business model of being connected? Help rather than being a drug company selling drugs to my customers, right? It's a completely different way of looking at it. And so now you guys informing drug discovery is not helping you just put more drugs to the market. Rather, it's helping you come up with new drugs that would help the process of connected games. There's a >>lot of discussion in the press about, you know, the ethics of AI, and how far should we take? A far. Can we take it from a technology standpoint, Long road map there? But how far should we take it? Do you feel as though of public policy will take care of that? A lot of that narrative is just kind of journalists looking for, You know, the negative story. Well, that's sort itself out. How much time do you spend with your customers talking about that and is what's Oracle's role there? I mean, Facebook says, Hey, the government should figure this out. What's your point? >>I think everybody has a role. It's a joint role, and none of us could give up our responsibilities as data scientists. We have heavy responsibility in this area on. We have heavy responsibility to advise the clients on the state area. Also, the data we come from the past has to change. That is inherently biased, right? And we tend to put data signs on biased data with the one dimensional view of the data. So we have to start looking at multiple dimensions of the data. It's got to start examining. I call it a responsible AI when you just simply take one variable or start to do machine learning with that because that's not that's not right. You have to examine the data. You got to understand how much biases in the data are you training a machine learning model with the bias? Is there diversity in the models? Is their diversity in the data? These are conversations we need to have. And we absolutely need policy around this because unless our lawmakers start to understand that we need the source of the data to change. 
And if we look at this, if we look at the source of the data and the source of the data is inherently biased or the source of the data has only a single representation, we're never going to change that downstream. AI is not going to help us. There so that has to change upstream. That's where the policy makers come into into play. The lawmakers come into play, but at the same time as we're building models, I think we have a responsibility to say can be triangle can be built with multiple models. Can we look at the results of these models? How are these feature's ranked? Are they ranked based on biases, sex, HP II, information? Are we taking the P I information out? Are we really looking at one variable? Somebody fell to pay their bill, but they just felt they they build because they were late, right? Voices that they don't have a bank account and be classified. Them is poor and having no bank account, you know what I mean? So all of this becomes part of response >>that humans are inherently biased, and so humans or building algorithms right there. So you say that through iteration, we can stamp out, the buyers >>can stamp out, or we can confront the bias. >>Let's make it transparent, >>make transparent. So I think that even if we can have the trust to be able to have the discussion on, is this data the right data that we're doing the analysis on On start the conversation day, we start to see the change. >>We'll wait so we could make it transparent. And I'm thinking a lot of AI is black box. Is that a problem? Is the black box you know, syndrome an issue or we actually >>is not a black box. We in Oracle, we're building our data science platform with an explicit feature called Explained Ability. Off the model on how the model came up with the features what features they picked. We can rearrange the features that the model picked, citing Explain ability is very important for ordinary people. Trust ai because we can't trust even even they designed This contrast ai right to a large extent. So for us to get to that level, where we can really trust what ai speaking in terms of a modern, we need to have explain ability. And I think a lot of the companies right now are starting to make that as part of their platform. >>So that's your promise. Toe clients is that your AI will be a that's not everybody's promised. I mean, there's a lot of black box and, you know, >>there is, if you go to open source and you start downloading, you'll get a lot of black boss. The other advantage to open source is sometimes you can just modify the black box. You know they can give you access, and you could modify the black box. But if you get companies that have released to open, source it somewhat of a black box, so you have to figure out the balance between you. Don't really worry too much about the black box. If you can see that the model has done a pretty good job as compared to other models, right if I take if I triangulate the results off the algorithm and the triangulation turns out to be reasonable, the accuracy on our values and the Matrix is show reasonable results. Then I don't really have to brief one model is to bias compared to another moderate. But I worry if if there's only one dimension to it. >>Well, ultimately much too much of the data scientists to make dismay, somebody in the business side is going to ask about cause I think this is what the model says. Why is it saying that? 
And you know, ethical reasons aside, you're gonna want to understand why the predictions are what they are, and certainly as you're going to examine those things as you look at the factors that are causing the predictions on the outcomes, I think there's any sort of business should be asking those responsibility questions of everything they do, ai included, for sure. >>So we're entering a new era. We kind of all agree on that. So I want to just throw a few questions out, have a little fun here, so feel free to answer in any order. So when do you think machines will be able to make better diagnoses than doctors? >>I think they already are making better diagnosis. And there's so much that I found out recently that most of the very complicated cancel surgeries are done by machines doctors to standing by and making sure that the machines are doing it well, right? And so I think the machines are taking over in some aspects. I wouldn't say all aspects. And then there's the bedside manners. You really need the human doctor and you need the comfort of talking to >>a CIO inside man. Okay, when >>do you >>think that driving and owning your own vehicle is going to be the exception rather than the rule >>that I think it's so far ahead. It's going to be very, very near future, you know, because if you've ever driven in an autonomous car, you'll find that after your initial reservations, you're going to feel a lot more safer in an autonomous car because it's it's got a vision that humans don't. It's got a communication mechanism that humans don't right. It's talking to all the fleets of cars. Richardson Sense of data. It's got a richer sense of vision. It's got a richer sense of ability to react when a kid jumps in front of the car where a human will be terrified, not able to make quick decisions, the car can right. But at the same time we're going to have we're gonna have some startup problems, right? We're going to see a I miss file in certain areas, and junk insurance companies are getting gearing themselves up for that because that's just but the data is showing us that we will have tremendously decreased death rates, right? That's a pretty good start to have AI driving up costs right >>believer. Well, as you're right, there's going to be some startup issues because this car, the vehicle has to decide. Teoh kill the person who jumped in front of me. Or do I kill the driver killing? It's overstating, but those are some of the stories >>and humans you don't. You don't question the judgment system for that. >>There's no you person >>that developed right. It's treated as a one off. But I think if you look back, you look back five years where we're way. You figure the pace of innovation and the speed and the gaps that we're closing now, where we're gonna be in five years, you have to figure it's I mean, I don't I have an eight year old son. My question. If he's ever gonna drive a car, yeah, >>How about retail? Do you think retail stores largely will disappear? >>I think retail. Will there be a customer service element to retail? But it will evolve from where it's at in a very, very high stakes, right, because now, with our if I did, you know we used to be invisible as we want. We still aren't invisible as you walk into a retail store, right, Even if you spend a lot of money in in retail. 
And you know now with buying patterns and knowing who the customer is and your profile is out there on the Web, you know, just getting a sense of who this person is, what their intent is walking into the store and doing doing responsible ai like bringing value to that intent right, not responsible. That will gain the trust. And as people gain the trust and then verify these, you're in the location. You're nearby. You normally by the sword suits on sale, you know, bring it all together. So I think there's a lot of connective tissue work that needs to happen. But that's all coming. It's coming together, >>not the value and what the what? The proposition of the customers. If it's simply there as a place where you go and buy, pick up something, you already know what you're going to get. That story doesn't add value. But if there's something in the human expertise and the shared felt, that experience of being in the store, that's that's where you'll see retailers differentiate themselves. I >>like, yeah, yeah, yeah, >>you mentioned Apple pay before you think traditional banks will lose control of payment systems, >>They're already losing control of payment systems, right? I mean, if you look at there was no reason for the banks to create Siri like assistance. They're all over right now, right? And we started with Alexa first. So you can see the banks are trying to be a lot more customized customer service, trying to be personalized, trying to really make it connect to them in a way that you have not connected to the bank before. The way we connected to the bank is you know, you knew the person at the bank for 20 years or since when you had your first bank account, right? That's how you connect with the banks. And then you go to a different branch, and then all of a sudden you're invisible, right? Nobody knows you. Nobody knows that you were 20 years with the bank. That's changing, right? They're keeping track of which location you're going to and trying to be a more personalized. So I think ai is is a forcing function in some ways to provide more value. If anything, >>we're definitely entering a new era. The age of of AI of the autonomous enterprise folks, thanks very much for great segment. Really appreciate it. >>Yeah. Pleasure. Thank you for having us. >>All right. And thank you and keep it right there. We'll be back with our next guest right after this short break. You're watching the Cube's coverage of the rebirth of Oracle consulting right back. Yeah, yeah, yeah, yeah.

Published Date: Mar 25, 2020

SUMMARY :

Janet George (group VP, Autonomous for Advanced Analytics, Machine Learning and Artificial Intelligence at Oracle) and Grant Gibson (group VP of growth and strategy at Oracle) discuss the rebirth of Oracle Consulting and what it takes to become an "autonomous enterprise." They argue that earlier waves of AI fell short because enterprises pushed historical BI data that was never prepared as training sets, and that cloud platforms, novel data types such as image and spoken language, and large-scale training are what change the picture now. They cover how data-rich tech giants cross into regulated industries, why breaking down data silos surfaces both new insights and the data a business still lacks, and why AI has to inform business strategy rather than remain an IT problem. The conversation closes on responsible AI, bias in source data, PII, model explainability, and triangulating multiple models, along with quick takes on AI in medicine, autonomous vehicles, retail, and banking.

ENTITIES

| Entity | Category | Confidence |
| --- | --- | --- |
| Apple | ORGANIZATION | 0.99+ |
| Janet George | PERSON | 0.99+ |
| Oracle | ORGANIZATION | 0.99+ |
| Amazon | ORGANIZATION | 0.99+ |
| Facebook | ORGANIZATION | 0.99+ |
| 20 years | QUANTITY | 0.99+ |
| James | PERSON | 0.99+ |
| Siri | TITLE | 0.99+ |
| 75% | QUANTITY | 0.99+ |
| 25% | QUANTITY | 0.99+ |
| 90% | QUANTITY | 0.99+ |
| 90 | QUANTITY | 0.99+ |
| five years | QUANTITY | 0.99+ |
| Richardson | PERSON | 0.99+ |
| apple | ORGANIZATION | 0.99+ |
| Tesla | ORGANIZATION | 0.99+ |
| Grant Gibson | PERSON | 0.99+ |
| Oracle Consulting | ORGANIZATION | 0.99+ |
| Netflix | ORGANIZATION | 0.99+ |
| Moore | PERSON | 0.98+ |
| One | QUANTITY | 0.98+ |
| Today | DATE | 0.98+ |
| one variable | QUANTITY | 0.98+ |
| today | DATE | 0.97+ |
| one model | QUANTITY | 0.97+ |
| both | QUANTITY | 0.97+ |
| six things | QUANTITY | 0.97+ |
| Alexa | TITLE | 0.97+ |
| one dimension | QUANTITY | 0.97+ |
| eight year old | QUANTITY | 0.97+ |
| First | QUANTITY | 0.96+ |
| first bank account | QUANTITY | 0.95+ |
| one way | QUANTITY | 0.94+ |
| Cube | PERSON | 0.93+ |
| more than one | QUANTITY | 0.9+ |
| first | QUANTITY | 0.86+ |
| single representation | QUANTITY | 0.84+ |
| about five | QUANTITY | 0.8+ |
| one | QUANTITY | 0.76+ |
| Gibson Group | ORGANIZATION | 0.74+ |
| Gina | PERSON | 0.73+ |
| China | LOCATION | 0.73+ |
| Hadoop | TITLE | 0.71+ |
| Cube | COMMERCIAL_ITEM | 0.68+ |
| Venice | LOCATION | 0.67+ |
| Winter | EVENT | 0.66+ |
| B I | ORGANIZATION | 0.65+ |
| Hadoop | PERSON | 0.63+ |
| Chief Data | ORGANIZATION | 0.6+ |
| Hadoop | ORGANIZATION | 0.59+ |
| Grant | PERSON | 0.56+ |
| Analytics | ORGANIZATION | 0.52+ |
| Ai | ORGANIZATION | 0.47+ |
| Ai | TITLE | 0.37+ |

Janet George & Grant Gibson, Oracle Consulting | Empowering the Autonomous Enterprise of the Future


 

>> Announcer: From Chicago, it's theCUBE, covering Oracle Transformation Day 2020. Brought to you by Oracle Consulting. >> Welcome back, everybody, to this special digital event coverage that theCUBE is looking into the rebirth of Oracle Consulting. Janet George is here, she's a group VP, autonomous for advanced analytics with machine learning and artificial intelligence at Oracle, and she's joined by Grant Gibson, who's a group VP of growth and strategy at Oracle. Folks, welcome to theCUBE, thanks so much for coming on. >> Thank you. >> Thank you. >> Grant, I want to start with you because you've got strategy in your title. I'd like to start big-picture. What is the strategy with Oracle, specifically as it relates to autonomous, and also consulting? >> Sure, so, I think Oracle has a deep legacy of strength in data, and over the company's successful history, it's evolved what that is from steps along the way. And if you look at the modern enterprise, an Oracle client, I think there's no denying that we've entered the age of AI, that everyone knows that artificial intelligence and machine learning are a key to their success in the business marketplace going forward. And while generally it's acknowledged that it's a transformative technology, and people know that they need to take advantage of it, it's the how that's really tricky, and that most enterprises, in order to really get an enterprise-level ROI on an AI investment, need to engage in projects of significant scope. And going from realizing there's an opportunity or realizing there's a threat to mobilizing yourself to capitalize on it is a daunting task for enterprise. Certainly one that's, anybody that's got any sort of legacy of success has built-in processes, has built-in systems, has built-in skill sets, and making that leap to be an autonomous enterprise is challenging for companies to wrap their heads around. So as part of the rebirth of Oracle Consulting, we've developed a practice around how to both manage the technology needs for that transformation as well as the human needs, as well as the data science needs to it. So there's-- >> So, wow, there's about five or six things that I want to (Grant chuckles) follow up with you there, so this is a good conversation. Janet, ever since I've been in the industry, when you're talking about AI, it's sort of start-stop, start-stop. We had the AI winter, and now it seems to be here. It almost feels like the technology never lived up to its promise, 'cause we didn't have the horsepower, the compute power, it didn't have enough data, maybe. So we're here today, it feels like we are entering a new era. Why is that, and how will the technology perform this time? >> So for AI to perform, it's very reliant on the data. We entered the age of AI without having the right data for AI. So you can imagine that we just launched into AI without our data being ready to be training sets for AI. So we started with BI data, or we started with data that was already historically transformed, formatted, had logical structures, physical structures. This data was sort of trapped in many different tools, and then, suddenly, AI comes along, and we say, take this data, our historical data, we haven't tested it to see if this has labels in it, this has learning capability in it. We just thrust the data to AI. And that's why we saw the initial wave of AI sort of failing, because it was not ready for AI, ready for the generation of AI, if you will. 
>> So, to me, this is, I always say this was the contribution that Hadoop left us, right? I mean, Hadoop, everybody was crazy, it turned into big data. Oracle was never that nuts about it, they just kind of watched, sat back and watched, obviously participated. But it gathered all this data, it created cheap data lakes, (laughs) which people always joke, turns into data swamps. But the data is oftentimes now within organizations, at least present, right. >> Yes, yes, yes. >> Like now, it's a matter of what? What's the next step for really good value? >> Well, basically, what Hadoop did to the world of data was Hadoop freed data from being stuck in tools. It basically brought forth this concept of platform. And platform is very essential, because as we enter the age of AI and we enter the petabyte range of data, we can't have tools handling all of this data. The data needs to scale. The data needs to move. The data needs to grow. And so, we need the concept of platform so we can be elastic for the growth of the data. It can be distributed. It can grow based on the growth of the data. And it can learn from that data. So that's the reason why Hadoop sort of brought us into the platform world. And-- >> Right, and a lot of that data ended up in the cloud. I always say for years, we marched to the cadence of Moore's law. That was the innovation engine in this industry. As fast as you could get a chip in, you'd get a little advantage, and then somebody would leapfrog. Today, it's, you've got all this data, you apply machine intelligence, and cloud gives you scale, it gives you agility. Your customers, are they taking advantage of that new innovation cocktail? First of all, do you buy that, and how do you see them taking advantage of this? >> Yeah, I think part of what Janet mentioned makes a lot of sense, is that at the beginning, when you're taking the existing data in an enterprise and trying to do AI to it, you often get things that look a lot like what you already knew, because you're dealing with your existing data set and your existing expertise. And part of, I think, the leap that clients are finding success with now is getting novel data types. You're moving from the zeroes and ones of structured data to image, language, written language, spoken language. You're capturing different data sets in ways that prior tools never could, and so, the classifications that come out of it, the insights that come out of it, the business process transformation that comes out of it is different than what we would have understood under the structured data format. So I think it's that combination of really being able to push massive amounts of data through a cloud product to be able to process it at scale. That is what I think is the combination that takes it to the next plateau for sure. >> So you talked about sort of we're entering the new era, age of AI. A lot of people kind of focus on the cloud as sort of the current era, but it really does feel like we're moving beyond that. The language that we use today, I feel like, is going to change, and you just started to touch on some of it, sensing, our senses, and the visualization, and the auditory, so it's sort of this new experience that customers are seeing, and a lot of this machine intelligence behind that. >> I call it the autonomous enterprise, right? >> Okay. >> The journey to be the autonomous enterprise. And when you're on this journey to be the autonomous enterprise, you need, really, the platform that can help you be. 
Cloud is that platform which can help you get to the autonomous journey. But the autonomous journey does not end with the cloud, or doesn't end with the data lake. These are just infrastructures that are basic, necessary, necessities for being on that autonomous journey. But at the end, it's about, how do you train and scale very large-scale training that needs to happen on this platform for AI to be successful? And if you are an autonomous enterprise, then you have really figured out how to tap into AI and machine learning in a way that nobody else has to derive business value, if you will. So you've got the platform, you've got the data, and now you're actually tapping into the autonomous components, AI and machine learning, to derive business intelligence and business value. >> So I want to get into a little bit of Oracle's role, but to do that, I want to talk a little bit more about the industry. So if you think about the way the industry seems to be restructuring around data, historically, industries had their own stack or value chain, and if you were in the finance industry, you were there for life, you know? >> Yes. >> You had your own sales channel, distribution, et cetera. But today, you see companies traversing industries, which has never happened before. You see Apple getting into content, and music, and there's so many examples, Amazon buying Whole Foods. Data is sort of the enabler there. You have a lot of organizations, your customers, that are incumbents, that they don't want to get disrupted. A big part of your role is to help them become that autonomous enterprise so they don't get disrupted. I wonder if you could maybe comment on how you're doing. >> Yeah, I'll comment, and then, Grant, you can chime in. >> Great. >> So when you think about banking, for example, highly regulated industry, think about agriculture, these are highly regulated industries. It is very difficult to disrupt these industries. But now you're looking at Amazon, and what does an Amazon or any other tech giant like Apple have? They have incredible amounts of data. They understand how people use, or how they want to do, banking. And so, they've come up with Apple Cash, or Amazon Pay, and these things are starting to eat into the market. So you would have never thought an Amazon could be a competition to a banking industry, just because of regulations, but they are not hindered by the regulations because they're starting at a different level, and so, they become an instant threat and an instant disruptor to these highly regulated industries. That's what data does. When you use data as your DNA for your business, and you are sort of born in data, or you've figured out how to be autonomous, if you will, capture value from that data in a very significant manner, then you can get into industries that are not traditionally your own industry. It can be the food industry, it can be the cloud industry, the book industry, you know, different industries. So that's what I see happening with the tech giants. >> So, Grant, this is a really interesting point that Janet is making, that, you mentioned you started off with a couple of industries that are highly regulated and harder to disrupt. You know, music got disrupted, publishing got disrupted, but you've got these regulated businesses, defense. Automotive hasn't been truly disrupted yet, so Tesla maybe is a harbinger. And so, you've got this spectrum of disruption. But is anybody safe from disruption? >> (laughs) I don't think anyone's ever safe from it. 
It's change and evolution, right? Whether it's swapping horseshoes for cars, or TV for movies, or Netflix, or any sort of evolution of a business, I wouldn't coast on any of it. And I think, to your earlier question around the value that we can help bring to Oracle customers is that we have a rich stack of applications, and I find that the space between the applications, the data that spans more than one of them, is a ripe playground for innovations where the data already exists inside a company but it's trapped from both a technology and a business perspective, and that's where, I think, really, any company can take advantage of knowing its data better and changing itself to take advantage of what's already there. >> Powerful. People always throw the bromide out that data is the new oil, and we've said, no, data's far more valuable, 'cause you can use it in a lot of different places. Oil, you can use once and that's all you can do. >> Yeah. >> It has to follow the laws of scarcity. Data, if you can unlock it, and so, a lot of the incumbents, they have built a business around whatever, a factory or process and people. A lot of the trillion-dollar startups, that become trillionaires, you know who I'm talking about, data's at the core, they're data companies. So it seems like a big challenge for your incumbent customers, clients, is to put data at the core, be able to break down those silos. How do they do that? >> Mm, breaking down silos is really super critical for any business. It used to be okay to operate in a silo. For example, you would think that, "Oh, I could just be payroll and expense reports, and it wouldn't matter if I get into vendor performance management or purchasing. That can operate as a silo." But anymore, we are finding that there are tremendous insights between vendor performance management and expense reports, these things are all connected. So you can't afford to have your data sit in silos. So breaking down that silo actually gives the business very good performance, insights that they didn't have before. So that's one way to go. But another phenomenon happens. When you start to break down the silos, you start to recognize what data you don't have to take your business to the next level. That awareness will not happen when you're working with existing data. So that awareness comes into form when you break down the silos and you start to figure out you need to go after a different set of data to get you to new product creation, what would that look like, new test insights, or new capex avoidance, that data is just, you have to go through the iteration to be able to figure that out.
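Janet's example of insights hiding between expense reports and vendor performance can be illustrated with one small join across two previously siloed tables. Everything in the sketch below is hypothetical (the table names, columns, and the 0.80 on-time threshold); it only shows the mechanics of asking a question that neither silo can answer alone.

```python
# Hypothetical example: joining two formerly siloed tables (expense reports and
# vendor performance) to ask a question neither table can answer on its own.
import pandas as pd

expenses = pd.DataFrame({
    "vendor_id": ["V1", "V2", "V3", "V1", "V2"],
    "amount": [1200, 450, 3100, 800, 650],
})
vendor_performance = pd.DataFrame({
    "vendor_id": ["V1", "V2", "V3"],
    "on_time_delivery_rate": [0.98, 0.71, 0.85],
})

combined = expenses.merge(vendor_performance, on="vendor_id", how="left")
spend_by_vendor = combined.groupby("vendor_id").agg(
    total_spend=("amount", "sum"),
    on_time_rate=("on_time_delivery_rate", "first"),
)

# Cross-silo question: where are we spending heavily on vendors that underperform?
flagged = spend_by_vendor[spend_by_vendor["on_time_rate"] < 0.80]
print(flagged.sort_values("total_spend", ascending=False))
```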
So you have to bring in modern infrastructure, distributed computing, that, there's no compromise there. You have to have the right ecosystem for you to be able to be technologically advanced and a leader in that space. >> But that's kind of table stakes, is what you're saying. >> Stakes. >> So this notion of the autonomous enterprise, help me here. 'Cause I get kind of autonomous and automation coming into IT, IT ops. I'm interested in how you see customers taking that beyond the technology organization into the enterprise. >> Yeah, this is such a great question. This is what I've been talking about all morning. I think when AI is a technology problem, the company is at a loss. AI has to be a business problem. AI has to inform the business strategy. When companies, the successful companies that have done, so, 90% of our investments are going towards data, we know that, and most of it going towards AI. There's data out there about this. And so, we look at, what are these 90% of the companies' investments, where are these going, and who is doing this right, and who is not doing this right? One of the things we are seeing as results is that the companies that are doing it right have brought data into their business strategy. They've changed their business model. So it's not making a better taxi, but coming up with Uber. So it's not like saying, "Okay, I'm going to be "the drug manufacturing company, "I'm going to put drugs out there in the market," versus, "I'm going to do connected health." And so, how does data serve the business model of being connected health, rather than being a drug company selling drugs to my customers? It's a completely different way of looking at it. And so now, AI's informing drug discovery. AI is not helping you just put more drugs to the market. Rather, it's helping you come up with new drugs that will help the process of connected care. >> There's a lot of discussion in the press about the ethics of AI, and how far should we take AI, and how far can we take it from a technology standpoint, (laughs) long road map, there. But how far should we take it? Do you feel as though public policy will take care of that, a lot of that narrative is just kind of journalists looking for the negative story? Will that sort itself out? How much time do you spend with your customers talking about that, and what's Oracle's role there? Facebook says, "Hey, the government should figure this out." What's your sort of point of view on that? >> I think everybody has a role, it's a joint role, and none of us can give up our responsibilities. As data scientists, we have heavy responsibility in this area, and we have heavy responsibility to advise the clients on this area also. The data we come from, the past, has to change. That is inherently biased. And we tend to put data science on biased data with a one-dimensional view of the data. So we have to start looking at multiple dimensions of the data. We've got to start examining, I call it irresponsible AI, when you just simply take one variable, we'll start to do machine learning with that, 'cause that's not right. You have to examine the data. You've got to understand how much bias is in the data. Are you training a machine learning model with the bias? Is there diversity in the models? Is there diversity in the data? These are conversations we need to have. 
And we absolutely need policy around this, because unless our lawmakers start to understand that we need the source of the data to change, and if we look at the source of the data, and the source of the data is inherently biased or the source of the data has only a single representation, we're never going to change that downstream. AI's not going to help us there. So that has to change upstream. That's where the policy makers come into play, the lawmakers come into play. But at the same time, as we're building models, I think we have a responsibility to say, "Can we triangulate? "Can we build with multiple models? "Can we look at the results of these models? "How are these features ranked? "Are they ranked based on biases, sex, age, PII information? "Are we taking the PII information out? "Are we really looking at one variable?" Somebody failed to pay their bill, but they just failed to pay their bill because they were late, versus that they don't have a bank account and we classify them as poor on having no bank account, you know what I mean? So all this becomes part of responsible AI. >> But humans are inherently biased, and so, if humans are building algorithms-- >> That's right, that's right. >> There is the bias. >> So you're saying that through iteration, we can stamp out the bias? Is that realistic? >> We can stamp out the bias, or we can confirm the bias. >> Or at least make it transparent. >> Make it transparent. So I think that even if we can have the trust to be able to have the discussion on, "Is this data "the right data that we are doing the analysis on?" and start the conversation there, we start to see the change. >> Well, wait, so we could make it transparent, then I'm thinking, a lot of AI is black box. Is that a problem? Is the black box syndrome an issue, or are we, how would we deal with it? >> Actually, AI is not a black box. We, in Oracle, we are building our data science platform with an explicit feature called explainability of the model, on how the model came up with the features, what features it picked. We can rearrange the features that the model picked. So I think explainability is very important for ordinary people to trust AI. Because we can't trust AI. Even data scientists can't trust AI, to a large extent. So for us to get to that level where we can really trust what AI's picking, in terms of a model, we need to have explainability. And I think a lot of the companies right now are starting to make that as part of their platform. >> So that's your promise to clients, is that your AI will not be a black box. >> Absolutely, absolutely. >> 'Cause that's not everybody's promise. >> Yes. >> I mean, there's a lot of black box in AI, as you well know. >> Yes, yes, there is. If you go to open source and you start downloading, you'll get a lot of black box. The other advantage to open source is sometimes you can just modify the black box. They can give you access and you can modify the black box. But if you get companies that have released to open source, it's somewhat of a black box, so you have to figure out the balance between. You don't really have to worry too much about the black box if you can see that the model has done a pretty good job as compared to other models. If I triangulate the results of the algorithm, and the triangulation turns out to be reasonable, the accuracy and the r values and the matrixes show reasonable results, then I don't really have to worry if one model is too biased compared to another model. 
But I worry if there's only one dimension to it. >> Mm-hm, well, ultimately, to much of the data scientists' dismay, somebody on the business side is going to ask about causality. >> That's right. >> "Well, this is what "the model says, why is it saying that?" >> Yeah, right. >> Yeah. >> And, ethical reasons aside, you're going to want to understand why the predictions are what they are, and certainly, as you go in to examine those things, as you look at the factors that are causing the predictions and the outcomes, I think any sort of business should be asking those responsibility questions of everything they do, AI included, for sure. >> So, we're entering a new era, we kind of all agree on that. So I just want to throw a few questions out and have a little fun here, so feel free to answer in any order. So when do you think machines will be able to make better diagnoses than doctors? >> I think they already are making better diagnoses. I mean, there's so much, like, I found out recently that most of the very complicated cancer surgeries are done by machines, doctors just standing by and making sure that the machines are doing it well. And so, I think the machines are taking over in some aspects, I wouldn't say all aspects. And then there's the bedside manners, where you (laughs) really need the human doctor, and you need the comfort of talking to the doctor. >> Smiley face, please! (Janet laughs) >> That's advanced AI, to give it a better bedside manner. >> Okay, when do you think that driving and owning your own vehicle is going to be the exception rather than the rule? >> That, I think, is so far ahead, it's going to be very, very near future, because if you've ever driven in an autonomous car, you'll find that after your initial reservations, you're going to feel a lot more safer in an autonomous car. Because it's got a vision that humans don't. It's got a communication mechanism that humans don't. It's talking to all the fleets of cars. >> It's got a richer sense of data. >> It's got a richer sense of data, it's got a richer sense of vision, it's got a richer sense of ability to (snaps) react when a kid jumps in front of the car. Where a human will be terrified and not able to make quick decisions, the car can. But at the same time, we're going to have some startup problems. We're going to see AI misfire in certain areas, and insurance companies are gearing themselves up for that, 'cause that's just, but the data's showing us that we will have tremendously decreased death rates. That's a pretty good start to have AI driving our cars. >> You're a believer, well, and you're right, there's going to be some startup issues, because this car, the vehicle has to decide, "Do I kill that person who jumped in front of me, "or do I kill the driver?" Not kill, I mean, that's overstating-- >> Yeah. >> But those are some of the startup things, and there will be others. >> And humans, you don't question the judgment system for that. >> Yes. >> There's no-- >> Dave: Right, they're yelling at humans. >> Person that developed, right. It's treated as a one-off. But I think if you look back five years, where were we? You figure, the pace of innovation and the speed and the gaps that we're closing now, where are we going to be in five years? >> Yeah. >> You have to figure it's, I have an eight-year-old son, and I question if he's ever going to drive a car. >> Yeah. >> Yeah. >> How about retail? Do you think retail stores largely will disappear? 
>> Oh, I think retail, there will be a customer service element to retail, but it will evolve from where it's at in a very, very high-stakes rate, because now, with RFID, you know who's, we used to be invisible as we walked, we still are invisible as you walk into a retail store, even if you spend a lot of money in retail. And now, with buying patterns and knowing who the customer is, and your profile is out there on the Web, just getting a sense of who this person is, what their intent is walking into the store, and doing responsible AI, bringing value to that intent, not irresponsibly, that will gain the trust, and as people gain the trust. And then RFIDs, you're in the location, you're nearby, you'd normally buy the suit, the suit's on sale, bring it all together. So I think there's a lot of connective tissue work that needs to happen, but that's all coming together. >> Yeah, it's about the value-add and what the proposition to the customer is. If it's simply there as a place where you go and pick out something you already know what you're going to get, that store doesn't add value, but if there's something in the human expertise, or in the shared, felt sudden experience of being in the store, that's where you'll see retailers differentiate themselves. >> I like to shop still. (laughs) >> Yeah, yeah. >> You mentioned Apple Pay before. Well, you think traditional banks will lose control of the payment systems? >> They're already losing control of payment systems. If you look at, there was no reason for the banks to create Siri-like assistants. They're all over right now. And we started with Alexa first. So you can see the banks are trying to be a lot more customized, customer service, trying to be personalized, trying to really make you connect to them in a way that you have not connected to the bank before. The way that you connected to the bank is you knew the person at the bank for 20 years, or since when you had your first bank account. That's how you connected with the banks. And then you go to a different branch, and then, all of a sudden, you're invisible. Nobody knows you, nobody knows that you were 20 years with the bank. That's changing. They're keeping track of which location you're going to, and trying to be a more personalized. So I think AI is a forcing function, in some ways, to provide more value, if anything. >> Well, we're definitely entering a new era, the age of AI, the autonomous enterprise. Folks, thanks very much for a great segment, really appreciate it. >> Yeah, our pleasure, thank you for having us. >> Thank you for having us. >> You're welcome, all right, and thank you. And keep it right there, we'll be right back with our next guest right after this short break. You're watching theCUBE's coverage of the rebirth of Oracle Consulting. We'll be right back. (upbeat electronic music)

Published Date : Mar 12 2020

SUMMARY :

Brought to you by Oracle Consulting. is looking into the rebirth of Oracle Consulting. Grant, I want to start with you because and people know that they need to take advantage of it, to its promise, 'cause we didn't have the horsepower, ready for the generation of AI, if you will. But the data is oftentimes now within organizations, So that's the reason why Hadoop and cloud gives you scale, it gives you agility. makes a lot of sense, is that at the beginning, is going to change, and you just started But at the end, it's about, how do you train and if you were in the finance industry, I wonder if you could maybe comment on how you're doing. you can chime in. the book industry, you know, different industries. that Janet is making, that, you mentioned you started off of applications, and I find that the space that data is the new oil, and we've said, at the core, be able to break down those silos. to figure out you need to go after a different set of data 75% of the failures, and you know the value that you don't have access to, so it's an enabler. You have to have the right ecosystem for you of the autonomous enterprise, help me here. One of the things we are seeing as results There's a lot of discussion in the press about So that has to change upstream. We can stamp out the bias, and start the conversation there, Is the black box syndrome an issue, or are we, called explainability of the model, So that's your promise to clients, is that your AI as you well know. about the black box if you can see that the model is going to ask about causality. as you go in to examine those things, So when do you think machines will be able and making sure that the machines are doing it well. to give it a better bedside manner. it's going to be very, very near future, It's got a richer But at the same time, we're going of the startup things, and there will be others. And humans, you don't question and the speed and the gaps that we're closing now, You have to figure it's, and as people gain the trust. you already know what you're going to get, I like to shop still. Well, you think traditional banks for the banks to create Siri-like assistants. the age of AI, the autonomous enterprise. of the rebirth of Oracle Consulting.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JanetPERSON

0.99+

AmazonORGANIZATION

0.99+

AppleORGANIZATION

0.99+

FacebookORGANIZATION

0.99+

Janet GeorgePERSON

0.99+

OracleORGANIZATION

0.99+

DavePERSON

0.99+

Grant GibsonPERSON

0.99+

90%QUANTITY

0.99+

75%QUANTITY

0.99+

Oracle ConsultingORGANIZATION

0.99+

20 yearsQUANTITY

0.99+

TeslaORGANIZATION

0.99+

25%QUANTITY

0.99+

TodayDATE

0.99+

GrantPERSON

0.99+

SiriTITLE

0.99+

ChicagoLOCATION

0.99+

UberORGANIZATION

0.99+

five yearsQUANTITY

0.99+

bothQUANTITY

0.99+

one variableQUANTITY

0.99+

singleQUANTITY

0.98+

one modelQUANTITY

0.98+

first bank accountQUANTITY

0.98+

todayDATE

0.98+

Whole FoodsORGANIZATION

0.98+

OneQUANTITY

0.97+

Oracle Transformation Day 2020EVENT

0.97+

NetflixORGANIZATION

0.97+

one wayQUANTITY

0.96+

more than oneQUANTITY

0.95+

theCUBEORGANIZATION

0.95+

eight-year-oldQUANTITY

0.94+

AlexaTITLE

0.94+

one dimensionQUANTITY

0.93+

six thingsQUANTITY

0.93+

trillion-dollarQUANTITY

0.92+

MoorePERSON

0.92+

HadoopPERSON

0.91+

HadoopTITLE

0.9+

FirstQUANTITY

0.9+

firstQUANTITY

0.84+

about fiveQUANTITY

0.78+

onceQUANTITY

0.76+

Amazon PayORGANIZATION

0.75+

Apple PayTITLE

0.74+

yearsQUANTITY

0.69+

one-QUANTITY

0.67+

CashCOMMERCIAL_ITEM

0.53+

Janet George, Western Digital | WiDS 2019


 

>> Live from Stanford University. It's the Cube covering global Women in Data Science conference brought to you by Silicon Angle media. >> Welcome back to the key. We air live at Stanford University for the fourth annual Women in Data Science Conference. The Cube has had the pleasure of being here all four years on I'm welcoming Back to the Cube, one of our distinguished alumni Janet George, the fellow chief data officer, scientists, big data and cognitive computing at Western Digital. Janet, it's great to see you. Thank you. Thank you so much. So I mentioned yes. Fourth, Annie will women in data science. And it's been, I think I met you here a couple of years ago, and we look at the impact. It had a chance to speak with Margo Garrett's in a about an hour ago, one of the co founders of Woods saying, We're expecting twenty thousand people to be engaging today with the Livestream. There are wigs events in one hundred and fifty locations this year, fifty plus countries expecting about one hundred thousand people to engage the attention. The focus that they have on data science and the opportunities that it has is really palpable. Tell us a little bit about Western Digital's continued sponsorship and what makes this important to you? >> So Western distal has recently transformed itself as a company, and we are a data driven company, so we are very much data infrastructure company, and I think that this momentum off A is phenomenal. It's just it's a foundational shift in the way we do business, and this foundational shift is just gaining tremendous momentum. Businesses are realizing that they're going to be in two categories the have and have not. And in order to be in the half category, you have started to embrace a You've got to start to embrace data. You've got to start to embrace scale and you've got to be in the transformation process. You have to transform yourself to put yourself in a competitive position. And that's why Vest Initial is here, where the leaders in storage worldwide and we'd like to be at the heart of their data is. >> So how has Western Digital transform? Because if we look at the evolution of a I and I know you're give you're on a panel tan, you're also giving a breakout on deep learning. But some of the importance it's not just the technical expertise. There's other really important skills. Communication, collaboration, empathy. How has Western digital transformed to really, I guess, maybe transform the human capital to be able to really become broad enough to be ableto tow harness. Aye, aye, for good. >> So we're not just a company that focuses on business for a We're doing a number of initiatives One of the initiatives were doing is a I for good, and we're doing data for good. This is related to working with the U. N. We've been focusing on trying to figure out how climate change the data that impacts climate change, collecting data and providing infrastructure to store massive amounts of species data in the environment that we've never actually collected before. So climate change is a huge area for us. Education is a huge area for us. Diversity is a huge area for us. We're using all of these areas as launching pad for data for good and trying to use data to better mankind and use a eye to better mankind. >> One of the things that is going on at this year's with second annual data fun. 
And when you talk about data for good, I think this year's Predictive Analytics Challenge was to look at satellite imagery to train the model to evaluate which images air likely tohave oil palm plantations. And we know that there's a tremendous social impact that palm oil and oil palm plantations in that can can impact, such as I think in Borneo and eighty percent reduction in the Oregon ten population. So it's interesting that they're also taking this opportunity to look at data for good. And how can they look at predictive Analytics to understand how to reduce deforestation like you talked about climate and the impact in the potential that a I and data for good have is astronomical? >> That's right. We could not build predictive models. We didn't have the data to put predictive boats predictive models. Now we have the data to put put out massively predictive models that can help us understand what change would look like twenty five years from now and then take corrective action. So we know carbon emissions are causing very significant damage to our environment. And there's something we can do about it. Data is helping us do that. We have the infrastructure, economies of scale. We can build massive platforms that can store this data, and then we can. Alan, it's the state at scale. We have enough technology now to adapt to our ecosystem, to look at disappearing grillers, you know, to look at disappearing insects, to look at just equal system that be living, how, how the ecosystem is going to survive and be better in the next ten years. There's a >> tremendous amount of power that data for good has, when often times whether the Cube is that technology conferences or events like this. The word trust issues yes, a lot in some pretty significant ways. And we often hear that data is not just the life blood of an organization, whether it's in just industry or academia. To have that trust is essential without it. That's right. No, go. >> That's right. So the data we have to be able to be discriminated. That's where the trust comes into factor, right? Because you can create a very good eh? I'm odder, or you can create a bad air more so a lot depends on who is creating the modern. The authorship of the model the creator of the modern is pretty significant to what the model actually does. Now we're getting a lot of this new area ofthe eyes coming in, which is the adversarial neural networks. And these areas are really just springing up because it can be creators to stop and block bad that's being done in the world next. So, for example, if you have malicious attacks on your website or hear militias, data collection on that data is being used against you. These adversarial networks and had built the trust in the data and in the so that is a whole new effort that has started in the latest world, which is >> critical because you mentioned everybody. I think, regardless of what generation you're in that's on. The planet today is aware of cybersecurity issues, whether it's H vac systems with DDOS attacks or it's ah baby boomer, who was part of the fifty million Facebook users whose data was used without their knowledge. It's becoming, I won't say accepted, but very much commonplace, Yes, so training the A I to be used for good is one thing. But I'm curious in terms of the potential that individuals have. 
What are your thoughts on some of these practices or concepts that we're hearing about data scientists taking something like a Hippocratic oath to start owning accountability for the data that they're working with. I'm just curious. What's >> more, I have a strong opinion on this because I think that data scientists are hugely responsible for what they are creating. We need a diversity of data scientists to have multiple models that are completely divorce, and we have to be very responsible when we start to create. Creators are by default, have to be responsible for their creation. Now where we get into tricky areas off, then you are the human auto or the creator ofthe Anay I model. And now the marshal has self created because it a self learned who owns the patent, who owns the copyright to those when I becomes the creator and whether it's malicious or non malicious right. And that's also ownership for the data scientist. So the group of people that are responsible for creating the environment, creating the morals the question comes into how do we protect the authors, the uses, the producers and the new creators off the original piece of art? Because at the end of the day, when you think about algorithms and I, it's just art its creation and you can use the creation for good or bad. And as the creation recreates itself like a learning on its own with massive amounts of data after an original data scientist has created the model well, how we how to be a confident. So that's a very interesting area that we haven't even touched upon because now the laws have to change. Policies have to change, but we can't stop innovation. Innovation has to go, and at the same time we have to be responsible about what we innovate >> and where do you think we are? Is a society in terms of catching As you mentioned, we can't. We have to continue innovation. Where are we A society and society and starting to understand the different principles of practices that have to be implemented in order for proper management of data, too. Enable innovation to continue at the pace that it needs. >> June. I would say that UK and other countries that kind of better than us, US is still catching up. But we're having great conversations. This is very important, right? We're debating the issues. We're coming together as a community. We're having so many discussions with experts. I'm sitting in so many panels contributing as an Aye aye expert in what we're creating. What? We see its scale when we deploy an aye aye, modern in production. What have we seen as the longevity of that? A marker in a business setting in a non business setting. How does the I perform and were now able to see sustained performance of the model? So let's say you deploy and am are in production. You're able inform yourself watching the sustained performance of that a model and how it is behaving, how it is learning how it's growing, what is its track record. And this knowledge is to come back and be part of discussions and part of being informed so we can change the regulations and be prepared for where this is going. Otherwise will be surprised. And I think that we have started a lot of discussions. The community's air coming together. The experts are coming together. So this is very good news. >> Theologian is's there? The moment of Edward is building. These conversations are happening. >> Yes, and policy makers are actively participating. This is very good for us because we don't want innovators to innovate without the participation of policymakers. 
We want the policymakers hand in hand with the innovators to lead the charter. So we have the checks and balances in place, and we feel safe because safety is so important. We need psychological safety for anything we do even to have a conversation. We need psychological safety. So imagine having a >> I >> systems run our lives without having that psychological safety. That's bad news for all of us, right? And so we really need to focus on the trust. And we need to focus on our ability to trust the data or a right to help us trust the data or surface the issues that are causing the trust. >> Janet, what a pleasure to have you back on the Cube. I wish we had more time to keep talking, but it's I can't wait till we talk to you next year because what you guys are doing and also your pact, true passion for data science for trust and a I for good is palpable. So thank you so much for carving out some time to stop by the program. Thank you. It's my pleasure. We want to thank you for watching the Cuba and Lisa Martin live at Stanford for the fourth annual Women in Data Science conference. We back after a short break.

Published Date : Mar 4 2019

SUMMARY :

global Women in Data Science conference brought to you by Silicon Angle media. We air live at Stanford University for the fourth annual Women And in order to be in the half category, you have started to embrace a You've got to start Because if we look at the evolution of a initiatives One of the initiatives were doing is a I for good, and we're doing data for good. So it's interesting that they're also taking this opportunity to We didn't have the data to put predictive And we often hear that data is not just the life blood of an organization, So the data we have to be able to be discriminated. But I'm curious in terms of the creating the morals the question comes into how do we protect the We have to continue innovation. And this knowledge is to come back and be part of discussions and part of being informed so we The moment of Edward is building. We need psychological safety for anything we do even to have a conversation. And so we really need to focus on the trust. I can't wait till we talk to you next year because what you guys are doing and also your pact,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Janet GeorgePERSON

0.99+

JanetPERSON

0.99+

AlanPERSON

0.99+

BorneoLOCATION

0.99+

next yearDATE

0.99+

fifty millionQUANTITY

0.99+

Western DigitalORGANIZATION

0.99+

Lisa MartinPERSON

0.99+

OregonLOCATION

0.99+

twenty thousand peopleQUANTITY

0.99+

JuneDATE

0.99+

Silicon AngleORGANIZATION

0.99+

eighty percentQUANTITY

0.99+

two categoriesQUANTITY

0.99+

AnniePERSON

0.99+

Stanford UniversityORGANIZATION

0.99+

Western distalORGANIZATION

0.99+

fifty plus countriesQUANTITY

0.98+

Vest InitialORGANIZATION

0.98+

oneQUANTITY

0.98+

this yearDATE

0.98+

OneQUANTITY

0.97+

Women in Data ScienceEVENT

0.97+

second annualQUANTITY

0.96+

FacebookORGANIZATION

0.96+

todayDATE

0.96+

CubeORGANIZATION

0.95+

StanfordLOCATION

0.95+

Western digitalORGANIZATION

0.94+

Women in Data Science ConferenceEVENT

0.93+

about one hundred thousand peopleQUANTITY

0.92+

one hundred and fifty locationsQUANTITY

0.92+

FourthQUANTITY

0.91+

EdwardPERSON

0.9+

USORGANIZATION

0.89+

Women in Data Science conferenceEVENT

0.88+

ten populationQUANTITY

0.88+

couple of years agoDATE

0.85+

WiDS 2019EVENT

0.85+

one thingQUANTITY

0.85+

CubaLOCATION

0.85+

Margo GarrettPERSON

0.84+

about an hour agoDATE

0.82+

U. N.LOCATION

0.82+

twenty five yearsQUANTITY

0.81+

LivestreamORGANIZATION

0.77+

next ten yearsDATE

0.73+

fourth annualEVENT

0.69+

annualQUANTITY

0.65+

halfQUANTITY

0.62+

fourthEVENT

0.6+

WoodsORGANIZATION

0.59+

fourQUANTITY

0.58+

UKLOCATION

0.58+

wigsQUANTITY

0.56+

CubeCOMMERCIAL_ITEM

0.52+

AnayPERSON

0.31+

Janet George , Western Digital | Western Digital the Next Decade of Big Data 2017


 

>> Announcer: Live from San Jose, California, it's theCUBE, covering Innovating to Fuel the Next Decade of Big Data, brought to you by Western Digital. >> Hey welcome back everybody, Jeff Frick here with theCUBE. We're at Western Digital at their global headquarters in San Jose, California, it's the Almaden campus. This campus has a long history of innovation, and we're excited to be here, and probably have the smartest person in the building, if not the county, area code and zip code. I love to embarrass here, Janet George, she is the Fellow and Chief Data Scientist for Western Digital. We saw you at Women in Data Science, you were just at Grace Hopper, you're everywhere and get to get a chance to sit down again. >> Thank you Jeff, I appreciate it very much. >> So as a data scientist, today's announcement about MAMR, how does that make you feel, why is this exciting, how is this going to make you be more successful in your job and more importantly, the areas in which you study? >> So today's announcement is actually a breakthrough announcement, both in the field of machine learning and AI, because we've been on this data journey, and we have been very selectively storing data on our storage devices, and the selection is actually coming from the preconstructed queries that we do with business data, and now we no longer have to preconstruct these queries. We can store the data at scale in raw form. We don't even have to worry about the format or the schema of the data. We can look at the schema dynamically as the data grows within the storage and within the applications. >> Right, cause there's been two things, right. Before data was bad 'cause it was expensive to store >> Yes. >> Now suddenly we want to store it 'cause we know data is good, but even then, it still can be expensive, but you know, we've got this concept of data lakes and data swamps and data all kind of oceans, pick your favorite metaphor, but we want the data 'cause we're not really sure what we're going to do with it, and I think what's interesting that you said earlier today, is it was schema on write, then we evolved to schema on read, which was all the rage at Hadoop Summit a couple years ago, but you're talking about the whole next generation, which is an evolving dynamic schema >> Exactly. >> Based whatever happens to drive that query at the time. >> Exactly, exactly. So as we go through this journey, we are now getting independent of schema, we are decoupled from schema, and what we are finding out is we can capture data at its raw form, and we can do the learning at the raw form without human interference, in terms of transformation of the data and assigning a schema to that data. We got to understand the fidelity of the data, but we can train at scale from that data. So with massive amounts of training, the models already know to train itself from raw data. So now we are only talking about incremental learning, as the train model goes out into the field in production, and actually performs, now we are talking about how does the model learn, and this is where fast data plays a very big role. 
>> So that's interesting, 'cause you talked about that also earlier in your part of the presentation, kind of the fast data versus big data, which kind of maps the flash versus hard drive, and the two are not, it's not either or, but it's really both, because within the storage of the big data, you build the base foundations of the models, and then you can adapt, learn and grow, change with the fast data, with the streaming data on the front end, >> Exactly >> It's a whole new world. >> Exactly, so the fast data actually helps us after the training phase, right, and these are evolving architectures. This is part of your journey. As you come through the big data journey you experience this. But for fast data, what we are seeing is, these architectures like Lambda and Kappa are evolving, and especially the Lambda architecture is very interesting, because it allows for batch processing of historical data, and then it allows for what we call a high latency layer or a speed layer, where this data can then be promoted up the stack for serving purposes. And then Kappa architecture's where the data is being streamed near real time, bounded and unbounded streams of data. So this is again very important when we build machine learning and AI applications, because evolution is happening on the fly, learning is happening on the fly. Also, if you think about the learning, we are mimicking more and more on how humans learn. We don't really learn with very large chunks of data all at once, right? That's important for initially model training and model learning, but on a regular basis, we are learning with small chunks of data that are streamed to us near real time. >> Right, learning on the Delta. >> Learning on the Delta. >> So what is the bound versus the unbound? Unpack that a little bit. What does that mean? >> So what is bounded is basically saying, hey we are going to get certain amounts of data, so you're sizing the data for example. Unbounded is infinite streams of data coming to you. And so if your architecture can absorb infinite streams of data, like for example, the sensors constantly transmitting data to you, right? At that point you're not worried about whether you can store that data, you're simply worried about the fidelity of that data. But bounded would be saying, I'm going to send the data in chunks. You could also do bounded where you basically say, I'm going to pre-process the data a little bit just to see if the data's healthy, or if there is signal in the data. You don't want to find that out later as you're training, right? You're trying to figure that out up front. >> But it's funny, everything is ultimately bounded, it just depends on how you define the unit of time, right, 'cause you take it down to infinite zero, everything is frozen. But I love the example of the autonomous cars. We were at the event with, just talking about navigation just for autonomous cars. Goldman Sachs says it's going to be a seven billion dollar industry, and the great example that you used of the two systems working well together, 'cause is it the car centers or is it the map? >> Janet: That's right. >> And he says, well you know, you want to use the map, and the data from the map as much as you can to set the stage for the car driving down the road to give it some level of intelligence, but if today we happen to be paving lane number two on 101, and there's cones, now it's the real time data that's going to train the system. 
But the two have to work together, and the two are not autonomous and really can't work independent of each other. >> Yes. >> Pretty interesting. >> It makes perfect sense, right. And why it makes perfect sense is because first the autonomous cars have to learn to drive. Then the autonomous cars have to become an experienced driver. And the experience cannot be learned. It comes on the road. So one of the things I was watching was how insurance companies were doing testing on these cars, and they had a human, a human driving a car, and then an autonomous car. And the autonomous car, with the sensors, were predicting the behavior, every permutation and combination of how a bicycle would react to that car. It was almost predicting what the human on the bicycle would do, like jump in front of the car, and it got it right 80% of the cases. But a human driving a car, we're not sure how the bicycle is going to perform. We don't have peripheral vision, and we can't predict how the bicycle is going to perform, so we get it wrong. Now, we can't transmit that knowledge. If I'm a driver and I just encountered a bicycle, I can't transmit that knowledge to you. But a driverless car can learn, it can predict the behavior of the bicycle, and then it can transfer that information to a fleet of cars. So it's very powerful in where the learning can scale. >> Such a big part of the autonomous vehicle story that most people don't understand, that not only is the car driving down the road, but it's constantly measuring and modeling everything that's happening around it, including bikes, including pedestrians, including everything else, and whether it gets in a crash or not, it's still gathering that data and building the model and advancing the models, and I think that's, you know, people just don't talk about that enough. I want follow up on another topic. So we were both at Grace Hopper last week, which is a phenomenal experience, if you haven't been, go. Ill just leave it at that. But Dr. Fei-Fei Li gave one of the keynotes, and she made a really deep statement at the end of her keynote, and we were both talking about it before we turned the cameras on, which is, there's no question that AI is going to change the world, and it's changing the world today. The real question is, who are the people that are going to build the algorithms that train the AI? So you sit in your position here, with the power, both in the data and the tools and the compute that are available today, and this brand new world of AI and ML. How do you think about that? How does that make you feel about the opportunity to define the systems that drive the cars, et cetera. >> I think not just the diversity in data, but the diversity in the representation of that data are equally powerful. We need both. Because we cannot tackle diverse data, diverse experiences with only a single representation. We need multiple representation to be able to tackle that data. And this is how we will overcome bias of every sort. So it's not the question of who is going to build the AI models, it is a question of who is going to build the models, but not the question of will the AI models be built, because the AI models are already being built, but some of the models have biases into it from any kind of lack of representation. Like who's building the model, right? So I think it's very important. I think we have a powerful moment in history to change that, to make real impact. >> Because the trick is we all have bias. You can't do anything about it. 
We grew up in the world in which we grew up, we saw what we saw, we went to our schools, we had our family relationships et cetera. So everyone is locked into who they are. That's not the problem. The problem is the acceptance of bring in some other, (chuckles) and the combination will provide better outcomes, it's a proven scientific fact. >> I very much agree with that. I also think that having the freedom, having the choice to hear another person's conditioning, another person's experiences is very powerful, because that enriches our own experiences. Even if we are constrained, even if we are like that storage that has been structured and processed, we know that there's this other storage, and we can figure out how to get the freedom between the two point of views, right? And we have the freedom to choose. So that's very, very powerful, just having that freedom. >> So as we get ready to turn the calendar on 2017, which is hard to imagine it's true, it is. You look to 2018, what are some of your personal and professional priorities, what are you looking forward to, what are you working on, what's top of mind for Janet George? >> So right now I'm thinking about genetic algorithms, genetic machine learning algorithms. This has been around for a while, but I'll tell you where the power of genetic algorithms is, especially when you're creating powerful new technology memory cell. So when you start out trying to create a new technology memory cell, you have materials, material deformations, you have process, you have hundred permutation combination, and the genetic algorithms, we can quickly assign a cause function, and we can kill all the survival of the fittest, all that won't fit we can kill, arriving to the fastest, quickest new technology node, and then from there, we can scale that in mass production. So we can use these survival of the fittest mechanisms that evolution has used for a long period of time. So this is biology inspired. And using a cause function we can figure out how to get the best of every process, every technology, all the coupling effects, all the master effects of introducing a program voltage on a particular cell, reducing the program voltage on a particular cell, resetting and setting, and the neighboring effects, we can pull all that together, so 600, 700 permutation combination that we've been struggling on and not trying to figure out how to quickly narrow down to that perfect cell, which is the new technology node that we can then scale out into tens of millions of vehicles, right? >> Right, you're going to have to >> Getting to that spot. >> You're going to have to get me on the whiteboard on that one, Janet. That is amazing. Smart lady. >> Thank you. >> Thanks for taking a few minutes out of your time. Always great to catch up, and it was terrific to see you at Grace Hopper as well. >> Thank you, I really appreciate it, I appreciate it very much. >> All right, Janet George, I'm Jeff Frick. You are watching theCUBE. We're at Western Digital headquarters at Innovating to Fuel the Next Generation of Big Data. Thanks for watching.

Published Date : Oct 11 2017

SUMMARY :

the Next Decade of Big Data, in San Jose, California, it's the Almaden campus. the preconstructed queries that we do with business data, Right, cause there's been two things, right. of the data and assigning a schema to that data. and especially the Lambda architecture is very interesting, So what is the bound versus the unbound? the sensors constantly transmitting data to you, right? and the great example that you used and the data from the map as much as you can and it got it right 80% of the cases. and advancing the models, and I think that's, So it's not the question of who is going to Because the trick is we all have bias. having the choice to hear another person's conditioning, So as we get ready to turn the calendar on 2017, and the genetic algorithms, we can quickly assign You're going to have to get me on the whiteboard and it was terrific to see you at Grace Hopper as well. I appreciate it very much. at Innovating to Fuel the Next Generation of Big Data.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Janet GeorgePERSON

0.99+

JeffPERSON

0.99+

Jeff FrickPERSON

0.99+

JanetPERSON

0.99+

Western DigitalORGANIZATION

0.99+

80%QUANTITY

0.99+

two thingsQUANTITY

0.99+

2018DATE

0.99+

last weekDATE

0.99+

2017DATE

0.99+

Goldman SachsORGANIZATION

0.99+

San Jose, CaliforniaLOCATION

0.99+

two systemsQUANTITY

0.99+

twoQUANTITY

0.99+

todayDATE

0.99+

bothQUANTITY

0.99+

seven billion dollarQUANTITY

0.99+

Fei-Fei LiPERSON

0.98+

AlmadenLOCATION

0.98+

two pointQUANTITY

0.97+

oneQUANTITY

0.97+

firstQUANTITY

0.95+

Grace HopperORGANIZATION

0.95+

theCUBEORGANIZATION

0.95+

hundred permutationQUANTITY

0.95+

MAMRORGANIZATION

0.94+

Women in Data ScienceORGANIZATION

0.91+

tens of millions of vehiclesQUANTITY

0.9+

one ofQUANTITY

0.89+

KappaORGANIZATION

0.89+

Dr.PERSON

0.88+

single representationQUANTITY

0.83+

a couple years agoDATE

0.83+

earlier todayDATE

0.82+

Next DecadeDATE

0.81+

LambdaTITLE

0.8+

101OTHER

0.8+

600, 700 permutationQUANTITY

0.77+

LambdaORGANIZATION

0.7+

of dataQUANTITY

0.67+

keynotesQUANTITY

0.64+

Hadoop SummitEVENT

0.62+

zeroQUANTITY

0.6+

numberOTHER

0.55+

DeltaOTHER

0.54+

twoOTHER

0.35+

Janet George, Western Digital –When IoT Met AI: The Intelligence of Things - #theCUBE


 

(upbeat electronic music) >> Narrator: From the Fairmont Hotel in the heart of Silicon Valley, it's theCUBE. Covering when IoT met AI, The Intelligence of Things. Brought to you by Western Digital. >> Welcome back here everybody, Jeff Frick here with theCUBE. We are at downtown San Jose at the Fairmont Hotel. When IoT met AI it happened right here, you saw it first. The Intelligence of Things, a really interesting event put on by readwrite and Western Digital and we are really excited to welcome back a many time CUBE alumni and always a fan favorite, she's Janet George. She's Fellow & Chief Data Officer of Western Digital. Janet, great to see you. >> Thank you, thank you. >> So, as I asked you when you sat down, you're always working on cool things. You're always kind of at the cutting edge. So, what have you been playing with lately? >> Lately I have been working on neural networks and TensorFlow. So really trying to study and understand the behaviors and patterns of neural networks, how they work and then unleashing our data at it. So trying to figure out how it's training through our data, how many nets there are, and then trying to figure out what results it's coming with. What are the predictions? Looking at how the predictions are, whether the predictions are accurate or less accurate and then validating the predictions to make it more accurate, and so on and so forth. >> So it's interesting. It's a different tool, so you're learning the tool itself. >> Yes. >> And you're learning the underlying technology behind the tool. >> Yes. >> And then testing it actually against some of the other tools that you guys have, I mean obviously you guys have been doing- >> That's right. >> Mean time between failure analysis for a long long time. >> That's right, that's right. >> So, first off, kind of experience with the tool, how is it different? >> So with machine learning, fundamentally we have to go into feature extraction. So you have to figure out all the features and then you use the features for predictions. With neural networks you can throw all the raw data at it. It's in fact data-agnostic. So you don't have to spend enormous amounts of time trying to detect the features. Like for example, If you throw hundreds of cat images at the neural network, the neural network will figure out image features of the cat; the nose, the eyes, the ears and so on and so forth. And once it trains itself through a series of iterations, you can throw a lot of deranged cats at the neural network and it's still going to figure out what the features of a real cat is. >> Right. >> And it will predict the cat correctly. >> Right. So then, how does that apply to, you know, the more specific use case in terms of your failure analysis? >> Yeah. So we have failures and we have multiple failures. Some failures through through the human eye, it's very obvious, right? But humans get tired, and over a period of time we can't endure looking at hundreds and millions of failures, right? And some failures are interconnected. So there is a relationship between these failure patterns or there is a correlation between two failures, right? It could be an edge failure. It could a radial failure, eye pattern type failure. It could be a radial failure. So these failures, for us as humans, we can't escape. >> Right. >> And we used to be able to take these failures and train them at scale and then predict. Now with neural networks, we don't have to take and do all that. 
We don't have to extract these labels and try to show them what these failures look like. Training is almost like throwing a lot of data at the neural networks. >> So it almost sounds like kind of the promise of the data lake if you will. >> Yes. >> If you have heard about, from the Hadoop Summit- >> Yes, yes, yes. >> For ever and ever and ever. Right? You dump it all in and insights will flow. But we found, often, that that's not true. You need hypothesis. >> Yes, yes. >> You need to structure and get it going. But what you're describing though, sounds much more along kind of that vision. >> Yes, very much so. Now, the only caveat is you need some labels, right? If there is no label on the failure data, it's very difficult for the neural networks to figure out what the failure is. >> Jeff: Right. >> So you have to give it some labels to understand what patterns it should learn. >> Right. >> Right, and that is where the domain experts come in. So we train it with labeled data. So if you are training with a cat, you know the features of a cat, right? In the industrial world, cat is really what's in the heads of people. The domain knowledge is not so authoritative. Like the sky or the animals or the cat. >> Jeff: Right. >> The domain knowledge is much more embedded in the brains of the people who are working. And so we have to extract that domain knowledge into labels. And then you're able to scale the domain. >> Jeff: Right. >> Through the neural network. >> So okay so then how does it then compare with the other tools that you've used in the past? In terms of, obviously the process is very different, but in terms of just pure performance? What are you finding? >> So we are finding very good performance and actually we are finding very good accuracy. Right? So once it's trained, and it's doing very well on the failure patterns, it's getting it right 90% of the time, right? >> Really? >> Yes, but in a machine learning program, what happens is sometimes the model is over-fitted or it's under-fitted or there is bias in the model and you got to remove the bias in the model or you got to figure out, well, is the model false-positive or false-negative? You got to optimize for something, right? >> Right, right. >> Because we are really dealing with mathematical approximation, we are not dealing with preciseness, we are not dealing with exactness. >> Right, right. >> In neural networks, actually, it's pretty good, because it's actually always dealing with accuracy. It's not dealing with precision, right? So it's accurate most of the time. >> Interesting, because that's often what's common about the kind of difference between computer science and statistics, right? >> Yes. >> Computers is binary. Statistics always has a kind of a confidence interval. But what you're describing, it sounds like the confidence is tightening up to such a degree that it's almost reaching binary. >> Yeah, yeah, exactly. And see, brute force is good when your traditional computing programing paradigm is very brute force type paradigm, right? The traditional paradigm is very good when the problems are simpler. But when the problems are of scale, like you're talking 70 petabytes of data or you're talking 70 billion roles, right? Find all these patterns in that, right? >> Jeff: Right. >> I mean you just, the scale at which that operates and at the scale at which traditional machine learning even works is quite different from how neural networks work. >> Jeff: Okay. >> Right? 
Traditional machine learning you still have to do some feature extraction. You still have to say "Oh I can't." Otherwise you are going to have dimensionality issues, right? It's too broad to get the prediction anywhere close. >> Right. >> Right? And so you want to reduce the dimensionality to get a better prediction. But here you don't have to worry about dimensionality. You just have to make sure the labels are right. >> Right, right. So as you dig deeper into this tool and expose all these new capabilities, what do you look forward to? What can you do that you couldn't do before? >> It's interesting because it's grossly underestimating the human brain, right? The human brain is supremely powerful in all aspects, right? And there is a great deal of difficulty in trying to code the human brain, right? But with neural networks and because of the various propagation layers and the ability to move through these networks we are coming closer and closer, right? So one example: When you think about driving, recently, Google driverless car got into an accident, right? And where it got into an accident was the driverless car was merging into a lane and there was a bus and it collided with the bus. So where did A.I. go wrong? Now if you train an A.I., birds can fly, and then you say penguin is a bird, it is going to assume penguin can fly. >> Jeff: Right, right. >> We as humans know penguin is a bird but it can't fly like other birds, right? >> Jeff: Right. >> It's that anomaly thing, right? Naturally when are driving and a bus shows up, even if it's yield, the bus goes. >> Jeff: Right, right. >> We yield to the bus because it's bigger and we know that. >> A.I. doesn't know that. It was taught that yield is yield. >> Right, right. >> So it collided with the bus. But the beauty is now large fleets of cars can learn very quickly based on what it just got from that one car. >> Right, right. >> So now there are pros and cons. So think about you driving down Highway 85 and there is a collision, it's Sunday morning, you don't know about the collision. You're coming down on the hill, right? Blind corner and boom that's how these crashes happen and so many people died, right? If you were driving a driverless car, you would have knowledge from the fleet and from everywhere else. >> Right. >> So you know ahead of time. We don't talk to each other when we are in cars. We don't have universal knowledge, right? >> Car-to-car communication. >> Car-to-car communications and A.I. has that so directly it can save accidents. It can save people from dying, right? But people still feel, it's a psychology thing, people still feel very unsafe in a driverless car, right? So we have to get over- >> Well they will get over that. They feel plenty safe in a driverless airplane, right? >> That's right. Or in a driveless light rail. >> Jeff: Right. >> Or, you know, when somebody else is driving they're fine with the driver who's driving. You just sit in the driver's car. >> But there's that one pesky autonomous car problem, when the pedestrian won't go. >> Yeah. >> And the car is stopped it's like a friendly battle-lock. >> That's right, that's right. >> Well good stuff Janet and always great to see you. I'm sure we will see you very shortly 'cause you are at all the great big data conferences. >> Thank you. >> Thanks for taking a few minutes out of your day. >> Thank you. >> Alright she is Janet George, she is the smartest lady at Western Digital, perhaps in Silicon Valley. We're not sure but we feel pretty confident. 
I am Jeff Frick and you're watching theCUBE from When IoT meets AI: The Intelligence of Things. We will be right back after this short break. Thanks for watching. (upbeat electronic music)
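(Editor's note: the contrast Janet draws in the interview above, between traditional machine learning that needs explicit feature extraction and dimensionality reduction versus neural networks that mostly need good labels, can be sketched in a few lines. The data is synthetic and the comparison is purely illustrative of the idea, not of any Western Digital model.)

# Hypothetical comparison on synthetic data: reduce dimensionality for a
# traditional model, versus feed a small neural network the raw features.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=4000, n_features=200, n_informative=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Traditional route: compress 200 features down to 20 components first.
classic = make_pipeline(StandardScaler(), PCA(n_components=20), LogisticRegression(max_iter=1000))
classic.fit(X_tr, y_tr)

# Neural-network route: no explicit feature reduction, just labeled data.
net = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=1))
net.fit(X_tr, y_tr)

print("PCA + logistic regression accuracy:", classic.score(X_te, y_te))
print("neural network on raw features accuracy:", net.score(X_te, y_te))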

Published Date : Jul 2 2017


Janet George, Western Digital | Women in Data Science 2017


 

>> Male Voiceover: Live from Stanford University, it's The Cube covering the Women in Data Science Conference 2017. >> Hi, welcome back to The Cube, I'm Lisa Martin and we are live at Stanford University at the second annual Women in Data Science Technical Conference. It's a one day event here, incredibly inspiring morning we've had. We're joined by Janet George, who is the chief data scientist at Western Digital. Janet, welcome to the show. >> Thank you very much. >> You're a speaker at-- >> Very happy to be here. >> We're very happy to have you. You're a speaker at this event and we want to talk about what you're going to be talking about. Industrialized data science. What is that? >> Industrialized data science is mostly about how data science is applied in the industry. It's less about pure research work, and more about the practical application of industry use cases in which we actually apply machine learning and artificial intelligence. >> What are some of the use cases at Western Digital for that application? >> One of the use cases is, we are in the business of creating new technology nodes, and for creating new technology nodes we actually create a lot of data. And with that data, we actually look at, can we understand pattern recognition at very large scale? We're talking millions of wafers. Can we understand memory holes? The shape, the type, the curvature, circularity, radius, can we detect these patterns at scale? And then how can we detect if the memory hole is warped or deformed and how can we have machine learning do that for us? We also look at things like correlations during the manufacturing process. Strong correlations, weak correlations, and we try to figure out interactions between different correlations. >> Fantastic. So if we look at big data, it's probably applicable across every industry. How has it helped to transform Western Digital, which has been an institution here in Silicon Valley for a while? >> We at Western Digital move mountains of data. That's just part of our job, right? And so we are the leaders in storage technology, people store data in Western Digital products, and so data's inherently very familiar to us. We actually deal with data on a regular basis. And now we've started confronting our data with data science. And we started confronting our data with machine learning because we are very aware that artificial intelligence, machine learning can bring a different value to that data. We can look at the insights, we can develop intelligence about how we build our storage products, what we do with our storage. Failure analysis is a huge area for us. So we're really tapping into our data to figure out how we can make artificial intelligence and machine learning ingrained in the way we do work. >> So from a cultural perspective, you've really done a lot to evolve the culture of Western Digital to apply the learnings, to improve the value that you deliver to all of your customers. >> Yes, believe it or not, we've become a data-driven company. That's amazing, because we've invested in our own data, and we've said "Hey, if we are going to store the world's data, we need to lead, from a data perspective" and so we've sort of embraced machine learning and artificial intelligence. We've embraced new algorithms, technologies that are out there that we can tap into to look at our data.
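(Editor's note: as a purely hypothetical illustration of the strong-versus-weak correlation screening Janet mentions, the short Python sketch below uses invented process-variable names and synthetic data. It only shows the mechanics of separating strong from weak correlations against a quality metric; it is not Western Digital's pipeline.)

# Hypothetical sketch: screen process variables for strong vs. weak correlation
# with a quality metric such as memory-hole circularity (all data synthetic).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000  # stand-in for per-wafer process records

df = pd.DataFrame({
    "etch_time": rng.normal(60, 5, n),       # invented parameter names
    "chamber_temp": rng.normal(350, 10, n),
    "gas_flow": rng.normal(200, 20, n),
})
df["circularity"] = 0.6 * df["etch_time"] - 0.3 * df["chamber_temp"] + rng.normal(0, 5, n)

corr = df.corr()["circularity"].drop("circularity")
strong = corr[corr.abs() >= 0.3]
weak = corr[corr.abs() < 0.3]

print("strong correlations with circularity:")
print(strong.sort_values(ascending=False))
print("weak correlations with circularity:")
print(weak)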
>> So from a machine learning, human perspective, in storage manufacturing, is there still a dependence on human insight where storage manufacturing devices are concerned, or are you seeing the machine learning really, in this case, take more of a lead? >> No, I think humans play a huge role, right? Because these are domain experts. We're talking about Ph.D.'s in material science and device physics areas, so what I see is the augmentation between machine learning and humans, and the domain experts. Domain experts will not be able to scale when the scale of wafer production becomes very large. So let's talk about 3 million wafers. How is a human going to physically look at all the failure patterns on those wafers? We're not going to be able to scale just having domain expertise. But taking our core domain expertise and using that as training data to build intelligent models that can inform the domain expert and be smart and come up with all the ideas, that's where we want to be. >> Excellent. So you talked a little bit about the manufacturing process. Who are some of the other constituents that you collaborate with as chief data scientist at Western Digital that are demanding access to data, marketing, etcetera, what are some of those key collaborators for your group? >> Mainly our marketing department, as well as our customer service department; we also have collaborations going on with universities. But one of the things we found out was, when a drive fails and it has gone to our customer, it's much better for us to have figured out the failure ahead of time. So we've started modeling out all the customer returns that we've received, and we look at that and see "How can we predict the life cycle of our storage?" And get to those return possibilities or potential issues before it lands in the hands of customers. >> That's excellent. >> So that's one area we've been focusing quite a bit on, to look at the whole life cycle of failures. >> You also talked about collaborating with universities. Share a little bit about that in terms of, is there a program for internships for example? How are you helping to shape the next generation of computer scientists? >> We are very strongly embedded in universities. We usually have a very good internship program. Six to eight weeks, to 12 weeks in the summer, the interns come in. Ours is a little different where we treat our interns as real value add. They come in, and they're given a hypothesis, or a problem domain that they need to go after. And within six to eight weeks, they have access to tremendous amounts of data, so they get to play with all this industry data that they would never get to play with. They can quickly bring their academic background, or their academic learning, to that data. We also take really hard, research-oriented problems or further-out problems and we collaborate with universities on that, especially Stanford University, we've been doing great collaborations with them. I'm super encouraged with Fei-Fei Li's work on computer vision, and we've been looking into things around deep neural networks. This is an area of great passion for me. I think the cognitive computing space has just started to open up and we have a lot to learn from neural networks and how they work and where the value can be added. >> Looking at, just want to explore the internship topic for a second. And we're at the second annual Women in Data Science Conference. There's a lot of young minds here, not just here in person, but in many cities across the globe.
What are you seeing with some of the interns that come in? Are they confident enough to say "I'm getting access to real world data I wouldn't have access to in school", are they confident to play around with that, test out a hypothesis and fail? Or do they fear, "I need to get this right right away, this is my career at stake?" >> It's an interesting dichotomy because they have a really short time frame. That's an issue because of the time frame, and they have to quickly discover. Failing fast and learning fast is part of data science and I really think that we have to get to that point where we're really comfortable with failure, and the learning we get from the failure. Remember the light bulb was invented with 99% negative knowledge, so we have to get to that negative knowledge and treat that as learning. So we encourage a culture, we encourage a style of different learning cycles so we say, "What did we learn in the first learning cycle?" "What discoveries, what hypothesis did we figure out in the first learning cycle, which will then prepare our second learning cycle?" And we don't see it as a one-stop, rather more iterative form of work. Also with the internships, I think sometimes it's really essential to have critical thinking. And so the interns get that environment to learn critical thinking in the industry space. >> Tell us about, from a skills perspective, these are, you can share with us, presumably young people studying computer science, maybe engineering topics, what are some of the traditional data science skills that you think are still absolutely there? Maybe it's a hybrid of a hacker and someone who's got, great statistician background. What about the creative side and the ability to communicate? What's your ideal data scientist today? What are the embodiments of those? >> So this is a fantastic question, because I've been thinking about this a lot. I think the ideal data scientist is at the intersection of three circles. The first circle is really somebody who's very comfortable with data, mathematics, statistics, machine learning, that sort of thing. The second circle is in the intersection of implementation, engineering, computer science, electrical engineering, those backgrounds where they've had discipline. They understand that they can take complex math or complex algorithms and then actually implement them to get business value out of them. And the third circle is around business acumen, program management, critical thinking, really going deeper, asking the questions, explaining the results, very complex charts. The ability to visualize that data and understand the trends in that data. So it's the intersection of these very diverse disciplines, and somebody who has deep critical thinking and never gives up. (laughs) >> That's a great one, that never gives up. But looking at it, in that way, have you seen this, we're really here at a revolution, right? Have you seen that data science traditionalist role evolve into these three, the intersection of these three elements? >> Yeah, traditionally, if you did a lot of computer science, or you did a lot of math, you'd be considered a great data scientist. But if you don't have that business acumen, how do you look at the critical problems? How do you communicate what you found? How do you communicate that what you found actually matters in the scheme of things? Sometimes people talk about anomalies, and I always say "is the anomaly structured enough that I need to care about?" Is it systematic? 
Why should I care about this anomaly? Why is it different from an alert? If you have modeled all the behaviors, and you understand that this is a different anomaly than you've normally seen, then you must care about it. So you need to have business acumen to ask the right business questions and understand why that matters. >> So your background is in computer science, your bachelor's, Ph.D.? >> Bachelor's and master's in computer science, mathematics, and statistics, so I've got a combination of all of those and then my business experience comes from being in the field. >> Lisa: I was going to ask you that, how did you get that business acumen? Sounds like it was by in-field training, basically on-the-job? >> It was in the industry, it was on-the-job, I put myself in positions where I've had great opportunities and tackled great business problems that I had to go out and solve, very unique sets of business problems that I had to dig deep into, figuring out what the solutions were, and so then gained the experience from that. >> So going back to Western Digital, how you're leveraging data science to really evolve the company. You talked about the cultural evolution there, which we both were mentioning off-camera, is quite a feat because it's very challenging. Data from many angles, security, usage, is a board level, boardroom conversation. I'd love to understand, and you also talked about collaboration, so talk to us a little bit about how, and some of the ways, tangible ways, that data science and your team have helped evolve Western Digital. Improving products, improving services, improving revenue. >> I think of it as when an algorithm or a machine learning model is smart, it cannot be a threat. There's a difference between being smart and being a threat. It's smart when it actually provides value. It's a threat when it takes away or does something you would be wanting to do, and here I see that initially there's a lot of fear in the industry, and I think the fear is related to "oh, here's a new technology," and we've seen technologies come in and disrupt in a major way. And machine learning will make a lot of disruptions in the industry for sure. But I think that will cause a shift, or a change. Look at our phone industry, and how much the phone industry has gone through. We never complain that the smart phone is smarter than us. (laughs) We love the fact that the smartphone can show us maps and it can send us in the right direction, of course, it sends us in the wrong direction sometimes, but most of the time it's pretty good. We've grown to rely on our cell phones. We've grown to rely on the smartness. I look at when technology becomes your partner, when technology becomes your ally, and when it actually becomes useful to you, there is a shift in culture. We start by saying "how do we earn the trust of the humans?" How can machine learning, how can the algorithms we built, actually show you the difference? How can it come up with things you didn't see? How can it discover new things for you that will create a wow factor for you? And when it does create a wow factor for you, you will want more of it, so to me, it's almost an intent-based progression, in terms of a culture change. You can't push any new technology on people. People will be reluctant to adopt. The only way people adopt new technologies is when they see the value of the technology instantly and then they become believers. It's a very grassroots-level change, if you will.
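(Editor's note: the anomaly-versus-alert distinction Janet draws can be illustrated with a small, hypothetical Python sketch on synthetic sensor readings. A fixed threshold fires on any extreme value, while a model flags what does not fit the behavior it has learned. The numbers below are invented for illustration only.)

# Hypothetical sketch: a fixed-threshold alert versus a modeled anomaly flag.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal_readings = rng.normal(50.0, 2.0, size=(2000, 1))  # learned "normal" behavior
new_readings = np.array([[49.0], [58.0], [41.0]])         # readings to judge

# Rule-based alert: anything outside a fixed band.
alerts = (new_readings < 44) | (new_readings > 56)

# Modeled anomaly: score each reading against everything seen so far.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_readings)
flags = detector.predict(new_readings)   # -1 = anomaly, 1 = fits the modeled behavior

for reading, alert, flag in zip(new_readings.ravel(), alerts.ravel(), flags):
    print(f"reading={reading:5.1f}  threshold_alert={bool(alert)}  modeled_anomaly={flag == -1}")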
>> For the foreseeable future then, from a fear perspective and maybe job security, at least in the storage and manufacturing industry, people aren't going to be replaced by machines. You think it's going to maybe live together for a very long, long time? >> I totally agree. I think that it's going to augment the humans for a long, long time. I think that we will get over our fear. We worry about the humans, but I think humans are incredibly powerful. We give way too little credit to ourselves. I think we have huge creative capacity. Machines do have processing capacity, they have very large scale processing capacity, and humans and machines can augment each other. I do believe that, just as there was a time when we had computers and we relied on our computers for data processing, we're going to rely on computers for machine learning. We're going to get smarter, so we don't have to do all the automation and the daily grind of stuff. If you can predict, and that prediction can help you, you can feed that prediction model some learning mechanism by reinforcement learning or rating or ranking. Look at the spam industry. We just taught the spam algorithms to become so good at catching spam, and we don't worry about the fact that they do the cleansing of that level of data for us, and so we'll get to that stage first, and then we'll get better and better and better. I think humans have a natural tendency to step up, they always do. We've always, through many generations, we have always stepped up higher than where we were before, so this is going to make us step up further. We're going to demand more, we're going to invent more, we're going to create more. But it's not going to be, I don't see it as a real threat. The places where I see it as a threat is when the data has bias, or the data is manipulated, which exists even without machine learning. >> I love though, that the analogy that you're making is as technology is evolving, it's kind of a natural catalyst >> Janet: It is a natural catalyst. >> For us humans to evolve and learn and progress and that's a great cycle that you're-- >> Yeah, imagine how we did farming ten years ago, twenty years ago. Imagine how we drive our cars today compared to how we did many years ago. Imagine the role of maps in our lives. Imagine the role of autonomous cars. This is a natural progression of the human race, that's how I see it, and you can see with young people now, technology is so natural for them. They can tweet, and swipe, and that's the natural progression of the human race. I don't think we can stop that, I think we have to embrace it, it's a gift. >> That's a great message, embracing it. It is a gift. Well, we wish you the best of luck this year at Western Digital, and thank you for inspiring us and probably many that are here and those that are watching the livestream. Janet George, thanks so much for being on The Cube. >> Thank you. >> Thank you for watching The Cube. We are again live from the second annual Women in Data Science conference at Stanford, I'm Lisa Martin, don't go away. We'll be right back. (upbeat electronic music)

Published Date : Feb 3 2017


Breaking Down Your Data


 

>> From the Cube Studios in Palo Alto and Boston, it's the Cube covering Empowering the Autonomous Enterprise, brought to you by Oracle Consulting. >> Welcome back, everybody, to this special digital event coverage. The Cube is looking into the rebirth of Oracle Consulting. Janet George is here. She's group VP Autonomous for Advanced Analytics with machine learning and artificial intelligence at Oracle, and she's joined by Grant Gibson, group VP of growth and strategy. Folks, welcome to the Cube. Thanks so much for coming on. I want to start with you because you've got strategy in your title. Let's start big picture. What is the strategy with Oracle specifically as it relates to autonomous and also consulting? >> Sure. So I think, you know, Oracle has a deep legacy of strength in data, and over the company's successful history it's evolved what that is from steps along the way. If you look at the modern enterprise Oracle client, I think there's no denying that we've entered the age of AI, that everyone knows that artificial intelligence and machine learning are key to their success in the business marketplace going forward. And while generally it's acknowledged that it's a transformative technology and people know that they need to take advantage of it, it's the how that's really tricky, and most enterprises, in order to really get an enterprise-level ROI on an AI investment, need to engage in projects of significant scope. And going from realizing there's an opportunity, or realizing there's a threat, to mobilizing yourself to capitalize on it is a daunting task. Certainly anybody that's got any sort of legacy of success has built-in processes, has built-in systems, has built-in skill sets, and making that leap to be an autonomous enterprise is challenging for companies to wrap their heads around. So as part of the rebirth of Oracle Consulting, we've developed a practice around how to both manage the technology needs for that transformation as well as the human needs, as well as the data science needs. >> So there's about five or six things that I want to follow up with you there, so this is a good conversation. Ever since I've been in the industry, we were talking about AI in sort of start-stop, start-stop fashion; we had the AI winter, and now it seems to be here. I almost feel like the technology never lived up to its promise; you didn't have the horsepower, the compute power, the data maybe. So we're here today. It feels like we are entering a new era. Why is that? And how will the technology perform this time? >> So for AI to perform, it's very reliant on the data. We entered the age of AI without having the right data for AI. So you can imagine that we just launched into AI without our data being ready to be training sets for AI. So we started with BI data. We started with data that was already historically transformed, formatted, had logical structures, physical structures. This data was sort of trapped in many different tools. And then suddenly AI comes along and we say, take this data, our historical data; we haven't tested to see if this has labels in it, this has learning capability in it. We just thrust the data to AI. And that's why we saw the initial wave of AI sort of failing, because the data was not fully AI-ready for this generation of AI, if you will. >> And part of, I think, the leap that clients are finding success with now is getting novel data types, and you're moving from the zeros and ones of structured data to
image, language, written language, spoken language. You're capturing different data sets in ways that prior tools never could. So the classifications that come out of it, the insights that come out of it, the business process transformation that comes out of it, is different than what we would have understood under the structured data formats. So I think it's that combination of really being able to push massive amounts of data through a cloud product, to process at scale, that is what I think is the combination that takes it to the next plateau, for sure. >> The language that we use today, I feel like it's going to change, and you just started to touch on some of it: sensing, our senses and visualization and the auditory. So it's sort of this new experience that customers are seeing, with a lot of this machine intelligence behind it. >> I call it the autonomous enterprise, right, the journey to be the autonomous enterprise. And when you're on this journey to be the autonomous enterprise, you really need the platform that can help you be that. Cloud is that platform which can help you get to the autonomous journey. But the autonomous journey does not end with the cloud. It doesn't end with the data lake. These are just infrastructures that are basic necessities for being on that autonomous journey. But at the end, it's about how you train and scale, the very large scale training that needs to happen on this platform, for AI to be successful. And if you are an autonomous enterprise, then you have really figured out how to tap into AI and machine learning in a way that nobody else has, to derive business value, if you will. So you've got the platform, you've got the data, and now you're actually tapping into the autonomous components, AI and machine learning, to derive business intelligence and business value. >> So I want to get into a little bit of Oracle's role. But to do that, I want to talk a little bit more about the industry. So if you think about the way that the industry seems to be restructuring around data: historically, industries had their own stack or value chain, and if you were in the finance industry, you were there for life. >> So when you think about banking, for example, a highly regulated industry, think about agriculture; these are highly regulated industries. It was very difficult to disrupt these industries. But now you look at an Amazon, right? And what does an Amazon or any other tech giant like Apple have? They have incredible amounts of data. They understand how people use, or how they want to do, banking. And so they've come up with Apple Cash or Amazon Pay, and these things are starting to eat into the market. Right? So you would have never thought an Amazon could be a competition to a banking industry just because of regulations. But they're not hindered by the regulations because they're starting at a different level. And so they become an instant threat and an instant disrupter to these highly regulated industries. That's what data does, right? When you use data as your DNA for your business and you are sort of born in data, or you figure out how to be autonomous, if you will, and capture value from that data in a very significant manner, then you can get into industries that are not traditionally your own industry. It can be the food industry, it can be the cloud industry, the book industry, you know, different industries. So that's what I see happening with the tech giants.
>> So Grant, there's a really interesting point that Janet is making that you mentioned. You started off with a couple of industries that are highly regulated, harder to disrupt. Music got disrupted. Publishing got disrupted. But you've got these regulated businesses. Defense. Automotive actually hasn't been truly disrupted yet; Tesla maybe is a harbinger. And so you've got this spectrum of disruption. But is anybody safe from disruption? >> I don't think anyone's ever safe from it. It's change and evolution, right? Whether it's, you know, swapping horseshoes for cars, or TV for movies, or Netflix, or any sort of evolution of a business, I wouldn't coast on any of it. And I think, to your earlier question around the value that we can help bring to Oracle customers, it's that, you know, we have a rich stack of applications, and I find that the space between the applications, the data that spans more than one of them, is a ripe playground for innovations, where the data already exists inside a company but it's trapped from both a technology and a business perspective. And that's where I think really any company can take advantage of knowing its data better and changing itself to take advantage of what's already there. >> Yeah, powerful. People always throw out the bromide that data is the new oil, and we've said no, data is far more valuable because you can use it in a lot of different places, whereas oil you can use once and it has to follow the laws of scarcity; data doesn't, if you can unlock it. And so a lot of the incumbents have built a business around whatever, a factory or a process and people, while a lot of the trillion-dollar startups, you know the ones I'm talking about, have data at the core; they're data companies. So it seems like a big challenge for your incumbent customers, clients, to put data at the core and be able to break down those silos. How do they do that? >> Breaking down silos is really super critical for any business. It was okay to operate in a silo, for example. You would think that, oh, you know, I could just be payroll and expense reports, and it wouldn't matter if I get into vendor performance management or purchasing; that can operate as a silo. But anymore we are finding that there are tremendous insights between vendor performance management and expense reports; these things are all connected, so you can't afford to have your data sit in silos. So breaking down that silo actually gives the business very good performance, right, insights that they didn't have before. So that's one way to go. But another phenomenon happens when you start to break down the silos: you start to recognize what data you don't have to take your business to the next level. That awareness will not happen when you're working with existing data. So that awareness comes into form when you break down the silos and you start to figure out you need to go after a different set of data to get you to a new product creation. What would that look like? New test insights or new types of avoidance. That data is just, you have to go through the iteration to be able to figure that out. >> Stakes is what you're saying. So this notion of the autonomous enterprise, help me here, 'cause I get kind of autonomous and automation coming into IT, ITOps. I'm interested in how you see customers taking that beyond the technology organization into the enterprise. >> I think when AI is a technology problem, the company is at a loss. AI has to be a business problem. AI has to inform the business strategy.
When you look at the companies, the successful companies that have done so, 90% of their investments are going towards data; we know that, and most of it is going towards AI. There's data out there about this, right? And so we look at, what are these 90% of the companies' investments, where are they going, and who's doing this right, who's not doing this right? One of the things we're seeing as results is that the companies that are doing it right have brought data into the business strategy. They've changed their business model, right? So it's not like making a better taxi, but coming up with Uber, right? So it's not like saying, okay, I'm going to be the drug manufacturing company, I'm going to put drugs out there in the market, versus, I'm going to do connected health, right? And so how does data serve the business model of being connected health, rather than being a drug company selling drugs to my customers, right? It's a completely different way of looking at it. And so now AI is informing drug discovery; it's not helping you just put more drugs to the market, rather, it's helping you come up with new drugs that would help the process of connected care. >> There's a lot of discussion in the press about, you know, the ethics of AI and how far should we take AI. Can we take it from a technology standpoint? Long roadmap there. But how far should we take it? Do you feel as though public policy will take care of that? A lot of that narrative is just kind of journalists looking for, you know, the negative story. Will that sort itself out? How much time do you spend with your customers talking about that? >> We in Oracle, we're building our data science platform with an explicit feature called explainability of the model, on how the model came up with the features, what features it picked. We can rearrange the features that the model picked. So I think explainability is very important for ordinary people to trust AI, because we can't trust AI, and even data scientists can't trust AI, to a large extent. So for us to get to that level where we can really trust what AI is picking in terms of a model, we need to have explainability. And I think a lot of the companies right now are starting to make that as part of their platform. >> We're definitely entering a new era, the age of AI, of the autonomous enterprise. Folks, thanks very much, great segment. Really appreciate it. >> Yeah. Pleasure. Thank you for having us. >> All right. And thank you, and keep it right there. We'll be back with our next guest right after this short break. You're watching the Cube's coverage of the rebirth of Oracle Consulting. Right back.
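(Editor's note: a small, hypothetical illustration of the point above about breaking down silos. The table names, columns, and values are invented; joining a vendor-performance extract with an expense-report extract can surface a relationship that neither silo shows on its own.)

# Hypothetical sketch: two "silos" joined on a shared vendor key reveal an
# insight neither dataset shows alone (tables, columns, and values invented).
import pandas as pd

vendor_performance = pd.DataFrame({
    "vendor_id": ["V1", "V2", "V3"],
    "on_time_delivery_pct": [98, 72, 91],
})
expense_reports = pd.DataFrame({
    "vendor_id": ["V1", "V2", "V2", "V3"],
    "expedite_fees": [0, 1200, 900, 150],
})

combined = (
    expense_reports.groupby("vendor_id", as_index=False)["expedite_fees"].sum()
    .merge(vendor_performance, on="vendor_id")
)

# Vendors with poor delivery and high expedite spend stand out only after the join.
print(combined.sort_values("expedite_fees", ascending=False))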

Published Date : Jul 6 2020


Breaking Down Your Data


 

>> Narrator: From theCUBE studios in Palo Alto and Boston, it's theCUBE, covering Empowering the Autonomous Enterprise. Brought to you by: Oracle Consulting. >> Welcome back everybody to this special digital event coverage. TheCUBE is looking into the rebirth of Oracle Consulting. Janet George is here. She's Group VP Autonomous for Advanced Analytics with Machine Learning and Artificial Intelligence at Oracle. And she's joined by Grant Gibson, the Group VP of Growth and Strategy at Oracle. Folks, welcome to theCUBE thanks so much for coming on. >> Thank you. >> Thank you. >> Grant I want to start with you because you got strategy in your title, so let's start big picture. What is the strategy with Oracle specifically as it relates to autonomous and also consulting. >> Sure. So I think, Oracle has a deep legacy of strength in data. And over the company's successful history, it's evolved what that is from steps along the way. And if you look at the modern enterprise, the Oracle client, I think there's no denying that we've entered the age of AI. That everyone knows that artificial intelligence and machine learning are a key to their success in the business marketplace going forward. And while generally it's acknowledged that it's a transformative technology, and people know that they need to take advantage of it, it's the how that's really tricky. And that most enterprises, in order to really get an enterprise-level ROI on an AI investment, need to engage in projects of significant scope. And going from realizing there's an opportunity or realizing there's a threat, to mobilizing yourself to capitalize on it, is a daunting task for an enterprise. Certainly anybody that's got any sort of legacy of success has built in processes, has built in systems, has built in skill sets, and making that leap to be an autonomous enterprise is challenging for companies to wrap their heads around. So as part of the rebirth of Oracle Consulting, we've developed a practice around how to both manage the technology needs for that transformation, as well as the human needs, as well as the data science needs to it. >> There's about five or six things that I want to follow up with you there. So this is going to be a good conversation. Janet, ever since I've been in the industry we've been talking about AI, and it's sort of start, stop, start, stop. We got the AI winter and now it seems to be here, and it almost feels like the technology never lived up to its promise. We didn't have the horse power, or the compute power. Didn't have enough data maybe. So we're here today, feels like we are entering a new era. Why is that? And how will the technology perform this time? >> So for AI to perform, it's very reliant on the data. We entered the age of AI without having the right data for AI. So you can imagine that we just launched into AI without our data being ready to be training sets for AI. So we started with BI data, or we started with data that was already historically transformed, formatted, had logical structures, physical structures; this data was sort of trapped in many different tools. And then suddenly AI comes along, and we say, take this data, our historical data. We hadn't tested it to see if this has labels in it, this has learning capability in it, we just thrust the data to AI. And that's why we saw the initial wave of AI sort of failing, because the data was not ready for AI, ready for this generation of AI. >> And part of I think the leap that clients are finding success with now, is getting novel data types.
And you're moving from the zeros and ones of structured data, to image, language, written language, spoken language; you're capturing different data sets in ways that prior tools never could. And so the classifications that come out of it, the insights that come out of it, the business process transformation that comes out of it, is different than what we would have understood under the structured data format. So I think it's that combination of really being able to push massive amounts of data through a cloud product, to be able to process at scale, that is what I think is the combination that takes it to the next plateau for sure. >> Beyond that, the language that we use today I feel like it's going to change, and you just started to touch on some of it. Sensing, our senses and the visualization and the auditory. So it's sort of this new experience that customers are seeing. And a lot of this machine intelligence behind that, right? >> I call it the autonomous enterprise, right? The journey to be the autonomous enterprise. And when you're on this journey to be the autonomous enterprise, you really need the platform that can help you be that. Cloud is that platform which can help you get to the autonomous journey. But the autonomous journey does not end with the cloud, or doesn't end with the data lake. These are just infrastructures that are basic necessities for being on that autonomous journey. But in the end it's about how you train and scale, the very large scale training that needs to happen on this platform, for AI to be successful. And if you are an autonomous enterprise, then you have really figured out how to tap into AI and machine learning in a way that nobody else has, to derive business value if you will. So you've got the platform, you've got the data and now you're actually tapping into the autonomous components, AI and machine learning, to derive business intelligence and business value. >> So I want to get into a little bit of Oracle's role, but to do that, I want to talk a little bit more about the industry. So if you think about the way the industry seems to be restructuring around data. You know historically, industries had their own stack or value chain. And if you were in the finance industry, you were there for life. >> So when you think about banking, for example, a highly regulated industry, think about agriculture, these are highly regulated industries. It was very difficult to disrupt these industries, but now you're looking at Amazon, and what does an Amazon or any other tech giant like Apple have? They have incredible amounts of data. They understand how people use, or how they want to do, banking. And so they've come up with Apple Cash, or Amazon Pay, and these things are starting to eat into the market. So you would have never thought an Amazon could be a competition to the banking industry just because of regulations, but they are not hindered by the regulations because they are starting at a different level. And so they become an instant threat and an instant disrupter to these highly-regulated industries. That's what data does. When you use data as your DNA for your business and you are sort of born in data, or you figured out how to be autonomous, if you will, and capture value from that data in a very significant manner, then you can get into industries that are not traditionally your own industry. It can be like the food industry, it can be the cloud industry, the book industry, different industries. So that's what I see happening with the tech giants.
>> So Grant, this is a really interesting point that Janet is making, that you mentioned. You started off with like a couple of industries that are highly regulated, harder to disrupt. Music got disrupted, publishing got disrupted, but you've got these regulated businesses. Defense, Automotive actually, hasn't been truly disrupted yet, Tesla maybe is a harbinger. And so you've got this spectrum of disruption, but is anybody safe from disruption? >> I don't think anyone's ever safe from it. It's change and evolution, right? Whether it's swapping horseshoes for cars, or T.V. for movies, or Netflix or any sort of evolution of a business. I wouldn't coast on any of it. And I think to your earlier question around the value that we can help bring to Oracle customers, it's that we have a rich stack of applications, and I find that the space between the applications, the data that spans more than one of them, is a ripe playground for innovations where the data already exists inside a company but it's trapped from both a technology and a business perspective. And that's where I think really any company can take advantage of knowing its data better and changing itself to take advantage of what's already there. >> Yeah, powerful. But people always throw the bromide out that data is the new oil, and we've said no, data is far more valuable 'cause you can use it in a lot of different places. Oil you can use once and it has to follow the laws of scarcity; data doesn't, if you can unlock it. And so a lot of the incumbents, they have built a business around whatever, a factory or a process and people. A lot of the trillion-dollar startups, you know the ones I'm talking about, have become trillion-dollar companies because data is at the core, they're data companies. So it seems like a big challenge for your incumbent customers, clients, to put data at the core, to be able to break down those silos, how do they do that? >> Breaking down silos is really super critical for any business. It used to be okay to operate in a silo. For example, you would think that, oh you know, I could just be payroll and expense reports and it wouldn't matter, or if I get into vendor performance management or purchasing, that can operate as a silo. But anymore we are finding that there are tremendous insights between vendor performance management and expense reports; these things are all connected. So you can't afford to have your data sit in silos. So breaking down that silo actually gives the business very good performance, insights that they didn't have before. So that's one way to go. But another phenomenon happens when you start to break down the silos: you start to recognize what data you don't have to take your business to the next level. That awareness will not happen when you're working with existing data. So that awareness comes into form when you break down the silos and you start to figure out you need to go after a different set of data to get you to new product creation. What would that look like? New test insights or new types of avoidance. That data is just, you have to go through the iteration to be able to figure that out. >> Stakes is what you're saying. So this notion of the autonomous enterprise, help me here, 'cause I get kind of, autonomous and automation coming into IT, ITOps, I'm interested in how you see customers taking that beyond the technology organization into the enterprise. >> I think when AI is a technology problem, the company is at a loss. AI has to be a business problem. AI has to inform the business strategy.
When you look at the companies, the successful companies that have done so, 90% of their investments are going towards data, we know that. And most of it is going towards AI, there's data out there about this. And so we look at, what are these 90% of the companies' investments? Where are these going? And who is doing this right? And who is not doing this right? One of the things we are seeing as results is that the companies that are doing it right have brought data into their business strategy. They've changed their business model. So it's not making a better taxi, but coming up with Uber. So it's not like saying, okay I'm going to be the drug manufacturing company, I'm going to put drugs out there in the market, versus I'm going to do connected health. And so how does data serve the business model of being connected health, rather than being a drug company selling drugs to my customers. It's a completely different way of looking at it. And so now AI is informing drug discovery. AI is not helping you just put more drugs to the market, rather, it's helping you come up with new drugs that would help the process of connected care. >> There's a lot of discussion in the press about the ethics of AI, and how far should we take AI, and how far can we take it from a technology standpoint (chuckles), long road map there, but how far should we take it? Do you feel as though public policy will take care of that? A lot of that narrative is just kind of journalists looking for the negative story. Will that sort itself out? How much time do you spend with your customers talking about that? And what's Oracle's role there? >> So we in Oracle, we're building our data science platform with an explicit feature called explainability of the model. On how the model came up with the features, what features it picked, we can rearrange the features that the model picked. So I think explainability is very important for ordinary people to trust AI, because we can't trust AI. Even data scientists can't trust AI to a large extent. So for us to get to that level where we can really trust what AI is picking in terms of a model, we need to have explainability. And I think a lot of the companies right now are starting to make that as part of their platform. >> Well we're definitely entering a new era. The age of AI, the autonomous enterprise. Folks, thanks very much, great segment, really appreciate it. >> Yeah, a pleasure, thank you for having us. >> You're welcome. >> Thank you for having us. >> All right. And thank you. And keep it right there, we'll be right back with our next guest right after this short break. You're watching theCUBE's coverage of the rebirth of Oracle Consulting. Be right back. (gentle music)
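(Editor's note: the explainability feature Janet describes is specific to Oracle's data science platform and is not reproduced here. As a generic, hypothetical illustration of the same idea, showing which features a model relied on, permutation importance in scikit-learn ranks features by how much shuffling each one degrades the model.)

# Generic sketch of model explainability via permutation importance
# (illustrative only; not Oracle's implementation).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature and measure how much held-out accuracy drops.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

ranked = sorted(zip(result.importances_mean, data.feature_names), reverse=True)
for importance, name in ranked[:5]:
    print(f"{name}: {importance:.3f}")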

Published Date : Apr 28 2020


Dave Tang, Western Digital | Western Digital the Next Decade of Big Data 2017


 

(upbeat techno music) >> Announcer: Live from San Jose, California it's theCUBE, covering Innovating to Fuel the Next Decade of Big Data, brought to you by Western Digital. >> Hey, welcome back everybody. Jeff Frick here at theCUBE. We're at the Western Digital Headquarters off Almaden down in San Jose, a really important place. Western Digital's been here for a while, their headquarters. A lot of innovation's been going on here forever. So we're excited to be here really for the next generation. The event's called Innovating to Fuel the Next Decade of Big Data, and we're joined by many-time Cuber, Dave Tang. He is the SVP of corporate marketing at Western Digital. Dave, always great to see you. >> Yeah. Always great to be here, Jeff. Thanks. >> Absolutely. So you got to MC the announcement today. >> Yes. >> So for the people that weren't there, let's give them a quick overview on what the announcement was and then we can dive in a little deeper. >> Great, so what we were announcing was a major breakthrough in technology that's going to allow us to drive the increase in capacity and density to support big data for the next decade and beyond, right? So capacities and densities have been starting to level off in terms of hard drive technology capability. So what we announced was microwave-assisted magnetic recording technology that will allow us to keep growing that areal density and reducing the cost per terabyte. >> You know, it's fascinating cause everyone loves to talk about Moore's Law and have these silly architectural debates, whether Moore's Law is alive or dead, but, as anyone who's lived here knows, Moore's Law is really an attitude much more than it is the specific physics of microprocessor density growth. And it's interesting to see. As we know, the amount of data is gigantic and growing, and the types of data, not only regular big data, but now streaming data, are bigger and bigger and bigger. I think you talked about how the data coming off of people and machines compared to business data is way bigger. >> Right. >> But you guys continue to push limits and break through, and even though we expect everything to be cheaper, faster, and better, you guys actually have to execute it-- >> Dave: Right. >> Back at the factory. >> Right, well it's interesting. There's this healthy tension, right, a push and pull in the environment. So you're right, it's not just Moore's Law that's enabling a technology push, but we have this virtuous cycle, right? We've realized what the value is of data and how to extract the possibilities and value of data, so that means that we want to store more of that data and access more of that data, which drives the need for innovation to be able to support all of that in a cost effective way. But then that triggers another wave of new applications, new ways to tap into the possibilities of data. So it just feeds on itself, and fortunately we have great technologists, great means of innovation, and a great attitude and spirit of innovation to help drive that. >> Yeah, so for people that want more, they can go to the press releases and get the data. We won't dive deep into the weeds here on the technology, but I thought it was great you had Janet George speak, and she's chief data scientist. Phenomenal, phenomenal big brain. >> Dave: Yes. >> A smart lady. But she talked about, from her perspective we're still just barely even getting onto this data opportunity in terms of automation, and we see over and over at theCUBE events, innovation's really not that complicated.
Give more people access to the data, give them more access to the tools, and let them try things easier and faster and fail quick; there's actually a ton of innovation that companies can unlock within their own four walls. But the data is such an important piece of it, and there's more and more and more of this. >> Dave: Right, right. >> What used to be digital exhaust now is, I think maybe you said, or maybe it was Dave, that there's a whole economy now built on data like we used to do with petroleum. I thought that was really insightful. >> Yeah, right. It's like a gold mine. So not only are the sources of data increasing, which is driving increased volume, but, as Janet was alluding to, we're starting to come up with the tools and the sophistication with machine learning and artificial intelligence to be able to put that data to new use as well as to find the pieces of data to interconnect, to drive these new capabilities and new insights. >> Yeah, but unlike petroleum it doesn't get used up. I mean that's the beauty of data. (laughing) >> Yeah, that's right. >> It's a digital asset that can be used over and over and over again. >> And a self-renewing resource. And you're right, in the sense that it's being used over and over again, the longevity of that data, the useful life, is growing exponentially along with the volume. >> Right, and Western Digital's in a unique position cause you have systems and you have big systems that could be used in data centers, but you also have the media that powers a whole bunch of other people's systems. So I thought one of the real important announcements today was, yes it's an interesting new breakthrough technology that uses energy assist to get more density on the drives, but it's done in such a way that the stuff's all backward compatible. It's plug and play. You've got production scheduled in a couple years, I think, with tests out at customers- >> Dave: That's right. >> Next year. So, you know, that is such an important piece beyond the technology. What's the commercial acceptance? What are the commercial barriers? And this sounds like a pretty interesting way to skin that cow. >> Right, often times the best answers aren't the most complex answers. They're the more elegant and simplistic answers. So from the standpoint of a user, being able to plug and play with older systems, older technologies, that's beautiful, and for us, the ability to manufacture it in high volume reliably and cost effectively is equally as important. >> And you also talked about, which I think was interesting, kind of the relationship between hard drives and flash, because, obviously, flash is a, I want to say the sexy new toy, but it's not a sexy new toy anymore. >> Right. >> It's been around for a while, but, with that pressure on flash performance, you're still seeing the massive amounts of big data, which is growing faster than that. And there is a role for the high density hard drives in that environment, and, based on the forecast you shared, which I'm presuming came from IDC or people that do numbers for a living, still a significant portion of a whole lot of data is not going to be on flash. >> Yeah, that's right. I think we have a tendency, especially in technology, to think either-or, right? Something is going to take over from something else, but in this case it's definitely an and, right.
And a lot of that is driven by this notion that there's fast data and big data, and, while our attention seems to shift over to maybe some fast data applications like autonomous vehicles and realtime applications, surveillance applications, there's still a need for big data because the algorithms that drive those realtime applications have to come from analysis of vast amounts of data. So big data is here to stay. It's not going away or shifting over. >> I think it's a really interesting kind of cross over, which Janet talked about too, where you need the algorithms to continue sharing the system that are feeding, continuing, and reacting to the real data, but then that just adds more vocabulary to their learning set so they can continue to evolve overtime. >> Yeah, what really helps us out in the market place is that because we have technologies and products across that full spectrum of flash and rotating magnetic recording, and we sell to customers who buy devices as well as platforms and systems, we see a lot of applications, a lot of uses of data, and we're able to then anticipate what those needs are going to be in the near future and in the distant future. >> Right, so we're getting towards the end of 2017, which I find hard to say, but as you look forward kind of to 2018 and this insatiable desire for more storage, cause this insatiable creation of more data, what are some of your priorities for 2018 and what are you kind of looking at as, like I said, I can't believe we're going to actually flip the calendar here-- >> Dave: Right, right. >> In just a few short months. (laughing) >> Well, I think for us, it's the realization that all these applications that are coming at us are more and more diverse, and their needs are very specialized. So it's not just the storage, although we're thought of as a storage company, it's not just about the storage of that data, but you have contrive complete environments to capture and preserve and access and transform that data, which means we have to go well beyond storage and think about how that data is accessed, technical interfaces to our memory products as well as storage products, and then where compute sits. Does it still sit in a centralized place or do you move compute to out closer to where the data sits. So, all this innovation and changing the way that we think about how we can mine that data is top of the mind for us for the next year and beyond. >> It's only job security for you, Dave. (laughing) >> Dave: Funny to think about. >> Alright. He's Dave Tang. Thanks for inviting us and again congratulations on the presentation. >> Always a pleasure. >> Alright, Dave Tang, I'm Jeff Frick. You're watching theCUBE from Western Digital headquarters in San Jose, California. Thanks for watching. (upbeat techno music)
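The capacity arithmetic behind the announcement Tang walks through above is easy to sketch. The figures below are illustrative assumptions, not Western Digital's published MAMR specifications; the point is only how gains in areal density flow through to drive capacity and cost per terabyte.

```python
# Hypothetical sketch: how areal-density gains translate into drive capacity
# and cost per terabyte. All numbers are assumptions for illustration,
# not published Western Digital specifications.

def drive_capacity_tb(areal_density_gbit_per_in2: float,
                      usable_area_in2_per_surface: float = 7.0,
                      platters: int = 8) -> float:
    """Rough usable capacity in TB: areal density (gigabits per square inch)
    times the recording area summed over both surfaces of every platter,
    converted from bits to bytes."""
    surfaces = platters * 2
    total_gigabits = areal_density_gbit_per_in2 * usable_area_in2_per_surface * surfaces
    return total_gigabits / 8 / 1000  # gigabits -> gigabytes -> terabytes

def cost_per_tb(drive_price_usd: float, capacity_tb: float) -> float:
    return drive_price_usd / capacity_tb

# Assumed densities: a conventional-recording drive near 1,000 Gbit/in^2
# versus an energy-assisted drive near 4,000 Gbit/in^2, same platter geometry.
conventional = drive_capacity_tb(1_000)   # ~14 TB with the assumptions above
assisted = drive_capacity_tb(4_000)       # ~56 TB with the same geometry

print(f"conventional: {conventional:.0f} TB, energy-assisted: {assisted:.0f} TB")
print(f"$/TB at an assumed $500 drive price: "
      f"${cost_per_tb(500, conventional):.0f} vs ${cost_per_tb(500, assisted):.0f}")
```

With the drive price held constant, the density gain is what drives the cost per terabyte down, which is the economic argument made in the announcement.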

Published Date : Oct 11 2017



Mike Cordano, Western Digital | Western Digital the Next Decade of Big Data 2017


 

>> Announcer: Live from San Jose, California, it's The Cube. Covering Innovating to Fuel the Next Decade of Big Data. Brought to you by Western Digital. >> Hey, welcome back everybody. Jeff Frick here with The Cube. We're at the Western Digital headquarters in San Jose, the Great Oaks Campus, a really historic place in the history of Silicon Valley and computing. It's The Innovating to Fuel the Next Generation of Big Data event with Western Digital. We're really excited to be joined by our next guest, Mike Cordano. He's the president and chief operating officer of Western Digital. Mike, great to see you. >> Great to see you as well. Happy you guys could be here. It's an exciting day. >> Absolutely. First off, I think the whole merger thing is about done, right? That's got to feel good. >> Yeah, it's done, but there's legs to it, right? So we've combined these companies now, three of them, three large ones, so obviously Western Digital and Hitachi Global Storage, now we've added SanDisk into one Western Digital, so we're all together. Obviously more to do, as you expect in a large scale integration. There will be a year or two of bringing all those business processes and systems together, but I got to say, the teams are coming together great, showing up in our financial performance and our product execution, so things are really coming together. >> Yeah, not an easy task by any stretch of the imagination. >> No, not easy, but certainly a compliment to our team. I mean, we've got great people. You know, like anything, if you can harness the capabilities of your team, there's a lot you can accomplish, and it really is a compliment to the team. >> Excellent. Well, congratulations on that, and talking a bit about this event here today, you've even used "Big Data" in the title of the event, so you guys are obviously in a really unique place, Western Digital. You make systems, big systems. You also make the media that feeds a lot of other people's systems, but as the big data grows, the demand for data grows, it's got to live somewhere, so you're sitting right at the edge where this stuff's got to sit. >> Yeah, that's right, and it's central to our strategy, right? So if you think about it, there's three fundamental technologies that we think are just inherent in all of the evolution of compute and IT architecture. Obviously, there is compute, there is storage or memory, and then there's sort of movement, or interconnect. We obviously live in the storage or memory node, and we have a very broad set of capabilities, all the way from rotating magnetic media, which was our heritage, now including non-volatile memory and flash, and that's just foundational to everything that is going to come, and as you said, we're not going to stop there. It's not just a devices or component company, we're going to continue to innovate above that into platforms and systems, and why that becomes important to us, is there's a lot of technology innovation we can do that enhances the offering that we can bring to market when we control the entire technology stat. >> Right. Now, we've had some other guests on and people can get more information on the nitty-gritty details of the announcement today, the main announcement. Basically, in a nutshell, enabling you to get a lot more capacity in hard drives. But, I thought in your opening remarks this morning, there were some more high-level things I wanted to dig into with you, and specifically, you made an analogy of the data economy, and compared it to the petroleum economy. 
I've never... A lot of times, they talk about big data, but no one really talks about it, that I've heard, in those terms, because when you think about the petroleum economy, it's so much more than fuel and cars, and the second-order impacts, and the third-order impacts on society are tremendous, and you're basically saying, "We're going to "do this all over again, but now it's based on data." >> Yeah, that's right, and I think it puts it into a form that people can understand, right? I think it's well-proven what happened around petroleum, so the discovery of petroleum, and then the derivative industries, whether it be automobiles, whether it be plastics, you pick it, the entire economy revolved around, and, to some degree, still revolves around petroleum. The same thing will occur around data. You're seeing it with investments, you hear now things like machine learning, or artificial intelligence, that is all ways to transform and mine data to create value. >> Right. >> And we're going to see industries change rapidly. Autonomous cars, that's going to be enabled by data, and capabilities here, so pick your domain. There's going to be innovation across a lot of fronts, across a lot of traditional vertical industries, that is all going to be about data and driven by data. >> It's interesting what Janet, Doctor Janet George talked about too a little bit is the types of data, and the nozzles of the data is also evolving very quickly from data at rest to data in motion, to real-time analytics, to, like you said, the machine learning and the AI, which is based on modeling prior data, but then ingesting new data, and adjusting those models so even the types and the rate and the speed of the data is under dramatic change right now. >> Yeah, that's right, and I think one of the things that we're helping enable is you kind of get to this concept of what do you need to do to do what you describe? There has to be an infrastructure there that actually enables it. So, when you think about the scale of data we're dealing with, that's one thing that we're innovating around, then the issue is, how do you allow multiple applications to simultaneously access and update and transform that? Those are all problems that need to be solved in the infrastructure to enable things like AI, right? And so, where we come into play, is creating that infrastructure layer that actually makes that possible. The other thing I talked about briefly in the Q and A was, think about the problem of a future where the data set is just too large to actually move it in a substantive way to the compute. We actually have to invert that model over time architecturally, and bring the compute to the data, right? Because it becomes too complicated and too expensive to move from the storage layer up to compute and back, right? That is a complex operation. That's why those three pillars of technology are so important. >> And you've talked, and we're seeing in the Cloud right, because this continuing kind of atomization, atomic, not automatic, but making these more atomic. A smaller unit that the Cloud has really popularized, so you need a lot, you need a little, really, by having smaller bits and bytes, it makes that that much more easy. 
But another concept that you delved into a little was fast data versus big data, and clearly flash has been the bright, shiny object for the last couple years, and you guys play in that market as well, but it is two very different ways to think of the data, and I thought the other statistic that was shared is you know, the amount of data coming off of the machines and people dwarfs the business data, which has been the driver of IT spend for the last several decades. >> Yeah, no, that's right, and sort of that... You think about that, and the best analogy is a broader definition of IOT, right? Where you've got all of these censors, whether it be camera censors, because that's just a censor, creating an image or a video, or if it's more industrialized too, you've got all these sources of data, and they're going to proliferate at an exponential rate, and our ability to aggregate that in some sort of an organized way, and then act upon it, again, let's use the autonomous car as the example. You've got all these censors that are in constant motion. You've got to be able to aggregate the data, and make decisions on it at the edge, so that's not something... You can't deal with latency up to the Cloud and back, if it's an automobile, and it needs to make an instantaneous decision, so you've got to create that capability locally, and so when you think about the evolution of all this, it's really the integration of the Cloud, which, as Janet talked about, is the ability to tap into this historical or legacy data to help inform a decision, but then there's things happening out at the edge that are real time, and you have to have the capability to ingest the content, make a decision on it very quickly, and then act on it. >> Right. There's a great example. We went to the autonomous... Just navigation for the autonomous vehicles. It's own subset that I think Goldman-Sachs said it a seven billion dollar industry in the not-too-distant future, and the great example is this combination of the big data and the live data is, when they actually are working on the road. So you've got maps that tell you, and are updated, kind of what the road looks like, but on Tuesday, they were shifting the lane, and that particular lane now has cones in it, so the combination of the two is such a powerful thing. >> That's right. >> I want to dive into another topic we talked about, which is really architecting for the future. Unlike oil, data doesn't get consumed and is no longer available, right? It's a reusable asset, and you talked about classic stove-topping of data within an application center world where now you want that data available for multiple applications, so very different architecture to be able to use it across many fronts, some of which you don't even know yet. >> That's right. I think that's a key point. One of the things, when we talk to CEOs, or CIOs I should say, what they're realizing, to the extent you can enable a cost-effective mechanism for me to store and keep everything, I don't know how I'll derive value from it some time in the future, because as applications evolve, we're finding new insights into what can help drive decisions or innovation, or, to take it to health care, some sort of innovation that cures disease. That's one of the things that everybody wants to do. I want to build aggregate everything. 
If I could do that cost effectively enough, I'll find a way to get value out of it over time, and that's something where, when we're thinking about big data and what we talked about today, that's central to that idea, and enabling it. >> Right, and digital transformation, right, the hot buzz word, but we hear, time and time again, such a big piece of that is giving the democratization. Democratization of the data, so more people have access to it, democratization of the tools to manipulate that data, not just Mahogany Row super smart people, and then to have a culture that lets people actually try, experiment, fail fast, and there's a lot of innovation that would be unlocked right within your four walls, that probably are not being tapped into. >> Well, that's right, and that's something that innovation, and an innovation culture is something that we're working hard at, right? So if you think about Western Digital, you might think of us as, you know, legacy Western Digital as sort of a fast following, very operational-centric company. We're still good at those things, but over the last five years, we've really pushed this notion of innovation, and really sort of pressing in to becoming more influential in those feature architectures. That drives a culture that, if we think about the technical community, if we create the right sort of mix of opportunity, appetite for some risk, that allows the best creativity to come out of our technical... Innovating along these lines. >> Right, I'll give you the last word. I can't believe we're going to turn the calendar here on 2017, which is a little scary. As you look forward to 2018, what are some of your top priorities? What are you going to be working on as we come into the new calendar year? >> Yeah, so as we look into 2018 and beyond, we really want to drive this continued architectural shift. You'll see us be very active, and I think you talked about it, you'll see us getting increasingly active in this democratization. So we're going to have to figure out how we engage the broader open-source development world, whether it be hardware or software. We agree with that mantra, we will support that. Obviously we can do unique development, but with some hooks and keys that we can drive a broader ecosystem movement, so that's something that's central to us, and one last word would be, one of the things that Martin Fink has talked about which is really part of our plans as we go onto the new year, is really this inverting the model, where we want to continue to drive an architecture that brings compute to the storage and enables some things that just can't be done today. >> All right, well Mike Cordano, thanks for taking a few minutes, and congratulations on the terrific event. >> Thank you. Appreciate it. >> He's Mike Cordano, I'm Jeff Frick, you're watching The Cube, we're at Western Digital headquarters in San Jose, Great Oaks Campus, it's historic. Check it out. Thanks for watching.
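Cordano's point about inverting the model, bringing compute to the data rather than the data to the compute, comes down to the cost of moving bytes. The sketch below is a generic back-of-the-envelope comparison with assumed record sizes and volumes, not a description of Western Digital's architecture.

```python
# Back-of-the-envelope sketch of "move compute to the data": either ship raw
# records to a central cluster, or run the aggregation where the data sits and
# ship only the result. Record size and daily volume are illustrative assumptions.

RECORD_BYTES = 1_000           # assumed size of one raw sensor record
RECORDS_PER_DAY = 50_000_000   # assumed per-site daily volume

def bytes_moved_ship_raw(records: int) -> int:
    """Centralized model: every raw record crosses the network."""
    return records * RECORD_BYTES

def bytes_moved_push_compute(summary_bytes: int = 10_000) -> int:
    """Inverted model: the computation runs next to the storage, and only a
    small summary (counts, anomalies, model updates) is transferred."""
    return summary_bytes

raw = bytes_moved_ship_raw(RECORDS_PER_DAY)
pushed = bytes_moved_push_compute()
print(f"ship raw data: {raw / 1e9:.0f} GB/day   push compute: {pushed / 1e3:.0f} KB/day")
```

The ratio, not the absolute numbers, is the argument: as the data set grows, moving the question to the data scales while moving the data to the question does not.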

Published Date : Oct 11 2017



Western Digital Taking the Cloud to the Edge, Panel 2 | DataMakesPossible


 

>> They are disruptive technologies. And if you think about the disruption that's happening in business, with IoT, with OT, and with big data, you can't get anything more disruptive to the whole of the business chain as this particular area. It's an area that I focused on myself, asking the question, should everything go to the cloud? Is that the new future? Is 90% of the computing going to go to the cloud with just little mobile devices right on the edge? Felt wrong when I did the math on it, I did some examples of real-world environments, wind farms, et cetera, it clearly was not the right answer, things need to be near the edge. And I think one of the areas to me that solidified it was when you looked at an area like video. Huge amounts of data, real important decisions being made on the content of that video, for example, recognizing a face, a white hat or a black hat. If you look at the technology, sending that data somewhere to do that recognition just does not make sense. Where is it going? It's going actually into the camera itself, right next to the data, because that's where you have the raw data, that's where you have the maximum granularity of data, that's where you need to do the processing of which faces are which, right close to the edge itself, and then you can send the other data back up to the cloud, for example, to improve those algorithms within that camera, to do all that sort of work on the batch basis over time, that's what I was looking at, and looking at the cost justification for doing that sort of work. So today, we've got a set people here on the panel, and we want to talk about coming down one level to where IoT and IT are going to have to connect together. So on the panel I've got, I'm going to get these names really wrong, Sanjeev Kumar? >> Yes, that's right. >> From FogHorn, could you introduce yourself and what you're doing where the data is meeting the people and the machines? >> Sure, sure, so my name is Sanjeev Kumar, I actually run engineering for a company called FogHorn Systems, we are actually bringing analytics and machine learning to the edge, and, so our goal and motto is to take computing to where the data is, than the other way around. So it's a two-year-old company that started, was incubated in the hive, and we are in the process of getting our second release of the product out shortly. >> Excellent, so let me start at the other end, Rohan, can you talk about your company and what contribution you're focusing on? >> Sure, I'm head product marketing for Maana, Maana is a startup, about three years old, what we're doing is we're offering an enterprise platform for large enterprises, we're helping the likes of Shell and Maersk and Chevron digitally transform, and that simply means putting the focus on subject matter experts, putting the focus on the people, and data's definitely an important part of it, but allowing them to bring their expertise into the decision flows, so that ultimately the key decisions that are driving the revenue for these behemoths, are made at a higher quality and faster. >> Excellent. Well, two software companies, we have a practitioner here who is actually doing fog computing, doing it for real, has been doing it for some time, so could you like, Janet George from Western Digital, can you introduce yourself, and say something from the trenches, of what's really going on? >> Okay, very good, thank you. 
I actually build infrastructure for the edge to deal with fog computing, and so for Western Digital, we're very lucky, because we are the largest storage manufacture, and we have what we call Internet of Things, and Internet of Test Equipment, and I process petabytes of data that comes out of the Internet of Things, which is basically our factories, and then I take these petabytes of data, I process them both on the cloud and then on the edge, but primarily, to be able to consume that data. And the way we consume that data is by building very high-profile models through artificial intelligence and machine learning, and I'll talk a lot more about that, but at the end of the day, it's all about consuming the data that you collect from anywhere, Internet of Things, computer equipment, data that's being produced through products, you have to figure out a way to compute that, and the cloud has many advantages and many trade-offs, and so we're going to talk about the trade-offs, that's where the gap for computing comes into play. >> Excellent, thanks very much. And last but not least, we have Val, and I can never pronounce your surname. >> Bercovici. >> Thank you. (chuckling) You are in the midst of a transition yourself, so talk about where you have been and where you're going. >> For the better part of this century, I've been with NetApp, working at various functions, obviously enterprise storage, and around 2008, my developer instinct kind of fired up, and this thing called cloud became very interesting to me. So I became a self-anointed cloud czar at NetApp, and I ended up initiating a lot of our projects which we know today as the NetApp Data Fabric, that culminated about 18 months ago, in acquisition of SolidFire, and I'm now the acting CTO of SolidFire, but I plan to retire from the storage industry at the end of our fiscal year, at the end of April, and I'm spending a lot of time with particularly the Cloud Native Compute Foundation, that is, the opensource home of Google's Kubernetes Technology and about seven other related projects, we keep adding some almost every month, and I'm starting to lose track, and spending a lot of time on the data gravity challenge. It's a challenge in the cloud, it's a particularly new and interesting challenge at the edge, and I look forward to talking about that. >> Okay, and data gravity is absolutely key, isn't it, it's extremely expensive and extremely heavy to move around. >> And the best analogy is workloads are like electricity, they move fairly easily and lightly, data's like water, it's really hard to move, particularly large bodies around. >> Great. I want to start with one question though, just in the problem, the core problem, particularly in established industries, of how do we get change to work? In an IT shop, we have enough problems dealing with operations and development. In the industrial world, we have the IT and the OT, who look at each other with less than pleasure, and mainly disdain. How do we solve the people problem in trying to put together solutions? You must be right in the middle of it, would you like to start with that question? >> Absolutely, so we are 26 years old, probably more than that, but we have very old and new mix of manufacturing equipment, it's a storage industry, and in our storage industry, we are used to doing things a certain way. We have existing data, we have historical data, we have trend data, you can't get rid of what you already have. 
The goal is to make connectors such that you can move from where you're at to where you're going, and so you have to be able to take care of the shift that is happening in the market, so at the end of the day, if you look at five years from now, it's all going to be machine learning and AI, right? Agent technology's already here, it's proven, we can see, Siri is out here, we can see Alexa, we can see these agent technologies out there, so machine learning is a getting a lot of momentum, deep learning and neural networks, things like that. So we got to be able to look at that data and tap into our data, near realistically, very different, and the way to do that is really making these connections happen, tapping into old versus new. Like for example, if you look at storage, you have file storage, you have block storage, and then you have object storage, right? We've not really tapped into the field of object storage, and the reason is because if you are going to process one trillion objects like Amazon is doing right now with S3, you can't do it with the file system level storage or with the blog system level storage, you have to go to objects. Think Internet of Things. How many trillions of objects are going to come out of these Internet of Things? So one, you have to be positioned from an infrastructure standpoint. Two, you have to be positioned from a use case prototyping perspective, and three, you got to be able to scale that very rapidly, very quickly, and that's how change happens, change does not happen because you ask somebody to change their behavior, change happens when you show value, and people are so eager to get that value out of what you've shown them in real life, that they are so quick to adapt. >> That's an excellent-- >> If I could comment on that as well, which is, we just got through training a bunch of OT guys on our software, and two analogies that actually work very well, one is sort of, the operational people are very familiar with circuit diagrams, and so, and sort of, flow of things through essentially black boxes, you can think of these as something that has a bunch of inputs and has a bunch of outputs. So that's one thing that worked very well. The second thing that works very well is the PLC model, and there are direct analogies between PLC's and analytics, which people on the floor can actually relate to. So if you have software that's basically based on data streams and time, as a first-class citizen, the PLC model again works very well in terms of explaining the new software to the OT people. >> Excellent, okay, would you want to come in on that as well? >> Sure, I think a couple of points to add to what Janet said, I couldn't agree more in terms of the result, I think Maana did a few projects, a few pilots to convince customers of their value, and we typically focus very heavily on operationalizing the output, so we are very focused on making sure that there is some measurable value that comes out of it, and it's not until the end user started seeing that value that they were willing and open to adopt the newer methodologies. 
A second point to that is, a lot of the more recent techniques available to solve certain challenges, there are deep learning neural nets there's all sorts of sophisticated AI and machine learning algorithms that are out there, a lot of these are very sophisticated in their ability to deliver results, but not necessarily in the transparency of how you got that, and I think that's another thing that Maana's learning, is yes, we have this arsenal of fantastic algorithms to throw at problems, but we try to start with the simplest approach first, we don't unnecessarily try to brute force, because I think an enterprise, they are more than willing to have that transparency in how they're solving something, so if they're able to see how they were able to get to us, how the software was able to get to a certain conclusion, then they are a lot happier with that approach. >> Could you maybe just give one example, a real-world example, make it a little bit real? >> Right, absolutely, so we did a project for a very large organization for collections, they have a lot of outstanding capital locked up and customers not paying, it's a standard problem, you're going to find it in pretty much any industry, and so for that outstanding invoice, what we did was we went ahead and we worked with the subject matter experts, we looked at all the historical accounts receivable data, we took data from a lot of other sources, and we were able to come up with models to predict when certain customers are likely to pay, and when they should be contacted. Ultimately, what we wanted to give the collection agent were a list of customers to call. It was fairly straightforward, of course, the solution was not very, very easy, but at least on a holistic level, it made a lot of sense to us. When we went to the collection agents, many of them actually refused to use that approach, and this is part of change management in some sense, they were so used to doing things their way, they were so used to trying to target the customers with the largest outstanding invoice, or the ones that hadn't paid for the longest amount of time, that it actually took us a while, because initially, what the feedback we got was that your approach is not working, we're not seeing the results. And when we dug into it, it was because it wasn't being used, so that would be one example. >> So again, proof points that you will actually get results from this. >> Absolutely, and the transparency, I think we actually sent some of our engineers to work with the collections agents to help them understand what approach is it that we're taking, and we showed them that this is not magic, we're actually, instead of looking at the final dollar value, we're looking, we're calculating time value lost, so we are coming up with a metric that allows us to incorporate not just the outstanding amount, or the time that they haven't paid for, but a lot of other factors as well. >> Excellent, Val. 
>> When you asked that question, I immediately went to more of a nontechnical business side of my brain to answer it, so my experience over the years has been particularly during major industry transitions, I'm old enough to remember the mainframe to client server transition, and now client server to virtualization and cloud, and really, sales reps have that well-earned reputation of being coin-operated, though it's remarkable how much you can adjust compensation plans for pretty much anyone, in a capitalist environment, and the IT/OT divide, if you will, is pretty easy to solve from a business perspective when you take someone with an IT supporting the business mentality, and you compensate them on new revenue streams, new business, all of a sudden, the world perspective changes sometimes overnight, or certainly when that contract is signed. That's probably the number one thing you can do from a people perspective, is incent them and motivate them to focus on these new things, the technology is, particularly nowadays is evolving to support them for these new initiatives, but nothing motivates like the right compensation plan. >> Excellent, a great series of different viewpoints. So the second question I have again coming down a bit to this level, is how do we architect a solution? We heard you got to architect it, and you've got less, like this, it seems to me that that's pretty difficult to do ahead of where you're going, that in general, you take smaller steps, one step at a time, you solve one problem, you go on to the next. Am I right in that? If I am, how would you suggest the people go about this decision-making of putting architectures together, and if you think I'm wrong and you have a great new way of doing it, I'd love to hear about it. >> I can take a shorter route. So we have a number of customers that are trying to adopt, are going through a phased way of adopting our technology and products, and so it begins with first gathering of the data, and replaying it back, to build the first level of confidence, in the sense that the product is actually doing what you're expecting it to do. So that's more from monitoring administration standpoint. The second stage is you should begin to capture analytical logic into the project, where it can start doing prediction for you, so you go into, so from operational, you go into a predictive maintenance, predictive maintenance, predictive models standpoint. The third part is prescriptive, where you actually help create a machine learning model, now, it's still in flux in terms of where the model gets created, whether it's on the cloud, in a central fashion, or some sort of a, the right place, the right context in a multi-level hierarchical fog layer, and then, you sort of operationalize that as close to the data again as possible, so you go through this operational to predictive to prescriptive adoption of the technology, and that's how people actually build confidence in terms of adopting something new into, let's say, a manufacturing environment, or things that are pretty expensive, so I give you another example where you have the case of capacitors being built on a assembly line, manufacturing, and so how do you, can you look at data across different stations and manufacturing on a assembly line? And can you predict on the second station that it's going to fail on the eighth one? By that, what you're doing is you are actually reducing the scrap that's coming off of the assembly line. 
So, that's the kind of usage that you're going to in the second and third stage. >> Host: Excellent. Janet, do you want to go on? >> Yeah, I agree and I have a slightly different point of view also. I think architecture's very difficult, it's like Thomas Edison, he spent a lot of time creating negative knowledge to get to that positive knowledge, and so that's kind of the way it is in the trenches, we spend a lot of time trying to think through, the keyword that comes to mind is abstraction layers, because where we came from, everything was tightly coupled, and tightly coupled, computer and storage are tightly coupled, structured and unstructured data are tightly coupled, they're tightly coupled with the database, schema is tightly coupled, so now we are going into this world of everything being decoupled. In that, multiple things, multiple operating systems should be able to use your storage. Multiple models should be able to use your data. You cannot structure your data in any kind of way that is customized to one particular model. Many models have to run on that data on the fly, retrain itself, and then run again, so when you think about that, you think about what suits best to stay in the cloud, maybe large amounts of training data, schema that's already processed can stay on the cloud. Schema that is very dynamic, schema that is on the fly, that you need to read, and data that's coming at you from the Internet of Things that's changing, I call it heteroscedastic data, which is very statistical in nature, and highly variable in nature, you don't have time to sit there and create rows and columns and structure this data and put it into some sort of a structured set, you need to have a data lake, you need to have a stack on top of that data lake that can then adapt, create metadata, process that data and make it available for your models, so, and then over time, like I totally believe that now we're running into near realtime compute bottleneck, processing all this pattern processing for the different models and training sets, so we need a stack that we can quickly replace with GPUs, which is where the future is going, with pattern processing and machine learning, so your architecture has to be extremely flexible, high layers of abstraction, ability to train and grow and iterate. >> Excellent. Do you want to go next? >> So I'll be a broken record, back to data gravity, I think in an edge context, you really got to look at the cost of processing data is orders of magnitude less than moving it or even storing it, and so I think that the real urgency, I don't know, there's 90% that think of data at the edge is kind of wasted, you can filter through it and find that signal through the noise, so processing data to make sure that you're dealing with really good data at the edge first, figuring out what's worth retaining for future steps, I love the manufacturing example, I have lots of customer examples ourselves where, for quality control in a high-moving assembly line, you want to take thousands of not millions of images and compare frame and frame exactly according to the schematics where the device is compared to where it should be, or where the components, and the device compared to where they should be, processing all of that data locally and making sure you extract the maximum value before you move data to a central data lake to correlate it against other anomalies or other similarities, that's really key, so really focus on that cost of moving and storing data, yeah. 
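A minimal sketch of the local quality-control filtering Bercovici describes: score every captured frame against a golden reference at the edge and forward only the frames that deviate. The array shapes, the noise model, and the threshold below are illustrative assumptions, not a specific inspection system.

```python
import numpy as np

def frame_deviation(frame: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute pixel difference between a captured frame and the
    golden reference image for that camera position."""
    return float(np.mean(np.abs(frame.astype(np.int32) - reference.astype(np.int32))))

def frames_worth_uploading(frames, reference, threshold=8.0):
    """Score every frame locally; keep only the ones that deviate enough
    from the reference to be worth correlating in the central data lake."""
    return [f for f in frames if frame_deviation(f, reference) > threshold]

# Illustrative data: one reference image and a batch of slightly noisy captures,
# one of which is a genuine anomaly.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
frames = [np.clip(reference.astype(np.int32) + rng.integers(-3, 4, reference.shape), 0, 255)
          .astype(np.uint8) for _ in range(100)]
frames[17] = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)

kept = frames_worth_uploading(frames, reference)
print(f"uploaded {len(kept)} of {len(frames)} frames")
```

Only the anomalous frame crosses the network; the other ninety-nine are scored and discarded where they were captured, which is the data-gravity point.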
>> Yes, do you want the last word? >> Sure, Maana takes an interesting approach, I'm going to up-level a little bit. Whenever we are faced with a customer or a particular problem for a customer, we try to go over the question-answer approach, so we start with taking a very specific business question, we don't look at what data sources are available, we don't ask them whether they have a data lake, or we literally get their business leaders, their subject matter experts, we literally lock them up in a room and we say, "You have to define "a very specific problem statement "from which we start working backwards," each problem statement can be then broken down into questions, and what we believe is any question can be answered by a series of models, you talked about models, we go beyond just data models, we believe anything in the real world, in the case of, let's say, manufacturing, since we're talking about it, any smallest component of a machine should be represented in the form of a concept, relationships between people operating that machinery should be represented in the form of models, and even physics equations that are going into predicting behavior should be able to represent in the form of a model, so ultimately, what that allows us is that granularity, that abstraction that you were talking about, that it shouldn't matter what the data source is, any model should be able to plug into any data source, or any more sophisticated bigger model, I'll give you an example of that, we started solving a problem of predictive maintenance for a very large customer, and while we were solving that predictive maintenance problem, we came up with a number of models to go ahead and solve that problem. We soon realized that within that enterprise, there are several related problems, for example, replacement of part inventory management, so now that you figured out which machine is going to fail at roughly what instance of time from now, we can also figure out what parts are likely to fail, so now you don't have to go ahead and order a ton of replacement parts, because you know what parts are going to likely fail, and then you can take that a step further by figuring out which equipment engineer has the skillset to go ahead and solve that particular issue. Now, all of that, in today's world, is somewhat happening in some companies, but it is actually a series of point solutions that are not talking to each other, that's where our pattern technology graph is coming into play where each and every model is actually a note on the graph including computational models, so once you build 10 models to solve that first problem, you can reuse some of them to solve the second and third, so it's a time-to-value advantage. >> Well, you've been a fantastic panel, I think these guys would like to get to a drink at the bar, and there's an opportunity to talk to you people, I think this conversation could go on for a long, long time, there's so much to learn and so much to share in this particular information. So with that, over to you! >> I'll just wrap it up real quick, thanks everyone, give the panel a hand, great job. 
Thanks for coming out, we have drinks for the next hour or two here, so feel free to network and mingle, great questions to ask them privately one-on-one, or just have a great conversation, and thanks for coming, we really appreciate it, for our Big Data SV Event livestreamed out, it'll be on demand on YouTube.com/siliconangle, all the video, if you want to go back, look at the presentations, go to YouTube.com/siliconangle, and of course, siliconangle.com, and Wikibond.com for the research and content coverage, so thanks for coming, one more time, big round of applause for the panel, enjoy your evening, thanks so much.

Published Date : Mar 16 2017

