Yolande Piazza & Zac Maufe, Google Cloud


 

(upbeat music) >> Hello, and welcome to this Cube conversation. I'm Dave Nicholson, and this is part of our continuing coverage of Google Cloud Next 2021. We have a very interesting subject to discuss, and I have two special guests from Google joining me in a conversation about the financial services space. I'm joined by Yolande Piazza, vice president of financial services sales for Google Cloud, and Zac Maufe, managing director for global financial services solutions for Google Cloud. Yolande and Zac, welcome to the Cube. >> Thank you for having us. Looking forward to it. >> Well, it's great to have you here. You know, financial services is really an interesting area when you talk about cloud, because I'm sure you both remember a time, not that long ago, when we could ask a financial services organization what their plans for cloud were, or what their cloud strategy was, and they would give a one-word answer, and that answer was: never. (laughing) So Zac, let's start out with you. What has changed? Are you and Yolande going to tell us that, in fact, financial services organizations are leveraging cloud now? >> Yeah, it's a very exciting time to be in the cloud space in financial services, because you're exactly right, David. People are starting to make the transition to cloud in a real way. And a lot has gone into that. As you know, it's a highly regulated space, so there were a lot of legitimate reasons around getting the regulatory frameworks in place and making sure that the risk and compliance pieces were addressed. But there was also, as you know, the fact that technology is a major backbone for financial services, and so there's this question of, how do we transition? A lot of work and time has gone into moving workloads, and into thinking about what the right migration strategy is to get from the current situation to a more cloud-native world.
And to your point, we're really early, we're really early, but we're very excited, and we've been investing heavily on our side to get those foundational pieces in place. But we also realized that we have to think about the business cases we want to build on top of cloud. It's not just IT modernization, which is a big part of the story; the other part is that once you get all of this technology onto the cloud platform, there are things you can do that you couldn't do in on-prem situations. A lot of that for us is around the data, AI, and ML space, and we really see that being the way to unlock huge amounts of value. Both require massive amounts of compute and breaking down all of the silos that have developed over time within financial institutions, and moving to the cloud is the way to unlock a lot of that. So we're really excited about a lot of those use cases that are starting to come to life now. >> Yeah. So I want to dig a little deeper on some of that, Zac, but before we do, Yolande, make this real for us. Give me some examples of actual, real-life financial services organizations and what they're doing with Google Cloud now. >> Yeah, absolutely. And I think we're really proud to be able to announce a number of new partnerships across the industry. You think about Wells Fargo, you think about Scotiabank, you think about what we're doing with HSBC. They really are starting to bring this to life and to recognize that it's not just internal; you have to look at that transformation to cloud as, how do you use this platform to help you go on the journey with your customers? I think a move to a multi-cloud, common approach for our customers and our clients is exactly what we need to be focused on. And the other- >> Hold on, hold on, Yolande. I'm sorry. Did the Google person just say multi-cloud? Because multi-cloud doesn't sound like only Google Cloud to me.
Can you- >> No, and I think Wells, absolutely. Wells announced it's taking a multi-cloud approach to its digital infrastructure strategy, leveraging both Google Cloud and Microsoft Azure. The reason being, they've openly communicated that locked-in, proprietary systems aren't the way to go for them. They want that open flexibility; they want the ability to move workloads across different environments. And I think it's well known that this aligns completely with our principles. At Google, we've always said that we support open, multi-, and hybrid-cloud strategies, because we believe our customers should be able to run what they want, where they want it. That was exactly the philosophy that Wells took. And if you look at what they're trying to do, they're looking to serve their customers in a different way. Customers now are looking for personalized services, instant gratification, and the ability to interact where they want and when they want. So we're working with the Wells teams to bring our complex AI and data solutions to life, to enable them to move at speed and serve their customers in a rapidly changing world. >> So Yolande, part of the move to cloud includes the fact that we're all human beings, and perception can become reality. With issues like security, which are always at the forefront of someone's mind in the financial services space, there is the perception, and then there is the reality. Walk us through where perception stands today in the financial services space. And then Zac, I'm going to go back to you to tell us the reality, and whether there's a disconnect, because technology in this space has often been ahead of people's comfort level, for rational reasons. So Yolande, can you talk about, from a perception perspective, where people are?
>> So I have to tell you, we are having conversations with both the incumbents and traditional organizations, as well as the up-and-coming fintechs and neobanks, around how technology really unlocks and unleashes a new business model. So we're talking about things like: how does technology help them grow the organization? How does it take costs out of the organization? How do you use a cloud platform to think about managing risk, whether that's operational, reputational, industry, or regulatory risk? And then, how do we enable our partners and our customers to move at speed? All of those conversations are now on the table. And I think a big shift from when Zac and I were both sitting on the other side of the table, in the financial services industry, is a recognition that this couldn't and shouldn't be done alone; it's going to require partnership, and it's going to require really shifting to put technology at the forefront. And when you talk about perception, I would say that a couple of years ago the perception was that these were banking companies that happened to use technology. Now we're really starting to see the shift: these are technology companies serving their customers in a banking environment. >> So Zac, can you give us some examples of how that plays out from a solutions perspective? What are some of the things you and Yolande are having conversations with these folks about? >> Yeah, absolutely. I think there are three major trends we're seeing where we can bring the power of the Google ecosystem to really change business models and change how things are done. The first is really this massive change that's been happening for over 10 years now: customers expecting financial institutions to meet them where they are.
And that started with information being delivered to them through mobile devices and online banking. Then it went to payments, and now it's going into lending and into insurance. But it changes the way financial services companies need to operate, because now they need to figure out how to deliver everything digitally, embedded into the experiences their customers are having across all of these digital ecosystems. So there's a lot we're doing in that space. The second is really around modernizing the technology environment. There is still a massive amount of paper in these organizations. Most of it has been transferred to digital paper, but the workflows and the processes still need to be streamlined, and there's a lot we can do with our AI models and technology to take unstructured data and create structured data. Think Google Photos: you can now search your photo library and find pictures of you on bridges. We can now do the same thing with documents, and with routine interactions through chatbots. People are expecting 24/7 service, and a lot of people want to interact through chat rather than voice. And the final area where we're seeing a lot of use cases is the risk and regulatory space. Coming out of the financial crisis, there was this need to massively upgrade everybody's data capabilities and control and risk environments, because so much of it was very manual, and a lot of the data needed for the risk and control work was kind of glued together. So everybody went off and built data lakes, discovered that this was actually a really difficult challenge, and the lakes quickly became data swamps. So really, how do you unlock the value of those things? Those three use cases, and the many things underneath them, are the areas we're working with customers on. And like you said, it's really exciting, because the perception has changed.
The perception has changed: cloud is now seen as the future, and everybody has realized they have to figure out how to engage. And a lot of the partnership point that Yolande was making is absolutely true: they're looking for a strategic relationship versus a vendor relationship, and those are really exciting changes for us. >> So I just imagined a scenario where Dave, Zac, and Yolande are at the cloud pub, talking after hours over a few pints, and Dave says, "Wow, you know, 75%, 80% of IT is still on-premises." And Yolande looks at me and says, "On-premises? We're dealing with on-paper still." Such is the life of a financial services expert in this space. So Yolande, what would you consider the final frontier, or at least the next frontier, in cloud meets financial services? What are the challenges we have yet to overcome? I just mentioned the large amount of stuff that's still on-premises and the friction associated with legacy applications and infrastructure; that's one whole thing. But if there were one thing you could solve for the financial services industry in calendar year 2022, what would it be? And if I'm putting you on the spot, so be it. >> No, no, but I'm not going to hold it to just one thing. I think it's the shift to personalization, and how the power of AI and machine learning really starts to change things and get into far more predictive technologies. As I mentioned, customers want to be a segment of one. They don't want to be force-fit into the traditional banking ecosystems. There's a reason that customers have, on average, 14 different financial services apps on their phones, yet less than 3 to 5% of their screen time is actually spent on them. It's because something is missing in that environment.
There's a reason that you can go to any social media site and, in no time at all, pull up over 200 different communities of people trying to find financial services information, in layman's terms, that is relevant to them. So where we're really doubling down is on this personalization: being far more predictive, understanding where a customer is on their journey, and being able to meet them at that point, whether that's with the right offers, or whether that's recognizing, to Zac's point, that they've come in on one channel but now want to switch to another, and how do they not have to start again every time? These are some of the basic things we've really doubled down on: how do we start to solve in those areas? I also think there's a shift, especially in the risk space, where it's been very much what I would call a people-process-technology approach. Start to imagine what happens if you turn that around and think about how technology can help you be more predictive internally in your business and create better outcomes. So there are so many areas of opportunity, and what's really exciting is that we're not restricted; we're having conversations titled "the art of the possible," or "the future of," or "help us come in and reinvent." So I think you're going to see a lot of shift, probably in the next 12 to 18 months, I would say, in the capabilities and in the ability to serve the customer differently and meet them on their journey. >> Well, it sounds like the life of a cloud financial services person is much more pleasurable than back when it consisted primarily of running into brick walls constantly. This conversation five or 10 years ago would have been more like, "Please trust us, please. Just give us a shot." >> I think Zac and I both reminisce that we couldn't have joined at a more exciting time.
It's the locker or whatever you want to call it, but it is a completely different world, and the conversations are fun and refreshing, and you can really start to see how we have the ability to partner to change the landscape across all of the different financial services industries. And I think that's what keeps Zac and me going every day. >> And you alluded earlier to the idea that you used to be on the other side of the table, in other words, in the financial services industry, on the customer side. So you picked the right time to come across. >> Without a doubt, without a doubt. Yes. >> Well, with that, I want to thank you both for joining me today. This has been really fascinating. Financial services is something that touches all of us individually in our daily lives; it's something everyone can relate to at some level. And it also represents the tip of the spear, the cutting edge of cloud. So, very interesting. Thank you both again, and a pleasure to meet you both. Next time, hopefully, it will be in person, and we can compare the steps we've taken during the conference. With that, I'll sign off. This has been a fantastic Cube conversation, part of our continuing coverage of Google Cloud Next 2021. I'm Dave Nicholson. Thanks again for joining us. >> Thank you. (upbeat music)

Published Date : Nov 4 2021



Teresa Tung, Accenture | Accenture Tech Vision 2020


 

>> Announcer: From San Francisco, it's theCUBE, covering Accenture Tech Vision 2020, brought to you by Accenture. >> Hey, welcome back, everybody. Jeff Rick here with theCUBE. We're high atop San Francisco on a beautiful day at the Accenture San Francisco Innovation Hub, on the 33rd floor of the Salesforce Tower, for the Accenture Tech Vision 2020 reveal. It's where they come up with four or five themes to really look forward to, things a little bit more innovative, a little bit different from "cloud will be big" or "mobile will be big." And we're excited to have really one of the biggest brains here on the 33rd floor. She's Teresa Tung, the managing director of Accenture Labs. Teresa, great to see you. >> Nice to see you again. >> So I have to tease you, because the last time we were here, everyone was bragging about all the patents you've filed over the years, so congratulations on that. It's almost like a who's-who roadmap of what's happening in tech. I looked at a couple of them: you've got a ton of stuff around cloud, a ton of stuff around edge, but now you're getting excited about robots and AI. >> That's right. >> That's the new passion. >> That's the new passion. >> All right, so robots. One of the five trends was robots in the wild, so what does that mean, and why is it something people should be paying attention to? >> Well, robots have been around for decades, right? So if you think about manufacturing, you think about robots. But as your kid probably knows, robots are now programmable; kids can do it, so why not the enterprise? And now that robots are programmable, you can buy them and apply them. We're going to unlock a whole bunch of new use cases beyond just those really hardcore manufacturing ones, which are very strictly designed in a very structured environment, and move to things in unstructured and semi-structured environments. >> So does the definition of robot begin to change?
We were just talking, before we turned on the cameras, about, say, Tesla. Is a Tesla a robot in your definition, or does that not quite make the grade? >> I think it is, but we're thinking about robots as physical robots. Sometimes people think of robotic process automation and AI as robots, but here I'm really excited about the physical robots: the mobile units, the gantry units, the arms. This is going to allow us to close that sense-analyze-actuate loop. Now the robot can actually do something based off of the analytics. >> Right, so where will we see robots operating in the wild, versus, as we said, the classic manufacturing instance, where they're bolted down and do a step along the process? Where do you see some of the early adoption? Will we see them on the streets, or wherever else? >> Well, you probably do see them on the streets already. You see them for security use cases, or maybe mopping up a store so the employees can focus on the customers while the robot does the restocking. We see them in airports: if you pay attention to modern airports, you see robots bringing out the baggage and doing some of the baggage handling. So really, the opportunities for robots are jobs that are dull, dirty, or dangerous; these are things that humans don't want to, or shouldn't, be doing. >> Right, so what's the breakthrough tech that's enabling the robots to take this next step? >> Well, a lot of it is AI, right? The fact that you don't have to be a data scientist to apply these algorithms that do facial recognition, or that help the robot find its way around; it's the automation that's programmable. As I was saying, kids can program these robots, so they're not hard to do. And if a kid can do it, maybe somebody who knows oil and gas, insurance, or security can do the same thing.
>> Right, so a lot of the AI stuff that people are familiar with is things like photo recognition in Google Photos: I can search for my kids, I can search for a beach, I can search for things like that, and it'll come back. What are some of the types of AI and algorithms you're applying in this robot revolution? >> It's definitely things like image analytics, and it's for the routing. So let me give you an example of how easy it is to apply. Anybody who can play a video game can do this: you have a video-game-type controller, so when your kids are, again, playing games, they're actually training for these skilled jobs. Right, so you map a scene by using that controller to drive the robot around a factory, or around the airport, and the AI algorithm is smart enough to create the map. From that, we can actually use the robot, just out of the box, to navigate: say, going from Teresa, here, to then maybe going to get us a beer, right? >> Right, right. >> Maybe we should have that happen. (laughs) >> They're setting up right over there. >> They are setting up right there. >> That's right. So it's kind of like the revolution of drones, which some people might be more familiar with 'cause they're very visible. >> Yes. >> When you operate a DJI drone now, you don't actually fly the drone. You're not controlling pitch and yaw and those things. You're just telling it where you want it to go, and it's the actual AI under the covers that makes the adjustments to thrust and power and angle. Is that a good analogy? >> That is a great analogy. >> And so the work that we would do now is much more about how you string it together for the use case. If a robot were to come up to us now, what should it do, right? If we're here, do we want the robot to even interact with us to get us that beer? Robots don't usually speak; should speaking be an option for it?
Or should it maybe just gesture and have a menu? We would need to know how to interact with it. So a lot of that human-robot interface is some of the work that we're doing. That was kind of a silly example, but now imagine that we were surveying an oil pipeline, or that the robot was part of a manufacturing line. In those cases it's not getting us a beer, but it might need to do the same sort of thing: what tool does Teresa need to actually finish her job? >> Yeah, and then the other one is AI and me. And you just said that AI is getting less complicated to program, these machines are getting less complicated to program, but I think most people are still stuck in the realm of: we need a data scientist, there are not a lot of data scientists, and they've got to be super, super smart; you've got to have tons and tons of data, and all these types of factors. So how is it becoming AI and me, for Jeff, who's not necessarily a data scientist? I don't have a PhD in molecular physics. How's that going to happen? >> I think we need more of that democratization for the people who are not data scientists. Data scientists need the data, and a lot of the hard part is getting the data about how things should interact, right? In that example, we were asking how Teresa and Jeff should interact with the robot. The data scientist needs tons, right? Thousands, tens of thousands of instances of those data types to actually produce an insight. So what if, instead, when we think about AI and me, we think about, again, the human (not that data scientists aren't people too). >> Right, right. >> But let's think about democratizing to the rest of the humans by asking: how should I interact with the robot? A lot of the research that we do is around how you capture this expert knowledge, so we don't actually need tens of thousands of examples. We can pretty much prescribe that we don't want the robot to talk to us; we want it to give us the beer.
So why don't we just use things like that? We don't have to start with all the data. >> Right, right. So I'm curious, because there's a lot of conversation about how machines plus people are better than either alone, but it seems much more complicated to program a robot to do something with a person, as opposed to just giving it a simple task, which is probably what we've done more of historically: here, you go do that task. People are not involved in that task; the robot doesn't have to worry about the nuance, about reacting, about reading what I'm trying to communicate. So is it a lot harder to get these things to work with people, as opposed to working independently on a carved-off, special job? >> It may be harder, but that's where the value is. If we think about the AI of, let's say, yesterday, there are a lot of dashboards. With the purely data-driven approach, the pure AI operating on its own, it's going to look at the data and give us the insight, but at the end of the day the human needs to read, let's say, a static report and make a decision. Sometimes I look at these reports and have a hard time even understanding what I'm seeing, right? When they show me all these graphs, I'm supposed to be impressed. >> Right, right. >> I don't always know what to do with them. I use TurboTax as an example: when you're filing with TurboTax, there's a lot of AI behind the scenes, but it's already looked at my data. As I'm filling in my return, it's telling me maybe I should claim this deduction; it's asking me yes-or-no questions. That's how I imagine AI at scale working in the future, right? Not just for TurboTax, but for everything we do. So in the robot moment that we were describing, maybe it would see that you and I were talking, and it's not going to interrupt our conversation. But in a different context, if Teresa's by herself, maybe it would come up and say, hey, would you like a beer? >> Right, right.
>> I think that's the sort of context we need; like TurboTax, but more sexy, of course. >> Right, right. So I'm just curious, from your perspective as a technologist, again, looking at your patent history: a lot of stuff on cloud, a lot of stuff on edge, but we've always kind of operated in this new world, which is, if you had infinite compute, infinite storage, and infinite bandwidth, which is taking another... >> Yes. >> ...big giant step with 5G, what would you build and how could you build it? You've got to be thrilled as all three of those vectors are accelerating and giving you, basically, infinite power in terms of tooling to work with. >> It is. I mean, it feels like magic. I watch things like "Harry Potter," and you think about how they know these spells and can get things to happen. I think that's exactly where we are now: I get to do all these things that are magic. >> And are people ready for it? What's the biggest challenge on the people side, in terms of getting them to think about what they could do, as opposed to what they know today? 'Cause the future could be so different. >> That is the challenge, right? Because I think people, even with processes, think about the process as it exists today, where you're going to take AI and even robotics and just make that process step faster. >> Right. >> But with AI and automation, what if we jumped that whole step, right? As humans, if I can see everything, because I have all the data, and I have AI telling me these are the important pieces, wouldn't you jump towards the answer? A lot of the processes that we have today are meant to make sure we actually explore all the conditions that need to be explored, and that we look at all the data that needs to be looked at. So you're still going to look at those things, right? Regulations, rules: that still happens, but what if AI and automation check those for you, and all you're doing is actually checking the exceptions?
So it's going to really change the way we do work. >> Very cool. Well, Teresa, great to catch up. You're sitting right in the catbird seat, so it's exciting to see what your next patents will be, probably all about robotics, as you continue to move this train forward. So thanks for the time. >> Thank you. >> All right, she's Teresa, I'm Jeff. You're watching theCUBE. We're at the Accenture Tech Vision 2020 Release Party, on the 33rd floor of the Salesforce Tower. Thanks for watching. We'll see you next time. (upbeat music)

Published Date : Feb 12 2020



Paul Daugherty, Accenture | Accenture Tech Vision 2020


 

>> Announcer: From San Francisco, it's theCUBE, covering Accenture Tech Vision 2020. Brought to you by Accenture. >> Hey, welcome back, everybody. Jeff Frick here from theCUBE. We are high atop San Francisco at the Accenture Innovation Hub, 33rd floor of the Salesforce Tower. It's a beautiful night, but we're here for a very special occasion. It's the Tech Vision 2020 reveal, and we are happy to have the guy that runs the whole thing, he's going to reveal on stage a little bit later, but we got him in advance. He's Paul Daugherty, the chief technology and innovation officer for Accenture. Paul, great to see you as always. >> Great to see you, Jeff, too. It is a beautiful evening here, looking out over the Bay. >> If only we could turn the cameras around, but, sorry, we can't do that. >> Yeah. >> All right, so you've been at this now, the Tech Vision's been going on for 20 years, we heard earlier today. >> Yeah. >> You've been involved for almost a decade. How has this thing evolved over that time? >> Yeah, you know, we've been doing the Vision for 20 years, and what we've been trying to do is forecast what's happening with business and technology in a way that's actionable for executives. There's lots of trend forecasts and lists and things, but if you just see a list of cloud, or-- >> Jeff: Mobile's going to be really big. (laughs) >> AI, mobile, it doesn't really help you. We're trying to talk a little bit about the impact on business, impact to the world, and the decisions that you need to make. What's changed over that period of time is just the breadth of the impact that technology's having on people, so we focus a lot of our Visions on the impact on humans, on individuals, what's happening with technology, what the impact on business, we can talk about that a little bit more, but business is certainly not the back office of companies anymore. It's not just the back office and front office, either. 
Technology is instrumental in the fabric of how every part of the company operates: its strategy, its operations, its products and services, et cetera. And that's really the trajectory we've seen. As technology advances, with this accelerating, exponential increase in capability, the implications for executives and the stakes just get higher and higher. >> It's weird, there are so many layers to this. One of the things we've talked about a lot is trust, and you guys talk about trust a lot. But what strikes me as kind of a dichotomy is, on one hand, do I trust the companies, right? Do I trust Mark Zuckerberg with my data, to pick on him, he gets picked on all the time. That might be a question, but do I trust that Facebook is going to work? Absolutely. And so, our reliance on the technology, our confidence in the technology, our just baseline assumption that this stuff is going to work, is crazy high, up to and including people taking naps in their Teslas, (laughs) which are not autonomous vehicles! >> Not an advisable practice. >> Not autonomous vehicles! So it's this weird kind of split where it's definitely part of our lives, but it seems like the consciousness is coming up as kind of a second-order question. What does this really mean to me? What does this mean to my data? What are people actually doing with this stuff? And am I making a good value exchange? >> Well, that's it: we talk in the Vision this year about value versus values, and the question you're asking is getting right at that, the crux between value and values. You know, businesses have been using technology to drive value for a long time. That's how they've applied different types of technology to the enterprise, whether it be back in the mainframe days or ERP packages, cloud computing, et cetera, artificial intelligence. So value is what they were talking about in the Vision. How do you drive value using the technology? And one thing we found is there's a big gap.
Only 10% of organizations are really getting full value in the way they're applying technology, and those that are are getting twice the revenue growth of companies that aren't, so that's one big gap in value. And this values point is really getting to be important, which is, as technology can be deployed in ways that are more pervasive and impact our experience, they're tracking our health details-- >> Right, right. >> They know where we are, they know what we're doing, they're anticipating what we might do next. How does that impact the values? And how are the values of companies important in other ways? The values you have around sustainability and other things are increasingly important to new generations of consumers and consumers who are thinking in new ways. This value-versus-values tension is teeing up what we call a tech-clash, which isn't a tech-lash, just, again, seeing people reacting against tech companies, as you said earlier; it's a tech-clash, which is the values that consumer citizens and people want sometimes clashing with the value of the models that companies have been using to deliver their products and services. >> Right. Well, it seems like it's kind of the "What are you optimizing for?" game, and it seems like it was such an extreme optimization towards profitability and shareholder value, and less, necessarily, employees, less, necessarily, customers, and certainly less in terms of the social impact. So that definitely seems to be changing, but is it changing fast enough? Are people really grasping it? >> Well, I think the data's mixed on that. I think there's a lot of mixed data on "What do people really want?" So people say they want more privacy, they say they want access and control of their data, but they still use a lot of services in ways that may be inconsistent with the values that they talk about, and the values that come out in surveys. So, but that's changing.
So consumers are getting more educated about how they want their data to be used. But the other thing that's happening is that companies are realizing that it's really a battle for experience. Creating broader, better experiences for consumers is what the battleground is. A great experience, whether you're a travel company or a bank or a manufacturing company, or whatever you might be, creating the experience requires data, and to get the data from an individual or another company, it takes trust. So this virtuous circle of experience, data, and trust is something that companies are realizing is essential to their competitive advantage going forward. We say trust is the currency of the digital and post-digital world that we're moving into. >> Right, it's just how explicit is that trust, or how explicit does it need to be? And as you said, that's unclear. People can complain on one hand, but continue to use the services, so it seems to be a little bit kind of squishy. >> It's a sliding scale. It's really a value exchange, and you have to think about it. What's the value exchange and the value that an individual consumer places on their privacy versus free access to a service? That's what's being worked out right now. >> Right, so I'm going to get your take on another thing, which is exponential curves, and you've mentioned time and time again, the pace of change is only accelerating. Well, you've been saying that, probably, for (laughs) 20 years. (Paul laughs) So the curve's just getting steeper. How do you see that kind of playing out over time? Will we eventually catch up? Is it just presumed that this is kind of the new normal? Or how is this going to shake out? 'Cause people aren't great at exponential curves. It's just not really in our DNA. >> Yeah, but I think that's the world we're operating in now, and I think the exponential potential is going to continue.
We don't see a slowdown in the exponential growth rates of technology. So artificial intelligence, we're at the early days. Cloud computing, only about 20% enterprise adoption, a lot more to go. New adoptions are on the horizon, things like central bank digital currencies that we've done some research and done some work on recently. Quantum computing and quantum cryptography for networking, et cetera. So the pace of innovation is going to accelerate, and the challenge for organizations is rationalizing that and deciding how to incorporate that into their business, change their business, and change the way that they're leveraging their workforce and change the way that they're interacting with customers. And that's why what we're trying to address in the Vision is provide a little bit of that road map into how you digest it down. Now, there's also technology foundations of this. We talk about something at Accenture called living systems. Living systems is a new way of looking at the architecture of how you build your technology, because you don't have static systems anymore. Your systems have to be living and biological, adapting to the new technology, adapting to the business, adapting to new data over time. So this concept of living systems is going to be really important to organizations' success going forward. >> But the interesting thing is, one of the topics is "AI and Me," and traditional AI was very kind of purpose-built. For instance, Google Photos, can you find the cat? Can I find the kids at the beach? But you're talking about models where the AI can evolve and not necessarily be quite so data-centric around a specific application, but much more evolutionary and adaptable, based on how things change. >> Yeah, I think that's the future of AI that we see. There's been a lot of success in applying AI today, and a lot of it's been based on supervised learning, deep learning techniques that require massive amounts of data. 
Solving problems like machine vision requires massive amounts of data to do it right. And that'll continue. There'll continue to be problem sets that need large data. But what we're also seeing is a lot of innovation and AI techniques around small data. And we actually did some research recently, and we talk about this a little bit in our Vision, around the future being maybe smaller data sets and more structured data and intelligence around structured data, common-sense AI, and things that allow us to make breakthroughs in different ways. And that's, we used to look at "AI and Me," which is the trend around the workforce and how the workforce changes. It's those kinds of adaptations that we think are going to be really important. >> So another one is robotics, "Robots in the Wild." And you made an interesting comment-- >> Paul: Not "Robots Gone Wild," "Robots in the Wild," "Robots in the Wild." >> Well, maybe they'll go wild once they're in the wild. You never know. Once they get autonomy. Not a lot of autonomy, that's probably why. But it's kind of interesting, 'cause you talk about robots being designed to help people do a better job, as opposed to carving out a specific function for the robot to do without a person, and it seems like that's a much easier route to go, to set up a discrete thing that we can carve out and program the robot to do. Probably early days of manufacturing and doing spot welding in cars, et cetera. >> Right. >> So is it a lot harder to have the robot operate with its human partner, if you will, but are the benefits worth it? How do you kind of see that shaking out, versus, "Ah, I can carve out one more function"? >> Yeah, I think it's going to be a mix. I think there'll be, we see a lot of application of the robots paired with people in different ways, cobots in manufacturing being a great example, and something that's really taking off in manufacturing environments, but also, you have robots of different forms that serve human needs. 
There's a lot of interesting things going on in healthcare right now. How can you support autistic children or adults better using human-like robots and agents that can interact in different ways? A lot of interesting things around Alzheimer's and dealing with cognitive impairment and such using robots and robotics. So I think the future isn't, there's a lot of robots in the wild in the form of C-3POs and R2-D2s and those types of robots, and we'll see some of those. And those are being used widely in business today, even, in different contexts, but I think the interesting advance will be looking at robots that complement and augment and serve human needs more effectively. >> Right, right, and do people do a good enough job of getting some of the case studies? Like, you just walked through kind of the better use cases, the more humane use cases, the kind of cool medical breakthroughs, versus just continued optimization of getting me my Starbucks coupon when I walk by out front? (Paul laughs) >> Yeah, I'm not sure. >> Doesn't seem like I get the pub, like they just don't get the pub, I don't think. >> Yeah, yeah, yeah, maybe not. A little mixology is another (Jeff laughs) inflection that robots are getting good at. But I think that's what we're trying to do, is through the effort we do with the Vision, as well as our Tech for Good work and other things, is look at how we amplify and highlight some of the great work that is happening in those areas. >> So, you've been doing it for a decade. What struck you this year as being a little bit different, a little bit unexpected, not necessarily something you may have anticipated? >> I think the thing that is maybe a tipping point that I see in this Vision that I didn't anticipate is this idea that every company's really becoming a technology company. 
We said eight years ago, "Every business will be a digital business," and while that was ridiculed by some at the time, it really came true, and every business and every industry really is becoming digital or has already become digital. But I think we might've gotten it slightly wrong. Digital was kind of a step, but every company is deploying technology in the way they serve their customers, in the way they build their products and services. Every product and service is becoming technology-enabled. The ecosystem of technology providers is critical to companies in every industry. So every company's really becoming a technology company. Maybe every company needs to be as good as a digital-native company at developing products and services and operating them. So I think that this idea of every company becoming a technology company, every CEO becoming a technology CEO, a technology leader, is something that I think will differentiate companies going forward as well. >> Well, really, good work, you, Michael, and the team. It's fun to come here every year, because you guys do a little twist. Like you said, it's not "Cloud's going to be really big, mobile's going to be really big," but a little bit more thoughtful, a little bit more deep, a little bit longer kind of thought cycles on these trends. >> Yeah, and I think, if you read through the Vision, we're trying to present a complete story, too, so it's, as you know, "We, the post-digital people." But if you look at innovation, "The I in Experience" is about serving your customers differently. "The Dilemma of Smart Machines" and "Robots in the Wild" are about your new products and services and the post-digital environment powered by technology. "AI and Me" is about the new workforce, and "Innovation DNA" is about driving continuous innovation in your organization, your culture, as you develop your business into the future.
So it really is providing a complete narrative on what we think the future looks like for executives. >> Right, good, still more utopian than dystopian, I like it. >> More utopia than dystopia, but you got to steer around the roadblocks. (Jeff chuckles) >> All right, Paul, well, thanks again, and good luck tonight with the big presentation. >> Thanks, Jeff. >> All right, he's Paul, I'm Jeff. You're watching theCUBE. We're at the Accenture innovation reveal 2020, when we're going to know everything with the benefit of hindsight. Thanks for watching, (laughs) we'll see you next time. (upbeat pop music)

Published Date : Feb 12 2020



Around theCUBE, Unpacking AI Panel, Part 3 | CUBEConversation, October 2019


 

(upbeat music) >> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE conversation. >> Hello, and welcome to theCUBE Studios here in Palo Alto, California. We have a special Around theCUBE segment, Unpacking AI. This is a Get Smart Series. We have three great guests. Rajen Sheth, VP of AI and Product Management at Google, who knows AI development for Google Cloud well. Dr. Kate Darling, research specialist at the MIT Media Lab. And Professor Barry O'Sullivan, Director of the SFI Centre for Training AI at University College Cork in Ireland. Thanks for coming on, everyone. Let's get right to it. Ethics in AI: as AI becomes mainstream, it moves out of the labs and the computer science world to mainstream impact. The conversations are about ethics. And this is a huge conversation, but the first thing people want to know is, what is AI? What is the definition of AI? How should people look at AI? What is the definition? We'll start there, Rajen. >> So I think the way I would define AI is any way that you can make a computer intelligent, to be able to do tasks that typically people used to do. And what's interesting is that AI is something, of course, that's been around for a very long time in many different forms. Everything from expert systems in the '90s, all the way through to neural networks now. And things like machine learning, for example. People often get confused between AI and machine learning. I would think of it almost the way you would think of physics and calculus. Machine learning is the current best way to use AI in the industry. >> Kate, your definition of AI, do you have one? >> Well, I find it interesting that there's no really good universal definition. And also, I would agree with Rajen that right now, we're using kind of a narrow definition when we talk about AI, but the proper definition is probably much broader than that. So, probably something like a computer system that can make decisions independent of human input.
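Rajen's distinction between AI broadly and machine learning, behavior derived from labeled examples rather than hand-coded rules, can be made concrete with a minimal sketch. The nearest-centroid method, the function names, and the toy data below are invented for illustration; they are not from the panel:

```python
# Minimal sketch of "learning from examples": instead of writing rules
# by hand, the program derives its decision rule from labeled data.
# Here the "model" is just one mean point (centroid) per class.

def nearest_centroid_fit(points, labels):
    """Learn one centroid (mean point) per class from labeled 2-D examples."""
    sums = {}
    for (x, y), label in zip(points, labels):
        sx, sy, n = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, n + 1)
    return {lab: (sx / n, sy / n) for lab, (sx, sy, n) in sums.items()}

def nearest_centroid_predict(centroids, point):
    """Classify a new point by its closest learned centroid."""
    px, py = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - px) ** 2
                             + (centroids[lab][1] - py) ** 2)

# "Training" on invented toy data: two clusters, two outcome labels.
model = nearest_centroid_fit(
    [(0, 0), (1, 1), (8, 8), (9, 9)],
    ["low", "low", "high", "high"])
print(nearest_centroid_predict(model, (2, 1)))  # prints "low"
```

The point of the sketch is only that the decision rule comes from the example points, not from hand-written logic; a real machine-learning system does the same thing at far larger scale.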
>> Professor Barry, your take on the definition of AI, is there one? What's a good definition? >> Well, you know, so I think AI has been around for 70 years, and we still haven't agreed the definition for it, as Kate said. I think that's one of those very interesting things. I suppose it's really about making machines act and behave rationally in the world, ideally autonomously, so without human intervention. But I suppose these days, AI is really focused on achieving human level performance in very narrowly defined tasks, you know, so game playing, recommender systems, planning. So we do those in isolation. We don't tend to put them together to create the fabled artificial general intelligence. I think that's something that we don't tend to focus on at all, actually if that made sense. >> Okay the question is that AI is kind of elusive, it's changing, it's evolving. It's been around for awhile, as you guys pointed out, but now that it's on everyone's mind, we see it in the news every day from Facebook being a technology program with billions of people. AI was supposed to solve the problem there. We see new workloads being developed with cloud computing where AI is a critical software component of all this. But that's a geeky world. But the real world, as an ethical conversation, is not a lot of computer scientists have taken ethics classes. So who decides what's ethical with AI? Professor Barry, let's start with you. Where do we start with ethics? >> Yeah, sure, so one of the things I do is I'm the Vice-Chair of the European Commission's High-Level Expert Group on Artificial Intelligence, and this year we published the Ethics Guidelines for Trustworthy AI in Europe, which is all about, you know, setting an ethical standard for what AI is. 
You're right, computer scientists don't take ethical standards, but I suppose what we are faced with here is a technology that's so pervasive in our lives that we really do need to think carefully about the impact of that technology on, you know, human agency, and human well-being, on societal well-being. So I think it's right and proper that we're talking about ethics at this moment in time. But, of course, we do need to realize that ethics is not a panacea, right? So it's certainly something we need to talk about, but it's not going to solve, it's not going to rid us of all of the detrimental applications or usages of AI that we might see today. >> Kate, your take on ethics. Start all over, throw out everything, build on it, what do we do? >> Well, what we do is we get more interdisciplinary, right? I mean, because you asked, "Who decides?". Until now it has been the people building the technology who have had to make some calls on ethics. And it's not, you know, it's not necessarily the way of thinking that they are trained in, and so it makes a lot of sense to have projects like the one that Barry is involved in, where you bring together people from different areas of expert... >> I think we lost Kate there. Rajen, why don't you jump in, talk about-- >> (muffled speaking) you decide issues of responsibility for harm. We have to look at algorithmic bias. We have to look at supplementing versus replacing human labor, we have to look at privacy and data security. We have look at the things that I'm interested in like the ways that people anthropomorphize the technology and use it in a way that's perhaps different than intended. So, depending on what issue we're looking at, we need to draw from a variety of disciplines. And fortunately we're seeing more support for this within companies and within universities as well. >> Rajen, your take on this. 
>> So, I think one thing that's interesting is to step back and understand why this moment is so compelling and why it's so important for us to understand this right now. And the reason for that is that this is the moment where AI is starting to have an impact on the everyday person. Anytime I present, I put up a slide of the Mosaic browser from 1994 and my point is that, that's where AI is today. It's at the very beginning stages of how we can impact people, even though it's been around for 70 years. And what's interesting about ethics, is we have an opportunity to do that right from the beginning right now. I think that there's a lot that you can bring in from the way that we think about ethics overall. For example, in our company, can you all hear me? >> Yep. >> Mm-hmm. >> Okay, we've hired an ethicist within our company, from a university, to actually bring in the general principles of ethics and bring that into AI. But I also do think that things are different so for example, bias is an ethical problem. However, bias can be encoded and actually given more legitimacy when it could be encoded in an algorithm. So, those are things that we really need to watch out for where I think it is a little bit different and a little bit more interesting. >> This is a great point-- >> Let me just-- >> Oh, go ahead. >> Yeah, just one interesting thing to bear in mind, and I think Kate said this, and I just want to echo it, is that AI is becoming extremely multidisciplinary. And I think it's no longer a technical issue. Obviously there are massive technical challenges, but it's now become as much an opportunity for people in the social sciences, the humanities, ethics people. Legal people, I think need to understand AI. And in fact, I gave a talk recently at a legal symposium, and the idea of this on a parallel track of people who have legal expertise and AI expertise, I think that's a really fantastic opportunity that we need to bear in mind. 
So, unfortunately us nerds, we don't own AI anymore. You know, it's something we need to interact with the real world on a significant basis. >> You know, I want to ask a question, because you know, the algorithms, everyone talks about the algorithms and the bias and all that stuff. It's totally relevant, great points on interdisciplinary, but there's a human component here. As AI starts to infiltrate the culture and hit everyday life, the reaction to AI sometimes can be, "Whoa, my job's going to get automated away." So, I got to ask you guys, as we deal with AI, is that a reflection on how we deal with it to our own humanity? Because how we deal with AI from an ethics standpoint ultimately is a reflection on our own humanity. Your thoughts on this. Rajen, we'll start with you. >> I mean it is, oh, sorry, Rajen? >> So, I think it is. And I think that there are three big issues that I see that I think are reflective of ethics in general, but then also really are particular to AI. So, there's bias. And bias is an overall ethical issue that I think this is particular here. There's what you mentioned, future of work, you know, what does the workforce look like 10 years from now. And that changes quite a bit over time. If you look at the workforce now versus 30 years ago, it's quite a bit different. And AI will change that radically over the next 10 years. The other thing is what is good use of AI, and what's bad use of AI? And I think one thing we've discovered is that there's probably 10% of things that are clearly bad, and 10% of things that are clearly good, and 80% of things that are in that gray area in between where it's up to kind of your personal view. And that's the really really tough part about all this. >> Kate, you were going to weigh in. >> Well, I think that, I'm actually going to push back a little, not on Rajen, but on the question. 
Because I think that one of the fallacies that we are constantly engaging in is we are comparing artificial intelligence to human intelligence, and robots to people, and we're failing to acknowledge sufficiently that AI has a very different skillset than a person. So, I think it makes more sense to look at different analogies. For example, how have we used and integrated animals in the past to help us with work? And that lets us see that the answer to questions like, "Will AI disrupt the labor market?" "Will it change infrastructures and efficiencies?" The answer to that is yes. But will it be a one-to-one replacement of people? No. That said, I do think that AI is a really interesting mirror that we're holding up to ourselves to answer certain questions like, "What is our definition of fairness?" for example. We want algorithms to be fair. We want to program ethics into machines. And what it's really showing us is that we don't have good definitions of what these things are even though we thought we did. >> All right, Professor Barry, your thoughts? >> Yeah, I think there's many points one could make here. I suppose the first thing is that we should be seeing AI, not as a replacement technology, but as an assistive technology. It's here to help us in all sorts of ways to make us more productive, and to make us more accurate in how we carry out certain tasks. I think, absolutely the labor force will be transformed in the future, but there isn't going to be massive job loss. You know, the technology has always changed how we work and play and interact with each other. You know, look at the smart phone. The smart phone is 12 years old. We never imagined in 2007 that our world would be the way it is today. So technology transforms very subtly over long periods of time, and that's how it should be. I think we shouldn't fear AI. I think the thing we should fear most, in fact, is not Artificial Intelligence, but is actual stupidity. 
So I would encourage people not to think negatively about AI. It's very easy to talk negatively and think negatively about it because it is such an impactful and promising technology, but I think we need to keep it real a little bit, right? So there's a lot of hype around AI that we need to sort of see through and understand what's real and what's not. And that's really some of the challenges we have to face. And also, one of the big challenges we have is how do we educate the ordinary person on the street to understand what AI is, what it's capable of, when it can be trusted, and when it cannot be trusted. And ethics gets us some of the way there, but it doesn't get us all of the way there. We need good old-fashioned engineering to make people trust in the system. >> That was a great point. Ethics is kind of a reflection of that mirror, I love that. Kate, I want to get to that animal comment about domesticating technology, but I want to stay in this culture question for a minute. If you look at some of the major tech companies like Microsoft and others, the employees are revolting around their use of AI in certain use cases. It's a knee-jerk reaction around, "Oh my God, we're using AI, we're harming the world." So, we live in a culture now where it's becoming more mission-driven. There's a cultural impact, and to your point about not fearing AI, are people having a certain knee-jerk reaction to AI because you're seeing cultures inside tech companies and society taking an opinion on AI? "Oh my God, it's definitely bad, our company's doing it. We should not service those contracts. Or maybe I shouldn't buy that product because it's listening to me." So, there's a general fear. Does this impact the ethical conversation? How do you guys see this? Because this is an interplay that we see that's personal, it's a human reaction.
>> Yeah, so if I may start: absolutely, the ethics debate is a critical one, and people are certainly fearful. There is this polarization in debate about good AI and bad AI, but you know, AI is good technology. It's one of these dual-use technologies. It can be applied to bad situations in ways that we would prefer it wasn't. And it can also be a force for tremendous good. So, we need to think about the regulation of AI, what we want it to do from a legal point of view, who is responsible, where does liability lie? We also think about what our ethical framework is, and of course, there is no international agreement on what that is, no universal code of ethics, you know? So this is something that's very, very heavily contextualized. But I think we generally agree that we want to promote human well-being. We want to have a prosperous society. We want to protect the well-being of society. We don't want technology to impact society in any negative way. It's actually very funny. If you look back about 25-30 years ago, there was a technology where people were concerned that privacy was going to be a thing of the past. That computer systems were going to be tremendously biased because data was going to be incomplete and not representative. And there was a huge concern that good old-fashioned databases were going to be the technology that would destroy the fabric of society. That didn't happen. And I don't think we're going to have AI do that either. >> Kate? >> Yeah, we've seen a lot of technology panic, that may or may not be warranted, in the past. I think that AI and robotics suffer from a specific problem: people are influenced by science fiction and pop culture when they're thinking about the technology. And I feel like that can cause people to be worried about some things that perhaps aren't the things we should be worrying about currently.
Like robots and jobs, or artificial super-intelligence taking over and killing us all, maybe aren't the main concerns we should have right now. But algorithmic bias, for example, is a real thing, right? We see systems using data sets that disadvantage women, or people of color, and yet the use of AI is seen as neutral even though it's entrenching existing biases. Or privacy and data security, right? You have technologies that are collecting massive amounts of data because the way learning works is you use lots of data. And so there's a lot of incentive to collect data. As a consumer, there's not a lot of incentive for me to want to curb that, because I want the device to listen to me and to be able to perform better. And so the question is, who is thinking about consumer protection in this space if all the incentives are toward collecting and using as much data as possible? And so I do think there is a certain amount of concern that is warranted, and where there are problems, I endorse people revolting, right? But I do think that we are sometimes a little bit skewed in our, you know, understanding of where we currently are at with the technology, and what the actual problems are right now. >> Rajen, I want your thoughts on this. Education is key. As you guys were talking about, there's some things to pay attention to. How do you educate people about how to shape AI for good, and at the same time calm people's fears, so they aren't revolting around misinformation or bad data around what could be? >> Well I think that the key thing here is to organize kind of how you evaluate this. And back to that one thing I was saying a little bit earlier, it's very tough to judge kind of what is good and what is bad. It's really up to personal perception. But then the more that you organize how to evaluate this, and then figure out ways to govern this, the easier it gets to evaluate what's in or out.
So one thing that we did was that we created a set of AI principles, and we kind of codified what we think AI should do, and then we codified areas that we would not go into as a company. The thing is, it's very high level. It's kind of like the constitution, and when you have something like the constitution, you have to get down to actual laws of what you would and wouldn't do. It's very hard to bucket and say, these are good use cases, these are bad use cases. But what we now have is a process around how do we actually take things that are coming in and figure out how to evaluate them? The last thing I'll mention is that I think it's very important to have many, many different viewpoints on it. Have viewpoints of people that are taking it from a business perspective, have people that are taking it from kind of a research and an ethics perspective, and all evaluate that together. And that's really what we've tried to create to be able to evaluate things as they come up. >> Well, I love that constitution angle. We'll have that as our final question in a minute, whether we do a reset or not, but I want to get to that point that Kate mentioned. Kate, you're doing research around robotics. And you've seen robotics surge in popularity; high schools have varsity teams now. You're seeing robotics with software advances and technology advances really become kind of a playful illustration of computer technology and software where AI is playing a role, and you're doing a lot of work there. But as intelligence comes into, say, robotics, or software, or AI, there's a human reaction to all of this. So there's a psychological interaction with both AI and robotics. Can you guys share your thoughts on the humanization interaction with technology, as people stare at their phones today? That could be relationships in the future. And I think robotics might be a signal.
You mentioned domesticating animals as an example back in the early days of when we were (laughing) as a society, that happened. Now we all have pets. Are we going to have robots as pets? Are we going to have AI pets? >> Yes, we are. (laughing) >> Is this kind of the human relationship? Okay, go ahead, share your thoughts. >> So, okay, the thing that I love about robots, and you know, in some applications of AI as well, is that people will treat these technologies like they're alive. Even though they know that they're just machines. And part of that is, again, the influence of science fiction and pop culture, that kind of primes us to do this. Part of it is the novelty of the technology moving into shared spaces, but then there's this actual biological element to it, where we have this inherent tendency to anthropomorphize, project human-like traits, behaviors, qualities, onto non-humans. And robots lend themselves really well to that because our brains are constantly scanning our environments and trying to separate things into objects and agents. And robots move like agents. We are evolutionarily hardwired to project intent onto the autonomous movement in our physical space. And this is why I love robots in particular as an AI use case, because people end up treating robots totally differently. Like people will name their Roomba vacuum cleaner and feel bad for it when it gets stuck, which they would never do with their normal vacuum cleaner, right? So, this anthropomorphization, I think, makes a huge difference when you're trying to integrate the technology, because it can have negative effects. It can lead to inefficiencies or even dangerous situations. For example, if you're using robots in the military as tools, and people are treating them like pets instead of devices. But then there are also some really fantastic use cases in health and education that rely specifically on this socialization of the robot.
You can use a robot as a replacement for animal therapy where you can't use real animals. We're seeing great results in therapy with autistic children, engaging them in ways that we haven't seen previously. So there are a lot of really cool ways that we can make this work for us as well. >> Barry, your thoughts, have you ever thought that we'd be adopting AI as pets some day? >> Oh yeah, absolutely. Like Kate, I'm very excited about all of this too, and I agree with everything Kate has said. Of course, you know, coming back to the remark you made at the beginning about people putting their faces in their smartphones all the time, you know, we can't crowdsource our sense of dignity, and we can't have social media as the currency for how we value our lives or how we compare ourselves with others. So, you know, we do have to be careful here. Like, one of the really nice examples of an AI system that was given some significant personality was, quite recently, when Tuomas Sandholm and others at Carnegie Mellon produced the Libratus poker-playing bot, and this AI system was playing against these top-class Texas hold 'em players. And all of these Texas hold 'em players were attributing a level of cunning and sophistication and mischief to this AI system that clearly it didn't have, because it was essentially trying to just behave rationally. But we do like to project human characteristics onto AI systems. And I think what would be very, very nice, and something we need to be very, very careful of, is that when AI systems are around us, and particularly robots, you know, we do need to treat them with respect. And what I mean is, we do need to make sure that we treat those things that are serving society in as positive and nice a way as possible. You know, I do judge people on how they interact with, you know, sort of the least advantaged people in society.
And you know, by golly, I will judge you on how you interact with a robot. >> Rajen-- >> We've actually done some research on that, where-- >> Oh, really-- >> We've shown that if you're low in empathy, you're more willing to hit a robot, especially if it has a name. (panel laughing) >> I love all my equipment here, >> Oh, yeah? >> I got to tell ya, it's all beautiful. Rajen, computer science, and now AI's having this kind of humanization impact, this is an interesting shift. I mean, this is not what we studied in computer science. We were writin' code. We were going to automate things. Now there are notions of not just math but cognition, human relations. Your thoughts on this? >> Yeah, you know what's interesting is that I think ultimately it boils down to the user experience. And I think there is this part of this which is around humanization, but then ultimately it boils down to what are you trying to do? And how well are you doing it with this technology? And I think that example around the Roomba is very interesting, where I think people kind of feel like this is more, almost like a person. But, also I think we should focus as well on what the technology is doing, and what impact it's having. My best example of this is Google Photos. And so, my whole family uses Google Photos, and they don't know that underlying it is some of the most powerful AI in the world. All they know is that they can find pictures of our kids and their grandkids on the beach anytime that they want. And so ultimately, I think it boils down to what is the AI doing for the people? And how is it? >> Yeah, expectations become the new user experience. I love that. Okay, guys, final question, and also humanization, we talked about the robotics, but also the ethics here. Ethics reminds me of the old security debate, and security in the old days. Do you increase the security, or do you throw it all away and start over?
So, with this idea of ethics being a mirror, how do you figure it out in today's modern society? Do we throw it all away and do a do-over, can we recast this? Can we start over? Do we augment? What's the approach that you guys see that we might need to go through right now to really, not hold back AI, but let it continue to grow and accelerate, educate people, bring value and user experience to the table? What is the path? We'll start with Barry, and then Kate, and then Rajen. >> Yeah, I can kick off. I think ethics gets us some of the way there, right? So, obviously we need to have a set of principles that we sign up to and agree upon. And there are literally hundreds of documents on AI ethics. I think in Europe, for example, there are 128 different documents around AI ethics, I mean policy documents. But, you know, we have to think about how are we actually going to make this happen in the real world? And I think, you know, if you take the aviation industry, we trust in airplanes because we understand that they're built to the highest standards, that they're tested rigorously, and that the organizations that are building these things are held accountable when things go wrong. And I think we need to do something similar in AI. We need good, strong engineering, and you know, ethics is fantastic, and I'm a strong believer in ethical codes, but we do need to make it practical. And we do need to figure out a way of having the public trust in AI systems, and that comes back to education. So, I think we need the general public, and indeed ourselves, to be a little more cynical and questioning when we hear stories in the media about AI, because a lot of it is hyped. You know, and that's because researchers want to describe their research in an exciting way, but also, newspaper people and media people want to have a sticky subject. But I think we do need to have a society that can look at these technologies and really critique them and understand what's been said.
And I think a healthy dose of cynicism is not going to do us any harm. >> So, modernization, do you change the ethical definition? Kate, what are your thoughts on all this? >> Well, I love that Barry brought up the aviation industry because I think that right now this is kind of an industry in its infancy, but if we look at how other industries have evolved to deal with some thorny ethical issues, like for example, medicine. You know, medicine had to develop a whole code of ethics, and develop a bunch of standards. If you look at aviation or other transportation industries, they've had to deal with a lot of things like public perception of what the technology can and can't do, and so you look at the growing pains that those industries have gone through, and then you add in some modern insight about interdisciplinarity, about diversity, and tech development generally. Getting people together who have different experiences, different life experiences, when you're developing the technology, and I think we don't have to rebuild the wheel here. >> Yep. >> Rajen, your thoughts on the path forward, throw it all away, rebuild, what do we do? >> Yeah, so I think this is a really interesting one because of all the technologies I've worked in within my career, everything from the internet, to mobile, to virtualization, this is probably the most powerful potential for human good out there. The potential of what AI can do is greater than almost anything else that's out there. However, I do think that people's perception of what it's going to do is a little bit skewed. So when people think of AI, they think of self-driving cars and robots and things like that. And that's not the reality of what AI is today. And so I think two things are important. One is to actually look at the reality of what AI is doing today and where it impacts people's lives. Like, how does it personalize customer interactions? How does it make things more efficient?
How do we spot things that we never were able to spot before? And start there, and then apply the ethics that we've already known for years and years and years, but adapt it in a way that actually makes sense for this. >> Okay, that's it for Around theCUBE. Looks like we've tallied up: Professor Barry with 11 in third place, Kate in second place with 13, and Rajen with 16 points, you're the winner, so you get the last word on the segment here. Share your final thoughts on this panel. >> Well, I think it's really important that we're having this conversation right now. I think back to 1994 when the internet first started. People did not have that conversation nearly as much at that point, and the internet has done some amazing things, and there have been some bad side effects. I think with this, if we have this conversation now, we have the opportunity to shape this technology in a very, very positive way as we go forward. >> Thank you so much, and thanks everyone for taking the time to come in. All the way from Cork, Ireland, Professor Barry O'Sullivan. Dr. Kate Darling, doing some amazing research at MIT on robotics and human psychology, with a new book coming out. Kate, thanks for coming out. And Rajen, thanks for winning and sharing your thoughts. Thanks everyone for coming. This is Around theCUBE here, Unpacking AI segment around ethics and human interaction and societal impact. I'm John Furrier with theCUBE. Thanks for watching. (upbeat music)

Published Date: Nov 6, 2019



Ron Bodkin, Google | Big Data SV 2018


 

>> Announcer: Live from San Jose, it's theCUBE. Presenting Big Data, Silicon Valley, brought to you by Silicon Angle Media and its ecosystem partners. >> Welcome back to theCUBE's continuing coverage of our event Big Data SV. I'm Lisa Martin, joined by Dave Vellante, and we've been here all day having some great conversations, really looking at big data, cloud, AI, and machine learning from many different levels. We're happy to welcome back to theCUBE one of our distinguished alumni, Ron Bodkin, who's now the Technical Director of Applied AI at Google. Hey Ron, welcome back. >> It's nice to be back Lisa, thank you. >> Yeah, thanks for coming by. >> Thanks Dave. >> So you have been a friend of theCUBE for a long time, you've been in this industry and this space for a long time. Let's take a little bit of a walk down memory lane, your perspectives on Big Data Hadoop and the evolution that you've seen. >> Sure, you know, so I first got involved in big data back in 2007. I was VP of Engineering at a startup called Quantcast in the online advertising space. You know, we were using early versions of Hadoop to crunch through petabytes of data and build data science models, and I saw a huge opportunity to bring those kinds of capabilities to the enterprise. You know, we were working with early Hadoop vendors. Actually, at the time, there was really only one commercial vendor of Hadoop, Cloudera, and we were working with them and then, you know, others as they came online, right? So back then we had to spend a lot of time explaining to enterprises what this concept of big data was, why Hadoop as open source could get interesting, what it meant to build a data lake. And you know, we always said look, there's going to be a ton of value around data science, right? Putting your big data together and collecting complete information and then being able to build data science models to act in your business.
So you know, the exciting thing for me is you know, now we're at a stage where many companies have put those assets together. You've got access to amazing cloud scale resources like we have at Google to not only work with great information, but to start to really act on it because you know, kind of in parallel with that evolution of big data was the evolution of the algorithms as well as the access to large amounts of digital data that's propelled, you know, a lot of innovation in AI through this new trend of deep learning that we're invested heavily in. >> I mean the epiphany of Hadoop when I first heard about it was bringing, you know, five megabytes of code to a petabyte of data as sort of the bromide. But you know, the narrative in the press has really been well, they haven't really lived up to expectations, the ROI has been largely a reduction on investment and so is that fair? I mean you've worked with practitioners, you know, all your big data career and you've seen a lot of companies transform. Obviously Google as a big data company is probably the best example of one. Do you think that's a fair narrative or did the big data hype fail to live up to expectations? >> I think there's a couple of things going on here. One is, you know, that the capabilities in big data have varied widely, right? So if you look at the way, for example, at Google we operate with big data tools that we have, they're extremely productive, work at massive scale, you know, with large numbers of users being able to slice and dice and get deep analysis of data. It's a great setup for doing machine learning, right? That's why we have things like BigQuery available in the cloud. You know, I'd say that what happened in the open source Hadoop world was it ended up settling in on more of the subset of use cases around how do we make it easy to store large amounts of data inexpensively, how do we offload ETL, how do we make it possible for data scientists to get access to raw data? 
I don't think that's as functional as what people really had imagined coming out of big data. But it's still served a useful function complementing what companies were already doing at their warehouse, right? So I'd say those efforts to collect big data and to make it available have really set the stage for analytic value, both through better building of analytic databases but especially through machine learning. >> And there's been some clear successes. I mean, one of them obviously is advertising, Google's had a huge success there. But much more, I mean fraud detection, you're starting to see health care really glom on. Financial services have been big on this, you know, maybe largely for marketing reasons but also risk, you know, for sure, so there's been some clear successes. I've likened it to, you know, before you get to paint, you've got to scrape and you've got to put in caulking and so forth. And now we're in a position where you've got a corpus of data in your organization and you can really start to apply things like machine learning and artificial intelligence. Your thoughts on that premise? >> Yeah, I definitely think there's a lot of truth to that. I think some of it was, there was a hope, a lot of people thought that big data would be magic, that you could just dump a bunch of raw data without any effort and out would come all the answers. And that was never a realistic hope. You have to at least have some level of structure in the data, you have to put some effort into curating the data so you have valid results, right? So it's created a set of tools to allow scaling. You know, we now take for granted the ability to have elastic data, to have it scale and have it in the cloud in a way that just wasn't the norm even 10 years ago. People were thinking that very brittle, limited amounts of data in silos was the norm, so the conversation's changed so much, we almost forget how much things have evolved.
>> Speaking of evolution, tell us a little bit more about your role with applied AI at Google. What was the genesis of it, and how are you working with customers to help them leverage this next phase of big data and machine learning so that they really can, well, monetize content and data and actually identify new revenue streams? >> Absolutely, so you know, at Google we really started the journey to become an AI-first company early this decade, a little over five years ago. We invested in the Google X team, you know, Jeff Dean was one of the leaders there, sort of to invest in, hey, these deep learning algorithms are having a big impact, right? Fei-Fei Li, who's now the Chief Scientist at Google Cloud, was at Stanford doing research around how can we teach a computer to see and catalog a lot of digital data for visual purposes. So we combined that with advances in computing, first GPUs, and then ultimately we invested in specialized hardware that made it work well for us. The massive-scale TPUs, right? That combination really started to unlock all kinds of problems that we could solve with machine learning in a way that we couldn't before. So it's now become central to all kinds of products at Google, whether it be the biggest improvements we've had in search and advertising coming from these deep learning models, but also breakthrough products like Google Photos, where you can now search and find photos based on keywords from intelligence in a machine that looks at what's in the photo, right? So we've invested and made that a central part of the business, and so what we're seeing is, as we build up the cloud business, there's a tremendous interest in how can we take Google's capabilities, right, our investments in open source deep learning frameworks, TensorFlow, our investments in hardware, TPU, our scalable infrastructure for doing machine learning, right? We're able to serve a billion inferences a second, right?
So we've got this massive capability we've built for our own products that we're now making available for customers, and the customers are saying, "How do I tap into that? "How can I work with Google, how can I work with "the products, how can I work with the capabilities?" So the applied AI team is really about how do we help customers drive these 10x opportunities with machine learning, partnering with Google? And the reason it's a 10x opportunity is you've had a big set of improvements where models that weren't useful commercially until recently are now useful and can be applied. So you can do things like translating languages automatically, like recognizing speech, like having automated dialog for chat bots, or, you know, all kinds of visual APIs like our AutoML API, where engineers can feed it images and it will train a model specialized to their need to recognize what you're looking for, right? So those types of advances mean that all kinds of business processes can be reconceived and dramatically improved with automation, taking a lot of human drudgery out. So customers are like "That's really "exciting and at Google you're doing that. "How do we get that, right? "We don't know how to go there." >> Well natural language processing has been amazing in the last couple of years. Not surprising that Google is so successful there. I was kind of blown away that Amazon with Alexa sort of blew past Siri, right? And so thinking about new ways in which we're going to interact with our devices, it's clearly coming, so it leads me into my question on innovation. What's driven, in your view, the innovation in the last decade, and what's going to drive innovation in the next 10 years? >> I think innovation is very much a function of having the right kind of culture and mindset, right? So I mean for us at Google, a big part of it is what we call 10x thinking, which is really focusing on how do you think about the big problem and work on something that could have a big impact?
I also think that you can't really predict what's going to work, but there are a lot of interesting ideas and many of them won't pan out, right? But the more you have a culture of failing fast and trying things and at least being open to the data and giving it a shot, right, and saying "Is this crazy thing going to work?" That's why we have things like Google X where we invest in moonshots, but that's where, you know, throughout the business, we say hey, you can have a 20% project, you can go work on something, and many of them don't work or have a small impact, but then you get things like Gmail getting created out of a 20% project. It's a cultural thing that you foster, and you encourage people to try things and be open to the possibility that something big is on your hands, right? >> On the cultural front, it sounds like in some cases, depending on the enterprise, it's a shift, in some cases it's a cultural journey. The Google on Google story sounds like it could be a blueprint, of course, how do we do this? You've done this, but how much is it a blueprint on the technology, capitalizing on deep learning capabilities, as well as a blueprint for helping organizations on this cultural journey to actually be able to benefit and profit from this? >> Yeah, I mean that's absolutely right Lisa, that these are both really important aspects, that there's a big part of the cultural journey. In order to be an AI-first company, to really reconceive your business around what can happen with machine learning, it's important to be a digital company, right? To have a mindset of making quick decisions and thinking about how data impacts your business and activating in real time. So there's a cultural journey that companies are going through. How do we enable our knowledge workers to do this kind of work, how do we think about our products in a new way, how do we reconceive, think about automation?
There's a lot of these aspects that are cultural as well, but I think a big part of it is, you know, it's easy for companies to get overwhelmed, but it's like, you have to pick somewhere, right? What's something you can do, what's a true north, what's an area where you can start to invest and get impact and start the journey, right? Start to do pilots, start to get something going. Something I've found in my career has been that when companies get started with the right first project and get some success, they can build on that success and invest more, right? Whereas you know, if you're not experimenting and trying things and moving, you're never going to get there. >> Momentum is key, well Ron, thank you so much for taking some time to stop by theCUBE. I wish we had more time to chat but we appreciate your time. >> No, it's great to be here again. >> See ya. >> We want to thank you for watching theCUBE live from our event, Big Data SV in San Jose. I'm Lisa Martin with Dave Vellante, stick around, we'll be back with our wrap shortly. (relaxed electronic jingle)

Published Date : Mar 8 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity                   Category        Confidence
Dave Vellante            PERSON          0.99+
Ron Bodkin               PERSON          0.99+
Lisa Martin              PERSON          0.99+
2007                     DATE            0.99+
Jeff Dean                PERSON          0.99+
Ron                      PERSON          0.99+
Dave                     PERSON          0.99+
Google                   ORGANIZATION    0.99+
Lisa                     PERSON          0.99+
Silicon Angle Media      ORGANIZATION    0.99+
San Jose                 LOCATION        0.99+
Fei-Fei Li               PERSON          0.99+
Amazon                   ORGANIZATION    0.99+
20%                      QUANTITY        0.99+
one                      QUANTITY        0.99+
Hadoop                   TITLE           0.99+
five megabytes           QUANTITY        0.99+
Siri                     TITLE           0.99+
theCUBE                  ORGANIZATION    0.99+
QuantCast                ORGANIZATION    0.99+
10x                      QUANTITY        0.99+
both                     QUANTITY        0.99+
Google X                 ORGANIZATION    0.98+
first project            QUANTITY        0.97+
Silicon Valley           LOCATION        0.97+
Gmail                    TITLE           0.97+
Big Data                 ORGANIZATION    0.97+
first                    QUANTITY        0.96+
One                      QUANTITY        0.96+
10 years ago             DATE            0.95+
BigQuery                 TITLE           0.94+
early this decade        DATE            0.94+
last couple of years     DATE            0.94+
Big Data SV              EVENT           0.94+
Alexa                    TITLE           0.94+
Big Data SV 2018         EVENT           0.93+
Cloudera                 ORGANIZATION    0.91+
last decade              DATE            0.89+
Google Cloud             ORGANIZATION    0.87+
over five years ago      DATE            0.85+
first company            QUANTITY        0.82+
10x opportunities        QUANTITY        0.82+
one commercial           QUANTITY        0.81+
next 10 years            DATE            0.8+
first GPUs               QUANTITY        0.78+
Big Data Hadoop          TITLE           0.68+
AutoML                   TITLE           0.68+
Google X                 TITLE           0.63+
Applied                  ORGANIZATION    0.62+
a second                 QUANTITY        0.61+
petabytes                QUANTITY        0.57+
petabyte                 QUANTITY        0.56+
billion inferences       QUANTITY        0.54+
TensorFlow               TITLE           0.53+
Stanford                 ORGANIZATION    0.51+
Google Photos            ORGANIZATION    0.42+